Cannot access kubernetes service with haproxy-ingress

I have followed the guide at Installation | Community Installation Guide | Install outside of Kubernetes | Enable external mode for an on-premises Kubernetes installation | HAProxy Enterprise Kubernetes Ingress Controller 1.9

I am able to access my services when they are of type LoadBalancer. When I switch them to ClusterIP to try to use the BIRD peering, I cannot.

I am using --tcp-configmap to create TCP port bindings on my haproxy server. The echo-service test app should be reachable at http://107.161.173.97:8082/ , but it returns an empty response.

The stats page at http://107.161.173.97:1024/ shows all 3 backend servers as down.

How do I begin to troubleshoot the connectivity issues between my haproxy server and the BIRD/Calico nodes?

I've been stuck on this for days and really need it to work.
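So far the layer-by-layer checks I can think of from the haproxy server look like this (a sketch; the pod IP is one of the endpoint addresses from the generated haproxy config further down, substitute your own):

```shell
#!/bin/sh
# Layer-by-layer connectivity checks from the haproxy server.
# POD_IP is an example endpoint; substitute one of your own pod IPs.
POD_IP=172.16.86.194

# 1. Does a BIRD-learned route cover the pod IP?
ip route get "$POD_IP" || echo "no route to pod"

# 2. Does ICMP reach the node hosting the pod? (watch with tcpdump there)
ping -c 2 -W 1 "$POD_IP" || echo "ICMP failed"

# 3. Does TCP to the pod port actually work? Working routing alone
#    is not enough; the TCP handshake can still be dropped.
curl -sS --max-time 5 "http://$POD_IP:8080/" || echo "TCP to pod failed"
```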

app-service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: echo-service
spec:
  selector:
    run: echo
  ports:
  - name: http
    protocol: TCP
    port: 8080
    targetPort: 8080
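(For completeness, the selector and endpoints can be sanity-checked with the commands below; if `get endpoints` shows none, the Service selector does not match the pods:)

```
root@vps:~# kubectl get pods -l run=echo -o wide
root@vps:~# kubectl get endpoints echo-service
```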

app.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: echo
  name: echo
spec:
  replicas: 3
  selector:
    matchLabels:
      run: echo
  template:
    metadata:
      labels:
        run: echo
    spec:
      containers:
      - name: echo
        image: jmalloc/echo-server
        ports:
        - containerPort: 8080
        readinessProbe:
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5
          successThreshold: 1

tcp-configmap.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-configmap
  namespace: default
data:
  "8082":                          # Port the TCP frontend listens on (ConfigMap data keys must be strings)
    "default/echo-service:8080"    # namespace/service-name:service-port to forward to
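If the mapping is picked up, the controller should render a TCP frontend in the generated haproxy config roughly along these lines (a sketch; the exact frontend/backend names vary by controller version):

```
frontend tcp-8082
  mode tcp
  bind :8082
  default_backend default-echo-service-8080
```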

ingress.yaml:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-ingress
  annotations:
    haproxy.org/path-rewrite: "/"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: echo-service
            port:
              number: 8080

Is the Ingress object still required if you're using the tcp-configmap flag? I read on the GitHub repo for haproxy-ingress that, according to the repo owner, it is not needed when using the TCP ConfigMap.

master node:

root@vps:~# calicoctl node status
Calico process is running.

IPv4 BGP status
+----------------+-------------------+-------+----------+-------------+
|  PEER ADDRESS  |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+----------------+-------------------+-------+----------+-------------+
| 107.161.173.97 | global            | up    | 21:12:10 | Established |
| 107.161.173.98 | node-to-node mesh | up    | 20:22:50 | Established |
| 107.161.173.84 | node-to-node mesh | up    | 20:22:51 | Established |
+----------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.

root@vps:~# 

haproxy server:

root@proxy:~# birdc show route
BIRD 1.6.8 ready.
172.16.221.64/26   via 107.161.173.98 on ens3 [bgp2 16:12:10] * (100) [i]
172.16.86.192/26   via 107.161.173.84 on ens3 [bgp3 16:12:10] * (100) [i]
172.16.39.0/26     via 107.161.173.73 on ens3 [bgp1 16:12:10] * (100) [i]
root@proxy:~# birdc show protocol
BIRD 1.6.8 ready.
name     proto    table    state  since       info
bgp1     BGP      master   up     16:12:10    Established   
bgp2     BGP      master   up     16:12:10    Established   
bgp3     BGP      master   up     16:12:10    Established   
kernel1  Kernel   master   up     16:12:08    
device1  Device   master   up     16:12:08    
root@proxy:~# 
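For context, the BIRD side of that peering on the haproxy server is configured roughly like this (BIRD 1.6 syntax; the AS number here is a placeholder, and there is one `protocol bgp` block per Calico node):

```
# /etc/bird/bird.conf (sketch)
protocol bgp bgp1 {
  local as 64512;                     # placeholder ASN
  neighbor 107.161.173.73 as 64512;   # one of the calico nodes
  import all;                         # accept pod CIDR routes from calico
  export none;
}
```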

I also tried a more basic ingress.yaml:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-ingress
spec:
  defaultBackend:
    service:
      name: echo-service
      port:
        number: 8080

ip route table on haproxy server:

root@proxy:/tmp/haproxy-ingress/etc# ip route
default via 107.161.173.1 dev ens3 proto static 
107.161.173.0/24 dev ens3 proto kernel scope link src 107.161.173.97 
107.161.173.1 dev ens3 proto static scope link 
172.16.39.0/26 via 107.161.173.73 dev ens3 proto bird 
172.16.86.192/26 via 107.161.173.84 dev ens3 proto bird 
172.16.221.64/26 via 107.161.173.98 dev ens3 proto bird 
root@proxy:/tmp/haproxy-ingress/etc# 

found the config for haproxy:

  server SRV_1 172.16.86.194:8080 enabled
  server SRV_2 172.16.221.67:8080 enabled
  server SRV_3 172.16.221.66:8080 enabled

I tried to curl these addresses from inside the haproxy server and it failed. The routing table looks correct, but traffic does not actually seem to reach the pods.

UPDATE:

I was able to do some route testing. Whenever I ping any of the pod IPs from my haproxy server, traffic does get routed to the proper Calico node/pod; I could see the ICMP requests arrive on each node with tcpdump. (Ping doesn't use ports, so this only verifies routing, not TCP on :8080.)

It would seem that whenever the haproxy server opens a TCP connection into the Calico network, the packets get dropped at the Calico network interface.
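To narrow down where the drop happens, tcpdump on a Calico node can compare what arrives on the host interface versus what reaches the pod's veth (the interface name `ens3` here matches the route output above; everything else is an assumption):

```
root@node:~# tcpdump -ni ens3 'host 107.161.173.97 and (icmp or tcp port 8080)'
root@node:~# tcpdump -ni any 'tcp port 8080'
```

If the TCP SYNs show up on ens3 but never on the pod's cali* interface, the packets are being dropped between the host interface and the Calico dataplane.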

Issue resolved: my server provider was blocking what it considered spoofed requests. I had them remove the rule and it is now working.