Not balancing in round-robin fashion

Hi Folks, hope you all are doing well.

I’ve set up an HAProxy instance to balance two “identical” servers.

The setup is working, but when I run stress tests against sup07 (the balancer), it looks like sup05 receives roughly twice as many requests as sup06.

The setup is:

[root@sup07 ~]# cat /etc/haproxy/haproxy.cfg
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000

frontend Calls_Pool_Diagnostics
    bind *:8680/RTDM/PoolDiagnostics.jsp
    default_backend Balances_Pool_Diagnostics

backend Balances_Pool_Diagnostics
    balance roundrobin
    server sup05 sup05.brzsupport.sashq-d.openstack.sas.com:8680/RTDM/PoolDiagnostics.jsp check
    server sup06 sup06.brzsupport.sashq-d.openstack.sas.com:8680/RTDM/PoolDiagnostics.jsp check

frontend Calls_RTDM_Event
    bind *:8680/RTDM/Event
    default_backend Balances_RTDM_Event_requests

backend Balances_RTDM_Event_requests
    balance roundrobin
    server sup05 sup05.brzsupport.sashq-d.openstack.sas.com:8680/RTDM/Event check
    server sup06 sup06.brzsupport.sashq-d.openstack.sas.com:8680/RTDM/Event check

frontend HAProxy_page
    bind *:80
    stats uri /haproxy?stats

backend http_back

Could you help me understand why it is behaving like this?

I appreciate your help.

Take into account that HAProxy implements HTTP keep-alive on both the frontend and backend sides of a connection. Depending on how you benchmark (your load generator may also be using keep-alive), several requests can ride the same pinned backend connection, which skews the results.
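For example, if the goal is to see per-request balancing without breaking client-side keep-alive, you can close only the server-side connection after each response. A sketch against the backend from the question (note that server addresses are just host:port — the /RTDM/Event path from the original lines has no place on a server directive):

    backend Balances_RTDM_Event_requests
        balance roundrobin
        # Close the server-side connection after each response, so every
        # request passes through the round-robin scheduler again, while
        # the client-facing connection keeps its keep-alive behaviour.
        option http-server-close
        server sup05 sup05.brzsupport.sashq-d.openstack.sas.com:8680 check
        server sup06 sup06.brzsupport.sashq-d.openstack.sas.com:8680 check
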

However, in my experience, if you use option forceclose in the backend, an almost perfect round-robin behaviour emerges. (Although I would use this option only with backends that target a service on the same machine as the balancer.)
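Concretely, that suggestion would look like this in the backend (option forceclose exists in the HAProxy 1.x branches; it was later deprecated in favour of option httpclose — again assuming plain host:port server addresses):

    backend Balances_RTDM_Event_requests
        balance roundrobin
        # forceclose actively closes the connection once the response has
        # been transferred, so no request can reuse a pinned backend
        # connection and round-robin applies to every single request.
        option forceclose
        server sup05 sup05.brzsupport.sashq-d.openstack.sas.com:8680 check
        server sup06 sup06.brzsupport.sashq-d.openstack.sas.com:8680 check
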