HAProxy 2.2.0 sends sessions to a backend server in state DOWN

Hello,
I am running 2 replicas of HAProxy 2.2.0 in a Docker Swarm.
The scenario is the following:
I have a backend that includes 10 servers. Three of them are up and running (service0, service1 and service2), and at some point a new service is created (service3) whose port 4902 does not start listening until some initial loading completes (this takes a few minutes).

The configuration is the following:
frontend frontend_service_rtsp
  bind *:4902
  mode tcp
  option tcplog
  default_backend backend_service_rtsp

backend backend_service_rtsp
  mode tcp
  balance leastconn
  option tcp-check
  server service0_rtsp service0:4902 check resolvers docker init-addr none,last,libc
  server service1_rtsp service1:4902 check resolvers docker init-addr none,last,libc
  server service2_rtsp service2:4902 check resolvers docker init-addr none,last,libc
  server service3_rtsp service3:4902 check resolvers docker init-addr none,last,libc
  server service4_rtsp service4:4902 check resolvers docker init-addr none,last,libc
  server service5_rtsp service5:4902 check resolvers docker init-addr none,last,libc
  server service6_rtsp service6:4902 check resolvers docker init-addr none,last,libc
  server service7_rtsp service7:4902 check resolvers docker init-addr none,last,libc
  server service8_rtsp service8:4902 check resolvers docker init-addr none,last,libc
  server service9_rtsp service9:4902 check resolvers docker init-addr none,last,libc
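
The config also references a resolvers docker section, which is not shown above; for context, a minimal sketch of what such a section typically looks like, assuming Docker's embedded DNS at 127.0.0.11 and purely illustrative timers:

resolvers docker
  # Docker's embedded DNS server inside the overlay network
  nameserver dns1 127.0.0.11:53
  resolve_retries 3
  timeout resolve 1s
  timeout retry   1s
  hold valid      10s
  hold obsolete   30s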

The server service3_rtsp is first marked UP as soon as the service FQDN resolves; the health check then runs and the server is marked DOWN.
After that, HAProxy keeps sending sessions to that server even though it is in state DOWN.
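
When this happens, the live state of the server can be confirmed over the runtime API, assuming a stats socket is configured in the global section (the socket path below is an assumption):

# query the runtime API for the backend's server states
echo "show servers state backend_service_rtsp" | socat stdio /var/run/haproxy.sock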

This is the output from the HAProxy service logs (the SC termination flags show the connection to the server failing or being refused during the connect phase):

haproxy_haproxy.2.4j9r7bctyl20@worker000001    | <145>Aug  5 11:17:24 haproxy[56]: Server backend_service_rtsp/service3_rtsp is DOWN, reason: Layer4 connection problem, info: "Connection refused at initial connection step of tcp-check", check duration: 1ms. 3 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
haproxy_haproxy.2.4j9r7bctyl20@worker000001    | <150>Aug  5 11:17:25 haproxy[56]: 10.0.0.7:60896 [05/Aug/2020:11:17:22.623] frontend_service_rtsp backend_service_rtsp/service3_rtsp 1/-1/3009 0 SC 296/149/148/0/3 0/0
haproxy_haproxy.1.3o76uo98mlnc@worker000002    | <150>Aug  5 11:17:25 haproxy[56]: 10.0.0.7:60906 [05/Aug/2020:11:17:22.871] frontend_service_rtsp backend_service_rtsp/service3_rtsp 1/-1/3011 0 SC 314/148/147/0/3 0/0
haproxy_haproxy.2.4j9r7bctyl20@worker000001    | <150>Aug  5 11:17:25 haproxy[56]: 10.0.0.7:60904 [05/Aug/2020:11:17:22.862] frontend_service_rtsp backend_service_rtsp/service3_rtsp 1/-1/3015 0 SC 297/151/150/0/3 0/0
haproxy_haproxy.2.4j9r7bctyl20@worker000001    | <150>Aug  5 11:17:25 haproxy[56]: 10.0.0.7:60908 [05/Aug/2020:11:17:22.901] frontend_service_rtsp backend_service_rtsp/service3_rtsp 1/-1/3014 0 SC 296/150/149/0/3 0/0
haproxy_haproxy.2.4j9r7bctyl20@worker000001    | <150>Aug  5 11:17:26 haproxy[56]: 10.0.0.7:60912 [05/Aug/2020:11:17:23.034] frontend_service_rtsp backend_service_rtsp/service3_rtsp 1/-1/3009 0 SC 296/149/148/0/3 0/0
haproxy_haproxy.1.3o76uo98mlnc@worker000002    | <150>Aug  5 11:17:26 haproxy[56]: 10.0.0.7:60914 [05/Aug/2020:11:17:23.085] frontend_service_rtsp backend_service_rtsp/service3_rtsp 1/-1/3009 0 SC 311/146/145/0/3 0/0
haproxy_haproxy.2.4j9r7bctyl20@worker000001    | <150>Aug  5 11:17:26 haproxy[56]: 10.0.0.7:60920 [05/Aug/2020:11:17:23.260] frontend_service_rtsp backend_service_rtsp/service3_rtsp 1/-1/3012 0 SC 294/148/147/0/3 0/0
haproxy_haproxy.1.3o76uo98mlnc@worker000002    | <150>Aug  5 11:17:26 haproxy[56]: 10.0.0.7:60922 [05/Aug/2020:11:17:23.322] frontend_service_rtsp backend_service_rtsp/service3_rtsp 1/-1/3013 0 SC 309/144/143/0/3 0/0
haproxy_haproxy.1.3o76uo98mlnc@worker000002    | <150>Aug  5 11:17:26 haproxy[56]: 10.0.0.7:60926 [05/Aug/2020:11:17:23.400] frontend_service_rtsp backend_service_rtsp/service3_rtsp 1/-1/3007 0 SC 308/142/141/0/3 0/0
haproxy_haproxy.2.4j9r7bctyl20@worker000001    | <150>Aug  5 11:17:26 haproxy[56]: 10.0.0.7:60928 [05/Aug/2020:11:17:23.419] frontend_service_rtsp backend_service_rtsp/service3_rtsp 1/-1/3007 0 SC 292/147/146/0/3 0/0
haproxy_haproxy.2.4j9r7bctyl20@worker000001    | <150>Aug  5 11:17:26 haproxy[56]: 10.0.0.7:60932 [05/Aug/2020:11:17:23.500] frontend_service_rtsp backend_service_rtsp/service3_rtsp 1/-1/3013 0 SC 292/147/146/0/3 0/0
haproxy_haproxy.1.3o76uo98mlnc@worker000002    | <150>Aug  5 11:17:26 haproxy[56]: 10.0.0.7:60934 [05/Aug/2020:11:17:23.541] frontend_service_rtsp backend_service_rtsp/service3_rtsp 1/-1/3007 0 SC 308/141/140/0/3 0/0
haproxy_haproxy.1.3o76uo98mlnc@worker000002    | <150>Aug  5 11:17:26 haproxy[56]: 10.0.0.7:60938 [05/Aug/2020:11:17:23.780] frontend_service_rtsp backend_service_rtsp/service3_rtsp 1/-1/3010 0 SC 305/139/138/0/3 0/0
haproxy_haproxy.2.4j9r7bctyl20@worker000001    | <150>Aug  5 11:17:26 haproxy[56]: 10.0.0.7:60940 [05/Aug/2020:11:17:23.879] frontend_service_rtsp backend_service_rtsp/service3_rtsp 1/-1/3009 0 SC 289/145/144/0/3 0/0
haproxy_haproxy.1.3o76uo98mlnc@worker000002    | <150>Aug  5 11:17:27 haproxy[56]: 10.0.0.7:60942 [05/Aug/2020:11:17:24.010] frontend_service_rtsp backend_service_rtsp/service3_rtsp 1/-1/3016 0 SC 306/139/138/0/3 0/0
haproxy_haproxy.2.4j9r7bctyl20@worker000001    | <150>Aug  5 11:17:27 haproxy[56]: 10.0.0.7:60948 [05/Aug/2020:11:17:24.210] frontend_service_rtsp backend_service_rtsp/service3_rtsp 1/-1/3014 0 SC 290/144/143/0/3 0/0
haproxy_haproxy.2.4j9r7bctyl20@worker000001    | <150>Aug  5 11:17:27 haproxy[56]: 10.0.0.5:58078 [05/Aug/2020:11:17:24.405] frontend_service_rtsp backend_service_rtsp/service3_rtsp 1/-1/3013 0 SC 290/142/141/0/3 0/0

Is this normal behaviour?

Thank you in advance!

2.2.0 has a number of bugs. Please upgrade to 2.2.2 first of all.
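
For a swarm deployment that would be something like the following (the service name is inferred from the log prefix above, and the image tag assumes the official Docker Hub image):

# roll the swarm service to the patched image
docker service update --image haproxy:2.2.2 haproxy_haproxy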

I tried 2.2.2 and 1.8.26, but I still see connection errors as soon as the service becomes resolvable, before port 4902 starts listening.

Is there something I am doing wrong with my config perhaps, or is this normal behaviour?
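
One thing I can still try is tightening the per-server check timing so that a single refused connection marks the server DOWN sooner; a sketch with illustrative values, not a confirmed fix:

# inter 1s: check every second; fall 1: one failed check marks the server DOWN
server service3_rtsp service3:4902 check inter 1s fall 1 rise 2 resolvers docker init-addr none,last,libc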

Anyone?

I suggest you file a bug if no one is able to respond here: