L4CON reported, but ports show as open in nmap

I'm fairly new to HAProxy, so it could well be that I'm trying to do something that isn't possible.

The end goal is to round-robin RDP with a fixed maximum of 1 connection per server, with approximately 130 servers available in the pool.

The config is:
global
debug

log         127.0.0.1 local2
log         127.0.0.1 local2 info
log         127.0.0.1 local2 notice
chroot      /var/lib/haproxy
pidfile     /var/run/haproxy.pid
maxconn     2000
user        haproxy
group       haproxy
daemon

# turn on stats unix socket
stats socket /var/run/haproxy.stat

frontend ft_rdp
bind :3389
mode tcp
timeout client 18h
log global
tcp-request inspect-delay 2s
tcp-request content accept if RDP_COOKIE
option tcplog
default_backend bk_rdp

backend bk_rdp
balance roundrobin
option log-health-checks
option tcp-check
log global
timeout server 18h
timeout connect 4s
timeout check 900ms
default-server inter 60s rise 1 fall 3
server clone2 192.168.122.2:3389 check maxconn 1
server clone3 192.168.122.3:3389 check maxconn 1
server clone4 192.168.122.4:3389 check maxconn 1
server clone5 192.168.122.5:3389 check maxconn 1
server clone6 192.168.122.6:3389 check maxconn 1
server clone7 192.168.122.7:3389 check maxconn 1
server clone8 192.168.122.8:3389 check maxconn 1
server clone9 192.168.122.9:3389 check maxconn 1
server clone10 192.168.122.10:3389 check maxconn 1
server clone11 192.168.122.11:3389 check maxconn 1
server clone12 192.168.122.12:3389 check maxconn 1
server clone13 192.168.122.13:3389 check maxconn 1
server clone14 192.168.122.14:3389 check maxconn 1
server clone15 192.168.122.15:3389 check maxconn 1
server clone16 192.168.122.16:3389 check maxconn 1
server clone17 192.168.122.17:3389 check maxconn 1

The servers are on KVM NAT.
The full list goes up to 131. I have tried health-checking on a different port from 3389 (e.g. 22 or 80), but that causes intermittent round-robin behaviour that pauses when the check doesn't connect. Checking on the same port gives the required effect of handing out all connections until the active servers run out, but we never seem to get the full active list.
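For reference, checking on an alternate port can be expressed per server with the "port" parameter of the check, while RDP traffic itself still goes to 3389. This is only a sketch of that approach; the port number 2222 is an illustrative placeholder, not from the original config:

```
backend bk_rdp
    balance roundrobin
    # Health-check connects to port 2222 (hypothetical check port);
    # client traffic is still forwarded to 3389.
    server clone2 192.168.122.2:3389 check port 2222 maxconn 1
```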

hatop shows 1 DOWN L4CON on the affected hosts. Rebooting them seems to have little effect, yet nmap shows the ports as open:

Nmap scan report for 192.168.122.8
Host is up (0.00077s latency).
Not shown: 998 closed ports
PORT STATE SERVICE
22/tcp open ssh
3389/tcp open ms-wbt-server

Restarting HAProxy (or the server) results in the same host showing the same issue, and the health check never seems to change the status back to active.

Also note that these servers are recycled when an RDP session exits, so we expect to see "Connection refused" while a server is being rebuilt, but once it is back up we expect it to return to the active pool.

Logs
Working as expected:
2018-09-21T08:45:34+01:00 localhost haproxy[31360]: Health check for server bk_rdp/clone21 failed, reason: Layer4 timeout, info: " at initial connection step of tcp-check", check duration: 4000ms, status: 0/1 DOWN.
2018-09-21T08:46:34+01:00 localhost haproxy[31360]: Health check for server bk_rdp/clone21 failed, reason: Layer4 connection problem, info: "Connection refused at initial connection step of tcp-check", check duration: 0ms, status: 0/1 DOWN.
2018-09-21T08:46:34+01:00 localhost haproxy[31360]: Health check for server bk_rdp/clone21 failed, reason: Layer4 connection problem, info: "Connection refused at initial connection step of tcp-check", check duration: 0ms, status: 0/1 DOWN.
2018-09-21T08:46:34+01:00 localhost haproxy[31360]: Health check for server bk_rdp/clone21 failed, reason: Layer4 connection problem, info: "Connection refused at initial connection step of tcp-check", check duration: 0ms, status: 0/1 DOWN.
2018-09-21T08:47:34+01:00 localhost haproxy[31360]: Health check for server bk_rdp/clone21 succeeded, reason: Layer4 check passed, check duration: 0ms, status: 3/3 UP.
2018-09-21T08:47:34+01:00 localhost haproxy[31360]: Health check for server bk_rdp/clone21 succeeded, reason: Layer4 check passed, check duration: 0ms, status: 3/3 UP.
2018-09-21T08:47:34+01:00 localhost haproxy[31360]: Health check for server bk_rdp/clone21 succeeded, reason: Layer4 check passed, check duration: 0ms, status: 3/3 UP.
2018-09-21T08:47:34+01:00 localhost haproxy[31360]: Server bk_rdp/clone21 is UP. 23 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
2018-09-21T08:47:34+01:00 localhost haproxy[31360]: Server bk_rdp/clone21 is UP. 23 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
2018-09-21T08:47:34+01:00 localhost haproxy[31360]: Server bk_rdp/clone21 is UP. 23 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.

Fails, but never recovers on subsequent checks:
2018-09-21T08:33:20+01:00 localhost haproxy[31360]: Health check for server bk_rdp/clone22 failed, reason: Layer4 connection problem, info: "Host is unreachable at initial connection step of tcp-check", check duration: 3027ms, status: 0/1 DOWN.

Remove "option tcp-check"; it is meant for use with tcp-check send/expect command sequences. From the documentation:

This health check method is intended to be combined with "tcp-check" command lists in order to support send/expect types of health check sequences.
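Applied to the backend above, that would look something like the sketch below. With plain "check" on each server line, HAProxy performs a simple Layer4 TCP connect check, so no tcp-check rules are needed:

```
backend bk_rdp
    balance roundrobin
    option log-health-checks
    log global
    timeout server 18h
    timeout connect 4s
    timeout check 900ms
    default-server inter 60s rise 1 fall 3
    # Plain L4 connect check on 3389, one RDP user per server
    server clone2 192.168.122.2:3389 check maxconn 1
    server clone3 192.168.122.3:3389 check maxconn 1
    # ... remaining servers as before, each with "check maxconn 1"
```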

Thanks, will give it a go.

Thanks for the advice. They now connect much better, but a server still stays down once it has been marked down. Rebooting the VMs seems to work, but it is very hit and miss.
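When a server sticks in DOWN, one thing worth trying is inspecting and re-enabling it over the stats socket. Note this is only a sketch: it assumes the socket is opened at admin level ("level admin" is an addition to the config above) and that socat is installed:

```
# in the global section (admin level is required for "enable server"):
stats socket /var/run/haproxy.stat level admin

# then, from a shell:
#   echo "show servers state bk_rdp" | socat stdio /var/run/haproxy.stat
#   echo "enable server bk_rdp/clone22" | socat stdio /var/run/haproxy.stat
```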