When I add more than 100 servers (e.g. 150) to a single backend, the health check state becomes DRAIN, but when I reduce the count to 100 or fewer (e.g. exactly 100), the state is UP.
Is this normal HAProxy behavior?
haproxy version: 1.8.3
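For reference, a minimal sketch of the backend shape in question (mode, timeouts, balance algorithm, server names, and addresses are all illustrative placeholders, not the real generated config; the actual server names are visible in the log below):

    # illustrative only: the real server lines are generated automatically
    defaults
        mode http
        timeout connect 5s
        timeout client  30s
        timeout server  30s

    backend nginx_nginx-80-servers
        balance roundrobin
        # plain "check" gives the Layer4 (TCP connect) health check seen in the log
        server srv001 10.0.0.1:80 check weight 1
        server srv002 10.0.0.2:80 check weight 1
        # ... about 150 server lines like these when the DRAIN state appears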
Partial log output:
Jan 26 11:32:04 localhost haproxy[2018]: Health check for server nginx_nginx-80-servers/nginx_nginx_85d113086d3fb578da40c194b8fd583fae5b2452f7cbad637737244ef0635738_80 succeeded, reason: Layer4 check passed, check duration: 0ms, status: 3/3 DRAIN.
Jan 26 11:32:04 localhost haproxy[2018]: Health check for server nginx_nginx-80-servers/nginx_nginx_88e4819756a7787ae6a6623196b9750de1862bb780167d3c135e483b84982c04_80 succeeded, reason: Layer4 check passed, check duration: 0ms, status: 3/3 DRAIN.
Jan 26 11:32:04 localhost haproxy[2018]: Health check for server nginx_nginx-80-servers/nginx_nginx_89dbcb2a67843c1bb9c1b0ecd16dd6610051bce6fc5c5e76bc53fd19ef51d425_80 succeeded, reason: Layer4 check passed, check duration: 0ms, status: 3/3 DRAIN.
Jan 26 11:32:04 localhost haproxy[2018]: Health check for server nginx_nginx-80-servers/nginx_nginx_8ae01f7d258b116870cbda5be591913aad7d79b94821ef5607ff12e4f10e4d15_80 succeeded, reason: Layer4 check passed, check duration: 0ms, status: 3/3 DRAIN.
Jan 26 11:32:04 localhost haproxy[2018]: Health check for server nginx_nginx-80-servers/nginx_nginx_8c327f4d456bec25d615f7a107a483353962ecc54a8434563c4b5a6f3c4fd403_80 succeeded, reason: Layer4 check passed, check duration: 0ms, status: 3/3 DRAIN.
time="2018-01-26T11:32:04+08:00" level=info msg="reloading haproxy | current pid: 1, master pid: 19"
[WARNING] 025/113203 (19) : Reexecuting Master process
Jan 26 11:32:04 localhost haproxy[2018]: Health check for server nginx_nginx-80-servers/nginx_nginx_8f1f7d5313626d5b32d17b0fe2e4aa2fc1b853c699c8177a86c69dced0974531_80 succeeded, reason: Layer4 check passed, check duration: 1ms, status: 3/3 DRAIN.