I have a frontend like this:
frontend my_frontend
    mode http
    option httplog
    bind 10.0.0.96:80
    bind 10.0.0.96:82
    acl downtime nbsrv(my_backend) lt 1
    use_backend downtime_backend if downtime
    default_backend my_backend
and the two backends are:
backend downtime_backend
    mode http
    balance source
    option httpchk
    http-check expect ! rstatus ^5
    option forwardfor
    option http-server-close
    http-request replace-value Host (.*):.* \1
    server downtime 192.168.1.10:80 check inter 5000
backend my_backend
    mode http
    balance roundrobin
    option httpchk GET /index.html
    option forwardfor
    option http-server-close
    option redispatch
    timeout server 300000
    server server1 10.0.0.225:80 check inter 5000
    server server2 10.0.0.226:80 check inter 5000
    server server3 10.0.0.227:80 check inter 5000
In practice, we use an ACL based on nbsrv to serve a page saying "The service is under maintenance, please be patient..." whenever all servers in the backend are down.
This rule works very well and lets us keep statistics on the requests redirected to the downtime pool... but we have a problem during reload.
We are on HAProxy 1.8.5, and the configuration contains the directive:

    stats socket /tmp/haproxy_http level admin expose-fd listeners

When we execute a reload, we see in the logs that the old process balances some requests to the downtime pool. It looks like some connections are still being served by the old process after it has already stopped all of its backends, so nbsrv(my_backend) evaluates to 0 in that process and those requests match the downtime ACL.
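For reference, this is roughly how we reload and how the server state can be inspected through the stats socket (a sketch: the config path, PID file location, and use of socat are assumptions specific to our setup):

    # Seamless reload: fetch the listening FDs from the old process via the
    # stats socket (-x), then ask the old process to finish and stop (-sf).
    haproxy -f /etc/haproxy/haproxy.cfg -x /tmp/haproxy_http \
        -sf $(cat /var/run/haproxy.pid)

    # Ask the currently bound process which servers it considers up:
    echo "show servers state my_backend" | socat stdio /tmp/haproxy_http

Note that the socket only reaches the process currently bound to it, so the state seen by the old, draining process cannot be queried this way.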
Does anyone have the same problem?
Can you please help me?