nbsrv ACL and seamless reload

Hi,

I have a frontend like this:

frontend my_frontend
        mode http
        option httplog
        bind 10.0.0.96:80
        bind 10.0.0.96:82

        acl downtime nbsrv(my_backend) lt 1
        use_backend downtime_backend if downtime
        default_backend my_backend

and the 2 backends are:

backend downtime_backend
        mode http
        balance source
        option httpchk
        http-check expect ! rstatus ^5
        option forwardfor
        option http-server-close

        http-request replace-value Host (.*):.* \1

        server downtime 192.168.1.10:80 check inter 5000

and:

backend my_backend
        mode http
        balance roundrobin
        option httpchk GET /index.html
        option forwardfor
        option http-server-close

        option redispatch

        timeout server 300000

        server server1 10.0.0.225:80 check inter 5000
        server server2 10.0.0.226:80 check inter 5000
        server server3 10.0.0.227:80 check inter 5000

In practice we use an ACL based on nbsrv to serve a page saying “The service is under maintenance, please be patient…” when all servers in the backend are down.
This rule works very well and lets us keep statistics on requests redirected to the downtime pool… but we have a problem during reload.

We are on HAProxy 1.8.5, and the configuration has the stats socket /tmp/haproxy_http level admin expose-fd listeners directive. When executing a reload, we see in the log that the old process balanced some requests to the downtime pool. It looks like some connections are still served by the old process after it has already stopped all its backends.
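For context, a seamless reload in 1.8 needs both the socket directive and the -x option on the reload command, so the new process can fetch the bound listening sockets from the old one. A minimal sketch of the usual setup (paths are from my config, adapt to your environment; the reload command is an assumption about how you invoke it):

global
        # runtime API socket; expose-fd listeners is what enables
        # passing listening sockets to the new process on reload
        stats socket /tmp/haproxy_http level admin expose-fd listeners

and the reload would look roughly like:

        # -sf: tell old processes to finish gracefully
        # -x:  fetch the listening FDs over the stats socket
        haproxy -f /etc/haproxy/haproxy.cfg -sf $(pidof haproxy) -x /tmp/haproxy_http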

Does anyone have the same problem?
Can you please help me?

Thanks
andrea

We have the same problem on HAProxy reload. We found that during a reload there are two processes running and serving requests: the old process continues serving its existing requests while the new one takes new requests, but this breaks a lot of rules configured in the HAProxy config. In our case we have maxconn set to 1 for each backend server, yet while reloading HAProxy with 2 processes each backend server serves 2 connections. No solution for now…
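To illustrate what I mean (a minimal sketch, backend and server names made up): each HAProxy process enforces maxconn independently, so during the overlap window the limit is effectively doubled:

backend app_backend
        mode http
        # maxconn 1 is tracked per process, so while the old and new
        # process coexist during a reload, the server may see 2
        # concurrent connections instead of 1
        server app1 192.0.2.10:80 check maxconn 1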

In my case the solution was to add an ACL using the “stopping” sample fetch and combine it with the existing one.
My frontend now is:

frontend my_frontend
        mode http
        option httplog
        bind 10.0.0.96:80
        bind 10.0.0.96:82

        acl downtime nbsrv(my_backend) lt 1
        acl stopping stopping eq true

        use_backend downtime_backend if downtime !stopping
        default_backend my_backend

Now everything works like a charm.
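You can also check this flag from the runtime API: if I remember correctly, “show info” reports a Stopping field that flips to 1 in the old process once the reload starts (command sketch, using the socket path from my config):

        # ask the process whether it is shutting down
        echo "show info" | socat stdio /tmp/haproxy_http | grep Stopping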
I know your case is different… but maybe this solution can help you anyway 🙂
