I’m starting a new thread so as not to necrobump an old one.
I agree that there is a very legitimate use case for limiting the number of keepalive requests in HAProxy.
In my use case, I run a dynamic pool of HAProxy servers: when they reach a certain CPU usage, I spin up new ones and add the new instances to a DNS round-robin record.
The issue is that existing clients stick to the old HAProxy servers, so the load never gets redistributed. The only fix I have found is a reload of the server, but that actually drops in-flight requests, so that’s really not good.
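For what it’s worth, recent HAProxy versions have a few global knobs meant to make reloads less disruptive to keepalive clients. This is only a sketch, assuming HAProxy 2.6+; `hard-stop-after`, `close-spread-time`, and `idle-close-on-response` are the keywords I have in mind, but please check the configuration manual for your exact version before relying on them:

```
global
    # Bound how long the old process may linger after a soft reload (-sf)
    hard-stop-after 30s
    # During soft-stop, spread the closing of idle keepalive connections
    # over a window instead of closing them all at once (2.6+)
    close-spread-time 20s
    # Only close an idle frontend connection after answering its next
    # request with "Connection: close", instead of racing the client (2.5+)
    idle-close-on-response

defaults
    mode http
    # Also cap how long an idle keepalive connection may sit open
    timeout http-keep-alive 10s
```

With these, a soft reload should announce closure via `Connection: close` on a served response rather than cutting the socket, which is exactly the in-flight-drop scenario you describe.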
There is also this solution, but it seems to drop connections as well (by actively closing the connection at the moment the client sends its request).
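A gentler variant of that idea, which I believe avoids dropping anything, is to mark a small random fraction of responses with `Connection: close`: the response is still fully delivered, and the client reconnects afterwards, at which point it may re-resolve DNS and land on a newer server. A sketch, assuming `http-after-response` (2.2+) and the `rand` sample fetch are available in your version (`fe_main`, `be_app`, and the 2% rate are made-up examples):

```
frontend fe_main
    bind :8080
    # Ask roughly 2% of responses to close the connection afterwards.
    # Nothing in flight is dropped: the client only closes after
    # receiving a complete response carrying "Connection: close".
    http-after-response set-header Connection close if { rand(100) lt 2 }
    default_backend be_app
```

This effectively gives you a probabilistic per-connection request limit, so long-lived keepalive clients slowly drain toward the current DNS round-robin set. One caveat: it only helps if the clients actually re-resolve DNS on reconnect, which is in their hands, not yours.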
Is there already a solution for this problem that I have missed?
I can guarantee that haproxy 2.8 drops keepalive connections to the backend and clients upon reload.
The use case is not browsers but servers, and they happily keep connections open forever if you let them. They belong to third parties, though, so we have no control over their implementation details.