Keepalive_requests equivalent

I’m starting a new thread to not necrobump an old thread.

I agree that there is a very legitimate use case for limiting the number of keepalive requests in haproxy.

In my use case, I run a dynamic number of haproxy servers: when they reach a certain CPU usage, I spin up new ones and add them to a DNS round-robin record.

The issue is that existing clients stick to the old haproxy servers and the load does not get redistributed. This can only be solved by a reload of the server, but that actually drops in-flight requests, so that's really not good.

There is also this solution, but it seems like it also drops connections (by actively closing the connection at the time of the client request).

Is there already a solution for this problem that I have missed?

Closing the connection after a successful HTTP transaction is exactly what you are requesting here, but the Connection: close solution works for H1 only.
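For H1, a minimal sketch of that approach might look like the following (frontend and backend names are illustrative, not from this thread):

```
frontend fe_web
    bind :80
    mode http
    # Close each connection after a completed HTTP/1.x transaction;
    # the client must reconnect (and re-resolve DNS) for the next request
    option httpclose
    default_backend be_app
```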

There are requests on the issue tracker for implementing this in H2, etc:

[quote=“fguer, post:1, topic:9403”]
The issue is that the existing clients stick to the old haproxy servers and the load does not get re-distributed.[/quote]

If we are talking about browsers/HTTP, you should be able to handle this by properly configuring timeouts.
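A sketch of such a timeout configuration, assuming plain HTTP mode (the values here are illustrative, tune them to your traffic):

```
defaults
    mode http
    # Overall client inactivity timeout
    timeout client 30s
    # How long an idle keep-alive connection may wait for the next request
    timeout http-keep-alive 10s
    # How long to wait for a complete request
    timeout http-request 10s
```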

If we are talking about TCP with something like reverse-proxying MSRDP terminal clients, then spinning proxy instances up and down is somewhat of a wrong design.

No, a reload of haproxy certainly does NOT drop in-flight requests, unless the configuration is wrong or you are hitting a bug.

That is the job of a reload as opposed to a restart: not dropping transactions despite starting up a new instance.

Thank you for your reply.

I can guarantee that haproxy 2.8 drops keepalive connections to the backend and clients upon reload.

The use case is not from browsers, but from servers, and they happily keep connections open forever if you let them. They are from third parties though, so we have no control over their implementation details.

That's expected behavior of course; after all, we want a soft-stopping process to start closing connections.

However, this may take seconds, minutes or even hours when connections are continuously carrying traffic.

The amount of time spent in this soft-close state can be limited by configuring hard-stop-after.
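A minimal sketch of that setting (the 30s value is illustrative): after a reload, the old process will linger at most this long before exiting and closing any remaining connections.

```
global
    # Bound the soft-stop phase of the old process after a reload
    hard-stop-after 30s
```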

No connections are "killed" unless hard-stop-after triggers, or haproxy is actually stopped/restarted as opposed to being reloaded.

Again, that is the expected behavior. You may be hitting a bug or a misconfiguration, but the fact of the matter is that a reload doesn't blindly kill connections.

But this is not really your use case anyway; in your use case you'd want the shutdown sessions feature, for example:
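As a sketch, the shutdown sessions command can be issued over the runtime API socket; the socket path and the backend/server names below are illustrative and assume `stats socket` is configured in the global section:

```
# Forcibly close all established sessions on one server
echo "shutdown sessions server be_app/srv1" | socat stdio /var/run/haproxy.sock
```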

Thank you very much for your help @lukastribus .

The solution for me was disabling h2 and randomly closing connections.
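A rough sketch of that workaround, assuming TLS with ALPN (the bind line, certificate path and the 1% probability are my guesses, not the poster's actual config):

```
frontend fe_web
    # Bind without "alpn h2,http/1.1" so only HTTP/1.x is negotiated
    bind :443 ssl crt /etc/haproxy/site.pem
    mode http
    # Ask roughly 1% of clients to close, forcing a reconnect and
    # a fresh DNS round-robin lookup
    http-after-response set-header Connection close if { rand(100) lt 1 }
```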

I look forward to issue 969 being fixed.

I also maintain that a "proper" implementation of "keepalive_requests" would be useful.

Kind regards,
F