Is there any equivalent of NGINX's `keepalive_requests` in HAProxy?

We have multiple instances of HAProxy deployed. We have noticed that some clients send a large number of HTTP requests over a single persistent connection, which overwhelms one of the HAProxy instances while the others sit idle. Is there any way to limit the number of HTTP requests that can be sent over one persistent connection (i.e., return `Connection: close` in the response after N requests)?

NGINX keepalive_requests: http://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_requests
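For reference, this is what the NGINX directive in question looks like (the value shown is illustrative; see the linked documentation for the current default):

```nginx
http {
    # Close a keep-alive connection after this many requests. NGINX then
    # sends "Connection: close" on the final response, forcing the client
    # to reconnect - which gives a load balancer in front a chance to
    # route the new connection elsewhere.
    keepalive_requests 1000;
}
```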

No, there is not.

NGINX implements this limit because some parts of the web server cannot free memory until the connection is released; this is not a problem for HAProxy.

However, it still causes an unbalanced number of HTTP requests among the HAProxy instances. In addition, since the frontend connection is strongly tied to the server connection in HAProxy, even if HAProxy can deal with it, the backend servers might be unable to free memory (e.g., Apache, Tomcat). It would be desirable to have such an option.

You can do round-robin per request if you want an equal number of HTTP requests on each server. HAProxy is completely configurable in this regard.
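A minimal sketch of that per-request balancing (section names, addresses, and ports are illustrative): `option http-server-close` keeps the client-side connection alive but closes the server-side connection after each response, so every request on a persistent client connection can be dispatched to a different server by `balance roundrobin`.

```
frontend fe_http
    bind :80
    # Close the server-side connection after each response while keeping
    # the client-side keep-alive connection open; requests are then
    # load-balanced individually rather than per connection.
    option http-server-close
    default_backend be_app

backend be_app
    balance roundrobin
    server app1 10.0.0.1:8080 check
    server app2 10.0.0.2:8080 check
```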

Then the backend server should implement this limit. If your backend server is so buggy that it causes memory problems without a load balancer, fixes should be implemented in that server instead of requesting workarounds at the proxy layer.

I disagree with this. I think this option does not belong in HAProxy.

> You can do round-robin per request if you want an equal number of HTTP requests on each server. HAProxy is completely configurable in this regard.

As I mentioned in the original question, the load is unbalanced across the HAProxy instances, not across the backend servers (one HAProxy instance runs at 100% CPU while the other instances are idle).

Thanks for the answer anyway.

Well, you did not share any information about how you deploy those different HAProxy instances; you are only saying that you have multiple instances.

If you want to work on that problem, elaborate on your design.