Frontend and Backend Keepalives

I am new to HAProxy and, from reading the documentation so far, I can't determine whether what I need to do is possible. Basically, I want to completely separate the front end from the back end. I want the back end to use keep-alive (unless the server sends a close, in which case go ahead and close that connection, but do not pass the closure on to the client). I want the default client behavior to be keep-alive, but if the client sends Connection: close, then close the client connection after the response (while leaving the server connection alive).

Basically I need complete separation of client and server. These are not web-browser requests; they are individual, atomic HTTP transactions with no session state. I would like to use connection reuse on the backend, with no direct relationship between backend-side connections and frontend connections. There are two sorts of frontend connections. The first comes from an automated process that is pumping out HTTP transactions (API calls), and I want those transactions spread across a bank of servers that are a long distance away (connection setup plus TLS handshake is extremely expensive due to the network latency). The second is generated by a human action and is a one-off request from an application the person is using. This application sends a Connection: close header with its request.

The backend servers do occasionally send a Connection: close header, but I do not want this to close the client connection on the frontend. Just go ahead and close the backend connection, and possibly open another if one is needed to fulfill client requests and a reusable connection isn't available.

In summary I need:

- Keep-alive on the backend unless the server closes the connection, but do not pass that closure on to the client.
- Keep-alive on the frontend unless the client requests closure, but do not pass that closure request on to the server.
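
To make the goal concrete, this is roughly the shape of configuration I have in mind. The names, addresses and cert path are placeholders, and I do not yet know whether these directives actually give the separation I am describing:

frontend api-fe
    mode http
    bind :443 ssl crt /etc/haproxy/api-proxy.pem
    option http-keep-alive       # keep client connections open by default
    default_backend api-bk

backend api-bk
    mode http
    option http-keep-alive       # keep server connections open
    http-reuse always            # share idle server connections across client sessions
    timeout http-keep-alive 299s
    server api1 test-api.domain.net:443 ssl ca-file DigiCert_Global_Root_CA.pem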

The documentation isn’t quite clear on this and I am not yet convinced this is possible with the version I am using (1.8).

Thank you for your consideration.

Clarification: The frontend hosts are either posting a transaction or querying one over HTTPS. There is no session state, and each HTTPS request/reply stands alone. There is no reason to associate any frontend host with any particular backend connection, since any of the backends can take any request from the frontends. It would not matter if four consecutive requests from a frontend host all went to different backend servers. This would look like a connection pool to the backend, and any frontend transaction could use any backend connection. The part I am unsure about is that I want something like keep-alive on the backend and fake keep-alive on the frontend, but the documentation seems to imply that the keep-alive behavior set on one side influences the behavior on the other, and that different settings on either side might be mutually exclusive. If I have fake keep-alive on the frontend and the client asks for Connection: close, the documentation implies that the Connection: close will be passed to the server.

I have tried 2.0.8 and have run a benchmark, and it does not seem to be any better. The documentation says it does not close the server connection when the client connection “disappears”, but it does appear that each client connection always gets a brand-new server connection. The benchmark is two different ab tests running concurrently. One uses keep-alive and pumps 200 requests over 4 concurrent connections through haproxy to a server halfway around the world. The other does not use keep-alive and sends 15 requests sequentially, each on its own connection, to the same server. Both are pulling a 3-byte static text file and both run at the same time. It is obvious that requests from the second group open new connections to the server and are not sent over the connections already opened by the first group.

ab -n 200 -c 4 -k https://api-proxy/file.txt&
ab -n 15 https://api-proxy/file.txt&

For the first group I get:
Percentage of the requests served within a certain time (ms)
50% 214
66% 215
75% 217
80% 219
90% 226
95% 261
98% 870
99% 893
100% 945 (longest request)

So it is obviously reusing connections. There are 4 servers, and only 4 requests took a long time: the first connection to each server.

Second group:

Percentage of the requests served within a certain time (ms)
50% 863
66% 884
75% 888
80% 896
90% 904
95% 906
98% 906
99% 906
100% 906 (longest request)

It is obvious here that not a single one of those requests used a connection opened by the previous group. At first I thought, okay, maybe those connections are all busy, so I ran the two tests sequentially instead; since I have a 300-second keep-alive on the server side, after the first group runs there should be at least four idle, already-established connections to the servers. Still no dice:

Percentage of the requests served within a certain time (ms)
50% 863
66% 884
75% 888
80% 896
90% 904
95% 906
98% 906
99% 906
100% 906 (longest request)

Every single one of the non-keepalive clients uses a brand new connection to the server requiring a very long TLS handshake to set up.

The backend config is:

backend api-bk
    default-server resolvers api
    option http-keep-alive          # keep server-side connections in keep-alive mode
    http-reuse always               # allow idle server connections to be reused
    balance leastconn
    timeout http-keep-alive 299s
    server-template api 4 test-api.domain.net:443 check port 443 inter 10s resolvers api init-addr none ssl ca-file DigiCert_Global_Root_CA.pem

I'm just trying to figure out whether there is any way for a request from a frontend client that sends Connection: close to go out over an already-open connection to the server (and without the close header being passed along to it).
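
The only thing I can think of to try is stripping the header on the way in, something like the fragment below, but I have no idea whether HAProxy's connection-mode decision has already been made by the time such a rule runs:

frontend api-fe
    # Untested experiment: delete the client's Connection header so it is never
    # forwarded to the server. Whether this also stops haproxy from closing the
    # server-side connection is exactly what I am trying to find out.
    http-request del-header Connection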

I've been doing some more learning here. A few things. First, it appears that haproxy never cleanly closes a backend connection: it just sends a RST and never does the proper FIN closure, and on receipt of a FIN it just answers with a RST. The behavior is as if haproxy is simply deleting the socket rather than properly closing it and then deleting it. I only catch these because the backend has TCP keepalive running at 60-second intervals, which acts to reap them.

The connection pooling only seems to work for the same client. I'm basically using this in more of a load-balancing situation than a proxy situation. The desire is to retain a pool of connections to a server farm (these are the only servers used by this configuration, and every connection arriving on the frontend is going to this specific farm) and allow any request from any client to use them.

There are actually three different client profiles, not two as I originally thought:

1. Clients that use keep-alive connections over HTTP/1.1. These for the most part seem to work as expected.
2. The one-off API call made by a user interacting with some other program. This uses HTTP/1.1 with “Connection: close”, and I do want the client connection closed, but it would be most efficient for the request to go to the server farm over one of the idle pool connections.
3. The profile that was really the driver of all of this (I can handle it with a commercial load balancer, but I don't want to put one in every branch office and partner location). These are units that drive a high volume of transactions; the application is a Java program. It opens a connection using HTTP/1.1 with Connection: close, sends a single request, closes the connection, and immediately opens another. At any given time it might have a dozen of these connections open. Even worse, some of them rely on the server closing the connection: if the close is ignored, the client boxes never close the connections themselves and can eventually run the office firewall out of available connections.

So closing the connection to the client when the client sends a Connection: close request is desired, but closing the SERVER connection is not. It still seems that backend server connections have an association with a client connection.
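
As an aside, something like the fragment below (the frontend name is a placeholder, and it changes no behavior) would at least make the three profiles easy to tell apart by capturing the client's Connection header into the access log:

frontend api-fe
    # Log the client's Connection header so keep-alive clients, one-off
    # "Connection: close" clients, and the high-volume Java clients can be
    # distinguished in the access log.
    http-request capture req.hdr(Connection) len 16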

Maybe what I really want is a different mode of operation. Rather than a “connection”-based mode, I want a “request”-based mode of operation where each individual request is an atom. There is no relationship between requests and no session state. All that is needed is to keep track of which client stream the response data goes to; once the transaction is complete from the client-side perspective and the connection to the client is closed, the server connection is placed back in the pool.

The behavior I am seeing is that if a client connects with a Connection: close transaction, a new connection to the server is built and then torn down when the request completes, without even checking whether there is an idle connection to the same server available in the pool. This is the sort of connection behavior I am trying to avoid. An office or client making API calls from Singapore to the US, or from India to Europe, sees a lot of transaction latency, and most of it is in the TLS negotiation. If I can make that negotiation happen with a device closer to them and shoot the request over an already-established but idle connection, everyone wins.

Changes since the original posting: moved from 1.8 to 2.0, and added pool-max-conn 12 and pool-purge-delay 299s to the server-template line in the backend.
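
So the server-template line in the backend now reads roughly like this (all on one physical line):

server-template api 4 test-api.domain.net:443 check port 443 inter 10s resolvers api init-addr none ssl ca-file DigiCert_Global_Root_CA.pem pool-max-conn 12 pool-purge-delay 299s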

Positives: Since we do health checking on the servers and remove them from the DNS response if they fail, the capability to do DNS resolution for dynamic server creation is a nice win, and I don't really need to do health checking from haproxy. It also makes failing over clean if we need to direct traffic to a different site for some reason.

Nit: The socket closure thing. Would be a lot cleaner to properly close the socket connections before killing them.

Are you sending TLS SNI to those backend servers? One condition for reusing a connection is that it must not be marked private, which happens if SNI is sent:

  • connections sent to a server with a TLS SNI extension are marked private
    and are never shared;

I think you can try both connection pooling and “option prefer-last-server”.
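
Something like this in the backend is what I mean, just to show where the directives go (untested against your setup):

backend api-bk
    http-reuse always             # connection pooling / reuse
    option prefer-last-server     # try to stick to the server whose connection was just used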

No, these are not SNI connections. There is only one host name and one cert, there is no SNI configuration on the backend, and a packet capture of the TLS handshake shows no server_name extension being sent to the servers. The CLIENT, however, does send a server_name extension, but it is for the same name as the local haproxy box, and no such extension goes to the backend servers.

I just tried it with straight HTTP; same behavior. When the client closes the connection, the server side closes too, instead of being left open to be used again. Basically, connection reuse does not appear to work.

The fundamental problem is that when a client sends a request with a Connection: close header, the backend connection is closed along with the frontend one. I do not want that behavior; I want the backend connection left open for use by another frontend client. I do not want the Connection: close passed to the backend server at all; I want that connection kept alive. I do have option http-keep-alive set, but it does not seem to make any difference when a client sends “Connection: close”.

Did you ever arrive at a working configuration, @Grep?

I’ve been using HAProxy sandwiched between other middleware in the same data center and I never bothered to dig into the connection behavior. Now I’m attempting to use it over a wide area and I’m paying closer attention to the network activity. I found exactly the same behavior you described.