I’m having a little trouble figuring out exactly how HAProxy works -- this is probably such a simple newbie question, but I can’t really find the answer easily in the docs.
Suppose HAProxy is configured for round-robin load balancing. A client makes a connection to HAProxy and sends HTTP request #1. HAProxy routes the message to a selected backend server, and the server responds with response #1. Now let’s say the client keeps the connection open (does not close the socket) and then sends HTTP request #2 on the same socket. The question is: since it is the same socket connection, will HAProxy send it to the same backend server as request #1? Or will it move to the next round-robin backend server for each subsequent request on the same socket connection?
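In case it helps, here is a minimal config matching the scenario I’m describing (names and addresses are made up, not from a real deployment):

```
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend fe_main
    bind *:80
    default_backend be_app

backend be_app
    balance roundrobin
    server app1 192.0.2.11:8080
    server app2 192.0.2.12:8080
```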
I don’t think we can simplify those things with a statement like “generally connection-based”, it’s too complicated for that.
This depends on a number of factors, including keep-alive mode, load-balancing algorithm and stickiness mode and any other configuration that may influence routing decisions like content switching.
Regarding your question: when HAProxy is in keep-alive mode, the load-balancing algorithm is round-robin, and the client makes another request in the same TCP session, the new transaction is still subject to the round-robin balancer; that is, it will likely hit a different server, closing the existing connection to the previous server.
But if you would like haproxy to stick to the same server in this case, you can enable the prefer-last-server option.
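For example, a backend along these lines (server names and addresses are placeholders) would tell haproxy to prefer reusing the connection to the server that handled the previous request:

```
backend be_app
    balance roundrobin
    # soft preference: reuse the last server's connection when possible
    option prefer-last-server
    server app1 192.0.2.11:8080
    server app2 192.0.2.12:8080
```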
when the client makes another request in the same TCP session, the new transaction is still subject to the round-robin balancer; that is, it will likely hit a different server, closing the existing connection to the previous server.
Oh, that would be terrible. Our client is not a web-browser, but rather is a mobile app that opens a single socket to our server, and then sends many HTTP requests on that single socket over the course of the application’s run time. Our server stores “state” for the socket connection, and assumes each socket is a “session”. We definitely wouldn’t want HAProxy taking our single socket connection and breaking it up into many different little connections, one connection per request. Yikes.
How does HAProxy even know where a request begins and ends? Sure, in a well-known protocol like HTTP it’s easy for HAProxy to find the boundaries of each request/response, but what if an application uses a different protocol than HTTP? How is HAProxy going to determine where each request begins and ends? TCP is a “stream”, and doesn’t have natural packet boundaries.
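For what it’s worth, my understanding is that for plain HTTP/1.1 the framing comes from the headers themselves (Content-Length, or chunked transfer encoding). A rough sketch of the idea in Python, ignoring chunked encoding and other edge cases:

```python
# Sketch: how an HTTP-aware proxy can find a message boundary on a TCP
# stream. Parse the headers, read Content-Length, and the body length
# tells you exactly where the next message starts. Real parsers also
# handle chunked encoding; this is a deliberate simplification.

def split_first_message(stream: bytes):
    """Return (first_message, remainder) from a byte stream of HTTP requests."""
    header_end = stream.index(b"\r\n\r\n") + 4
    headers = stream[:header_end].decode("ascii")
    body_len = 0
    for line in headers.split("\r\n"):
        if line.lower().startswith("content-length:"):
            body_len = int(line.split(":", 1)[1].strip())
    end = header_end + body_len
    return stream[:end], stream[end:]

# Two pipelined requests back-to-back on one "socket":
raw = (b"POST /a HTTP/1.1\r\nContent-Length: 5\r\n\r\nhello"
       b"GET /b HTTP/1.1\r\n\r\n")
first, rest = split_first_message(raw)
```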
I’m sure HAProxy will handle all these various scenarios, but I guess I’m confused where I can go and read about the common use-cases. The HAProxy documentation seems to be organized as a giant list of configuration commands, and you have to read all the configuration commands and then piece together how to put them together to create a common use-case. Instead, is there a recommended tutorial that starts with common use-cases and explains how to configure HAProxy to handle them?
That’s not HTTP compliant then (just like NTLM auth).
Then, like I said, all you have to do is enable the prefer-last-server option.
Everything we are talking about here is exclusively related to HTTP and haproxy being in HTTP mode.
There are two main modes in haproxy, HTTP mode and TCP mode. Of course keep-alive mode only applies to HTTP traffic. There is no concept of transactions in TCP mode and therefore no keep-alive, no round-robin on a transaction basis, etc. In TCP mode, a frontend socket is connected to a backend socket and haproxy does not know anything about requests/responses.
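To illustrate the difference, a fragment along these lines (addresses and names invented) shows both modes side by side:

```
# HTTP mode: haproxy parses each request/response, so keep-alive,
# per-transaction load balancing, ACLs etc. all apply.
frontend fe_http
    mode http
    bind *:80
    default_backend be_http

# TCP mode: one frontend socket is tunneled to one backend socket;
# haproxy never inspects the byte stream, so one client connection
# stays pinned to one server for its whole lifetime.
frontend fe_tcp
    mode tcp
    bind *:9000
    default_backend be_tcp
```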
I would suggest the Haproxy Starter Guide; it should give you a good high-level understanding of how haproxy operates. The hardcore details will be in the configuration guide, however.
In what way is a persistent connection not HTTP compliant? HTTP 1.1 connections are persistent by default. I would have assumed that a proxy would honor that by default. I did read “Haproxy starter guide” earlier, but this kind of information isn’t really there. I mean everything talks about persistence in terms of sticky “sessions”, but nothing really acknowledges that a socket can be open for more than one request/response message.
It sounds like the easiest way to get the proxy to honor persistent connections through to the backend is to just use TCP mode.
The idea of enabling “prefer-last-server” only leads to more questions. I.e., does the backend connection get opened and closed for each message? Preferring the last server sounds like it will route consecutive messages to the same backend server, but that each message will be in its own connection (which doesn’t do what we want). I guess the devil is in the details, so probably time to just spend a few days and digest the whole configuration guide until I can piece together the various use-cases I have in mind.
HTTP persistence just means that you CAN have multiple transactions per TCP connection -- so it’s really HTTP keep-alive. And as you say, that is what HTTP/1.1 does by default; that includes Haproxy and many other HTTP proxies, servers and clients. However it DOES NOT guarantee that your entire application session will be in one single connection.
Attaching application data (like authentication details or other session related data) to a specific connection is not HTTP compliant, because it will break when that connection is closed (either on the server side, or the client side), and this is exactly what Microsoft did with NTLM and why it requires special treatment.
If you go down this road, you will have problems with HTTP proxying every step of the way. You can’t use a CDN, you can’t use most HTTP proxies, etc. There are workarounds for this in haproxy, so your application won’t break. But you are not compliant with the HTTP standard, and your entire setup requires specific non-standard tweaks in order to not break your application.
For example, you would not be able to put this behind Cloudflare (this is just an example of what a quick google search turned up):
Yes indeed in our discussion the term persistence is ambiguous. “HTTP persistence” is HTTP keep-alive, allowing multiple transactions per connection. But when you see the term persistence in the documentation it is really not about keep-alive, but about (application) session persistence (backend server stickiness).
When you are looking for multiple transactions per connection, that is really what we call “(HTTP) keep-alive”.
And when I say transaction what I mean is a request and its corresponding response (just so we are clear on that term).
No, the entire point of this option is to combine it with (the default) keep-alive mode, and it does not close the connection. I’ve linked to the documentation about this option in my first reply, and the documentation will answer these questions. It’s a soft preference change though, intended to enhance keep-alive efficiency, not to work around layering violations such as NTLM … (see below).
No, that would be server stickiness, and could be referred to in the document with the term persistence (which again doesn’t actually refer to HTTP persistence = keep-alive, but backend server persistence, as in “this application session must be sticky to server 123, otherwise the application will break”).
Again, prefer-last-server is a “soft” preference change, meaning that haproxy can still break the connection for a variety of reasons. You can use option http-tunnel to disable the entire keep-alive logic in haproxy after the first transaction, which basically transforms it into a TCP tunnel afterwards. Or you can just use TCP mode.
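As a sketch (server name/address invented, and note this option applies to the haproxy versions where http-tunnel still exists), that would look something like:

```
backend be_app
    mode http
    # only the first transaction is parsed as HTTP; after that the
    # connection is treated as an opaque TCP tunnel to one server
    option http-tunnel
    server app1 192.0.2.11:8080
```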
The best configuration for you depends on what features you actually require from haproxy. If you just want to connect a TCP port on one side to a TCP port on the other side, then TCP mode is what you should use. If you require any HTTP-related features, like compression, ACLs, content switching, etc., you will require HTTP mode (and not http-tunnel either, as that one considers only the first transaction of the connection).
This is all very interesting-- I appreciate the discussion. There’s no doubt that we’re not using HTTP with typical semantics. In fact, the connection between our clients and servers can be more accurately viewed like the connection between a MySQL client and the MySQL database. It’s a classic client/server single-socket connection with a purpose-built protocol. In our case, we simply embed our real request messages (XML fragments) inside HTTP POST content bodies, and expect our real responses to come in HTTP response content bodies (again, XML fragments).
A CDN offers no value to our application, since all the communications are live dynamic data and can’t be cached. In fact, HTTP itself offers no value to our application, and is not our primary protocol. You might ask why are we even using HTTP at all? Well, it turns out that it allows passage through our customer’s fussy layer-7 firewalls (such as Zscaler) that require extensive DPI, and which only understand HTTP, and which are configured by paranoid IT admins (including the great firewall of China). Personally I hate HTTP and it’s a poor fit for our application, but apparently a non-trivial percentage of admins in the corporate IT world have decided that everything must be HTTP or you are blocked.
Anyways, I’m thinking our proxy service should just sit at layer 4, because I think that’s where it belongs in our case.
Yes, TCP mode would work fine, just like HTTP mode with “http-tunnel”. If you really don’t need any of the HTTP features, TCP mode certainly makes more sense.
Do consider that a forced HTTP-intercepting proxy on the customer side could still impact your application, because it may split up your HTTP requests into multiple connections, just as a reverse proxy would.