Newbie on HAProxy - RDP load balancing and stickiness vs maxconn

Hello community,

I am facing an issue with HAPROXY and I would appreciate your help.

I have an Ubuntu server running HAProxy. I use it to load balance RDP connections to a farm of terminal servers. Due to internal restrictions, each terminal server should host a maximum of 10 RDP connections. It is also vital that a user who disconnects resumes their existing session rather than starting a new one on another server.

The configuration uses roundrobin balancing (I’ve tried leastconn as well) and stick on src to provide session continuity.
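For reference, the relevant part of my backend looks roughly like this (the backend name, stick-table size, and expiry value here are illustrative, and the IPs are placeholders):

```haproxy
backend rdp_farm
    mode tcp
    balance roundrobin
    # Track clients by source IP; 'expire' controls how long a client
    # stays pinned to the same server after its last connection.
    stick-table type ip size 10k expire 8h
    stick on src
    server Termserv1 192.168.1.12:3389 maxconn 10 check
    server Termserv2 192.168.1.13:3389 maxconn 10 check
```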

The issue is that the maxconn requirement collides with the stickiness. I will explain with an example of users behind NAT (a different branch):
If I use stick on src with a stick-table that has a minimal expiry time, connections from behind NAT can jump to other terminal servers once a server has reached its maxconn limit, but at the cost of opening duplicate sessions on different servers when users disconnect and reconnect.

If I use a higher expiry time for stickiness, users behind NAT experience connection issues once the maxconn limit of a server has been reached, and new sessions can’t be established. HAProxy will continuously try to send the connection to the server bound by stickiness (until the stickiness expires or the servers that have reached their maxconn limit are marked as disabled/down).

I have read about various solutions for this situation, but to no avail.
I’ve just started using this product and wouldn’t even know how to implement some of the solutions presented.

Best regards.

Well, this is quite an uncommon yet interesting setup.

Thank you for describing your use case so clearly, it helps a lot :slight_smile:

To force the persistence to break when the server reaches its maxconn in your case, I would recommend the maxqueue server keyword. It defaults to 0, which means an unlimited queue (which explains why, in your case, the persistence never breaks and connections hang). Here you could set it to a value > 0 and try to find a proper balance between queueing some requests in the hope that the server quickly frees some slots, and breaking the persistence so upcoming requests are sent to another server.

Hope this helps

Hello @adarragon,

First, thank you so much for your reply and time.

I’m not sure I quite understood what you’re referring to.
As I mentioned, I am new to this and I learn as I go.
I’ve looked at the explanation in the link, and got a bit lost.

Do you have a short example of how to use it and the place of it in the configuration file?

Best regards

No problem. In your config you probably defined a few server lines in your backend section, which start with the server keyword, like this:

backend myback
   server     srv1   srvname:80

After the name:port on your server line you can specify optional server options; maxqueue is one of them. In your case, to force the persistence to break after, say, 10 connections are queued because the server is unable or too slow to process incoming requests, you could set maxqueue to 10 (instead of the implicit default of 0, which means unlimited):

backend myback
   server     srv1   srvname:80 maxqueue 10

Hello again @adarragon,

Yes, my configuration file contains a few lines like this (IPs and server names changed for security reasons :slight_smile: ):

server Termserv1 192.168.1.12:3389 maxconn 10 check inter 5s rise 2 fall 1

The inter 5s rise 2 fall 1 part is one of my various attempts to address the issue.

Should I add it like so:

server Termserv1 192.168.1.12:3389 maxconn 10 maxqueue 10 check inter 5s rise 2 fall 1

Best regards.

Yes, this is correct. You can try increasing or decreasing maxqueue (but keep it > 0) to see how it behaves in your case and whether it helps break persistence quickly enough when the server becomes saturated :slight_smile:
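Putting it all together with your stickiness setup, the backend could look something like this (the backend name, stick-table parameters, and second server line are illustrative, based on the server line you posted):

```haproxy
backend rdp_farm
    mode tcp
    balance roundrobin
    stick-table type ip size 10k expire 8h
    stick on src
    # maxqueue 10 breaks persistence once 10 connections are queued
    # on a saturated server, instead of queueing indefinitely
    server Termserv1 192.168.1.12:3389 maxconn 10 maxqueue 10 check inter 5s rise 2 fall 1
    server Termserv2 192.168.1.13:3389 maxconn 10 maxqueue 10 check inter 5s rise 2 fall 1
```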