HAProxy settings priority


#1

Hello!

I have such backend section in my config:

backend app-servers
        mode tcp
        balance roundrobin
        stick-table type ip size 900k expire 30m
        stick on src
        option tcp-check
        maxconn 1300
        server app-01 172.1.2.3:443 check port 443
        server app-02 172.1.2.4:443 check port 443
        server app-03 172.1.2.5:443 check port 443
        server app-04 172.1.2.6:443 check port 443

Please help me understand this “what if” scenario:
A client’s IP address is in the stick table and the client wants to establish another session from the same IP address, but their “working” server (e.g. app-01) is full (number of connections = 1300). Will this session:

  1. Be dropped?
  2. Hang?
  3. Be routed by HAProxy to another server (e.g. app-02 or app-03) with fewer current connections?

And the second question:

Is this config:

backend app-servers
            mode tcp
            balance roundrobin
            stick-table type ip size 900k expire 30m
            stick on src
            option tcp-check
            maxconn 1300
            server app-01 172.1.2.3:443 check port 443
            server app-02 172.1.2.4:443 check port 443
            server app-03 172.1.2.5:443 check port 443
            server app-04 172.1.2.6:443 check port 443

equal to this?:

backend app-servers
            mode tcp
            balance roundrobin
            stick-table type ip size 900k expire 30m
            stick on src
            option tcp-check
            server app-01 172.1.2.3:443 check port 443 maxconn 1300
            server app-02 172.1.2.4:443 check port 443 maxconn 1300
            server app-03 172.1.2.5:443 check port 443 maxconn 1300
            server app-04 172.1.2.6:443 check port 443 maxconn 1300

HAProxy version is 1.5.18.


#2

Hi,

In order to answer your questions, we first need to understand where and how the maxconn property works.

Basically, the maxconn property can be set in 3 different locations in the HAProxy configuration file (/etc/haproxy.cfg):
1. maxconn in global section
2. maxconn in a listen section
3. server maxconn

Now, coming to the how part:

  1. maxconn in the global section sets the maximum number of concurrent connections allowed per process (by default HAProxy runs with a single process; multiple processes can be started by configuring the nbproc property). Once this limit is reached, HAProxy stops accepting new connections, and these requests are queued in the kernel's socket queue, waiting for connection slots to become available.

  2. maxconn in a listen (or frontend) section sets the maximum number of concurrent connections allowed per listener. The default value for this maxconn is smaller than the global maxconn. In general, the listener maxconn comes into the picture when multiple services are handled by a single HAProxy instance. In such cases, the listener maxconn has to be configured so that one service cannot consume all the connections (defined by the global maxconn) and starve the other services.

  3. server maxconn sets the maximum number of concurrent connections allowed on a server. If this limit is reached, the fate of the request depends on persistence. If the request has persistence information (e.g. a cookie) associated with it, it is queued in that server's queue to wait for a connection slot on the server. If the request has no persistence information, it is forwarded to a server with fewer connections. Please note that a request stays in the queue only for the period specified by the timeout queue property; if it waits longer than that, it is dropped and a 503 page is returned to the user.
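To make the three locations concrete, here is a minimal sketch (the names, IPs, ports and limits are illustrative, not taken from your config):

global
        maxconn 4000                # per-process limit

defaults
        mode tcp
        timeout connect 5s
        timeout client  30s
        timeout server  30s
        timeout queue   30s         # how long a request may wait in a server queue

listen web
        bind *:443
        maxconn 2000                # per-listener limit, kept below the global maxconn
        server web-01 192.0.2.10:443 check maxconn 500    # per-server limit
        server web-02 192.0.2.11:443 check maxconn 500

The per-listener limit protects other services sharing the process, while the per-server limits decide when queuing or redispatching happens.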

Now, to answer your first question:

Considering the ambiguity in the question, I have tried answering it based on the assumptions below:
CASE 1:

Assumption:
global maxconn = 1300
server app-01, maxconn = 1300
server app-02, maxconn = 1300

Scenario:
The number of connections on server app-01 reaches 1300, and a user makes a new request from an IP address already present in the stick table.

HAProxy would stop accepting new connections and the request would be queued in the kernel's socket queue.

CASE 2:

Assumption:
global maxconn = 4000
server app-01, maxconn = 1300
server app-02, maxconn = 1300

Scenario:
The number of connections on server app-01 reaches 1300, and a user makes a new request from an IP address already present in the stick table.

The request would be accepted by HAProxy but would be queued in the server's queue for the period specified by the timeout queue property.

Coming to your second question,

No, the two configurations are not the same.

In the first configuration,

the global maxconn is set to 1300 and there is no maxconn limit set on the individual servers. Therefore, in this case HAProxy will accept a maximum of 1300 concurrent connections, after which new requests are queued in the kernel's socket queue to wait for available connections.

Whereas in second configuration,

there is no value set for the global maxconn property, so it defaults to the value of DEFAULT_MAXCONN set at build time. The per-server maxconn limit, however, is set to 1300; if this limit is reached on a server, then, depending on persistence, the request is either forwarded to a server with fewer connections or queued in the server's queue until a slot becomes available.

Hope this is helpful!


#3

First of all, thank you for the very nice, detailed answer - now I understand how the maxconn parameter works.

I have maxconn 10000 in my defaults section, so CASE 2 applies to me :grinning:

Can I change this behavior by editing the config as follows:

When the number of connections on server app-01 reaches 1300 and a user makes a new request from an IP present in the stick table, HAProxy redirects this request to another server in spite of the stick table?


#4

Hi AleksASB,

I am glad the answer was helpful to you ! :slightly_smiling_face:

To answer your question:

Yes, there are a few ways to implement the scenario you describe. However, I am skeptical of the need for such a requirement, the reason being server persistence. When implementing server persistence, it is useful and logical to queue requests in a server-specific queue in order to minimize session-state migration between peer servers. Not using queues would break server persistence in scenarios where the maxconn limit is reached only intermittently and for a short time, owing to causes such as temporary network latency or database latency.
In addition, monitoring queue-related statistics such as queue and avg_queue helps in detecting any deterioration in server performance and/or any surge in traffic. This information can then be used to forward subsequent requests to a better-performing server farm using ACLs.
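As a sketch of that last point, a frontend could divert new connections once the average queue of the primary farm grows (the spare backend name app-servers-spare and the threshold of 10 are hypothetical, chosen only for illustration):

frontend fe-app
        bind *:443
        mode tcp
        # divert traffic when the primary farm queues more than 10 requests on average
        acl primary_busy avg_queue(app-servers) gt 10
        use_backend app-servers-spare if primary_busy
        default_backend app-servers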

Nevertheless, you may still fulfill your requirement using one of the approaches below:

  1. Using first as the load-balancing algorithm. With this algorithm, requests are distributed based on the server identifier “id”, from lowest to highest. Once the maxconn limit on a server is reached, the next request is forwarded to the first server with an available connection slot, irrespective of the entry in the stick table.
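A minimal sketch of this approach, based on your backend (note that balance first replaces roundrobin, and that per-server maxconn is what makes first spill over to the next server; the stick-table lines are omitted here since this approach does not rely on stickiness):

backend app-servers
        mode tcp
        balance first
        option tcp-check
        server app-01 172.1.2.3:443 check port 443 maxconn 1300
        server app-02 172.1.2.4:443 check port 443 maxconn 1300
        server app-03 172.1.2.5:443 check port 443 maxconn 1300
        server app-04 172.1.2.6:443 check port 443 maxconn 1300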

  2. Using the maxqueue property to limit the size of the server queue to the bare-minimum value of 1 request. This means that once the maxconn limit for a server is reached, only 1 additional request is queued on it, and all further requests are forwarded to a server with fewer connections. Please note that you cannot set maxqueue to 0 for this purpose: 0 is the default value, and it means the queue size is unlimited.

backend app-servers
        mode tcp
        balance roundrobin
        stick-table type ip size 900k expire 30m
        stick on src
        option tcp-check
        server app-01 172.1.2.3:443 check port 443 maxconn 1300 maxqueue 1
        server app-02 172.1.2.4:443 check port 443 maxconn 1300 maxqueue 1
        server app-03 172.1.2.5:443 check port 443 maxconn 1300 maxqueue 1

  3. Using the srv_conn sample fetch to set a stick-table matching condition in the backend section. The limitation here is that you have to write an individual ACL line for every server in the backend.

backend app-servers
        mode tcp
        balance roundrobin
        stick-table type ip size 900k expire 30m
        # a server is "full" once it holds maxconn (1300) connections, hence ge, not gt
        acl sr_true srv_conn(app-servers/app-01) ge 1300
        acl sr_true srv_conn(app-servers/app-02) ge 1300
        acl sr_true srv_conn(app-servers/app-03) ge 1300
        # match the stick table only when the sticky server is not full;
        # store-request keeps the table populated (stick on src = match + store-request)
        stick match src if !sr_true
        stick store-request src
        option tcp-check
        server app-01 172.1.2.3:443 check port 443 maxconn 1300
        server app-02 172.1.2.4:443 check port 443 maxconn 1300
        server app-03 172.1.2.5:443 check port 443 maxconn 1300

Hope this is helpful!