Too many connections to backend


We want to achieve several thousand concurrent connections to a web service running on Apache Tomcat.
The requests are very short, but there are a great many of them.
We are still failing, because connections to the backend are always (or at least very often) closed and must be reopened.
Therefore connections on the backend servers are opened and closed too often; the servers run out of ports and cannot handle the volume of requests.
We hoped the haproxy option `http-reuse always` would solve the problem. But it does not, because a connection to a backend server is closed
when the client that originally opened it disconnects.
We already tuned the OS according to
We run haproxy 1.8.13 and Apache Tomcat 7.0.90, both on Linux (Ubuntu 16.04).
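The sysctl tuning we applied is along these lines (example values for illustration, not necessarily exactly ours):

```
# /etc/sysctl.d/99-haproxy.conf -- example values only
net.ipv4.ip_local_port_range = 1024 65535   # widen the ephemeral port range
net.ipv4.tcp_tw_reuse = 1                   # reuse TIME_WAIT sockets for outgoing connections
net.core.somaxconn = 65535                  # larger accept backlog
fs.file-max = 2000000                       # enough file descriptors for ~1M connections
```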

What else can we do?

Here is our (simplified) haproxy.cfg:

global
    log /dev/log local0 notice

    tune.ssl.cachesize          1000000
    tune.ssl.default-dh-param   2048
    ssl-default-bind-options    no-sslv3 no-tls-tickets # force-tlsv12

    maxconn 1000000


# Enable the statistics page
listen haproxy-stats-process-1
    bind *:9001
    stats enable
    mode http
    stats realm Haproxy\ Statistics
    stats uri /
    timeout client 60m
    timeout connect 60m
    timeout server 60m

defaults
    mode http

    option httplog
    option dontlognull
    option logasap
    option log-separate-errors
    option log-health-checks
    option dontlog-normal

    option prefer-last-server
    option http-keep-alive
    timeout http-keep-alive 120000
    no option httpclose
    no option http-server-close
    no option forceclose

    http-reuse always
    timeout check 15000
    default-server inter 1s fall 2 rise 2

    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

frontend default_http
    log    global
    mode   http

    bind ipv4@*:80

    default_backend  pool_default_http

    maxconn 500000 # per process!!
    timeout client 30m
    timeout connect 60m
    timeout server 60m

backend pool_default_http
    log     global
    mode    http
    balance static-rr
    hash-type consistent

    option httpchk GET / HTTP/1.1\r\nHost:\ www
    http-check expect status 200
    default-server inter 1s fall 2 rise 2
    http-reuse always
    timeout check 15000

    server test_1  maxconn 20 weight 1 check
    server test_2  maxconn 20 weight 1 check
    server test_3  maxconn 20 weight 1 check
    server test_4  maxconn 20 weight 1 check

    timeout client 30m
    timeout connect 1m
    timeout server 60m


How do you know that you are running out of source ports? Does haproxy say that in the logs?
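You can check that directly on the haproxy box with something along these lines (assuming Linux; 8080 stands in for your backend port):

```shell
# Sockets stuck in TIME_WAIT toward the backends (8080 is a placeholder port)
ss -tan state time-wait '( dport = :8080 )' | wc -l

# Size of the ephemeral port range: the hard ceiling on concurrent
# outgoing connections per (source IP, destination IP, destination port)
read lo hi < /proc/sys/net/ipv4/ip_local_port_range
echo $((hi - lo + 1))
```

If the TIME_WAIT count approaches the ephemeral range size, you really are exhausting source ports; otherwise the bottleneck is elsewhere.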

I don’t see how you would reach 1000 concurrent connections given that you have 4 backend servers with 20 maxconn each; that’s 4 × 20 = 80 concurrent connections.

Based on the configuration I’d say that you are not running out of source ports, but that haproxy is queuing because it reached the 20 concurrent connections per server.

You’d have to bump per-server maxconn to at least 250 to get to a theoretical 1000 concurrent connections (when the requests are perfectly balanced, but you may want to go further).
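As a quick sanity check of the arithmetic:

```shell
servers=4
echo $(( servers * 20 ))    # current ceiling: 4 servers x 20 maxconn = 80
echo $(( 1000 / servers ))  # per-server maxconn needed for 1000 total = 250
```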


HAProxy should not queue but reuse connections, so that we have a huge number of connections between clients and haproxy and only a few between haproxy and the backend servers. Requests last less than 1 ms on the backend, so this should be possible.
How can we manage this?


Haproxy does not support connection pooling (yet), which would be required for what you expect. Therefore, one connection on the frontend equals one connection to the backend.

That is why the aggregated maxconn values of the servers need to be at a level similar to the actual concurrent request numbers you expect on the frontend.
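Concretely, a sketch of the backend with maxconn sized for a 1000-connection target (the 250 figure matches four servers; addresses omitted as in your original config):

```
backend pool_default_http
    balance static-rr
    # 4 x 250 = 1000 concurrent backend connections
    server test_1 maxconn 250 weight 1 check
    server test_2 maxconn 250 weight 1 check
    server test_3 maxconn 250 weight 1 check
    server test_4 maxconn 250 weight 1 check
```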