HAProxy community

30% overhead using HAProxy?



I’m using HAProxy to front a simple node application, using the following acl/backend configuration:

acl acl_myapp hdr(host) -i myapp.service.consul

use_backend backend_myapp if acl_myapp

backend backend_myapp
    balance roundrobin
    option  httpchk GET /health
    server maxconn 128  weight 100  check

I am running an ApacheBench test against the application to test the max requests/sec I can achieve with my app. I am running ab with 300 concurrent requests, and a total of 6000 requests.

When I run ab against my application directly, on port 31005, for example:

ab -n 6000 -c 300 -k ""

I see numbers like

Requests per second:    1053.19 [#/sec] (mean)
Requests per second:    1104.31 [#/sec] (mean)
Requests per second:    1069.48 [#/sec] (mean)

However, when I go through HAProxy by setting the Host header, i.e.

ab -H "Host: myapp.service.consul" -n 6000 -c 300 -k ""

I see numbers like

Requests per second:    699.89 [#/sec] (mean)
Requests per second:    752.43 [#/sec] (mean)
Requests per second:    747.10 [#/sec] (mean)

That's around 300 req/sec worse than going directly to the application.

When I look at the statistics with maxconn set to 128, I see:

Queue time:  269 ms
Response time:  198 ms
Total time:  467 ms

If I change it to something really high, like 1000, I see:

Queue time:  0 ms
Response time:  418 ms
Total time:  418 ms

And if I change maxconn to something low, like 30, I see:

Queue time:  311 ms
Response time:  41 ms
Total time:  386 ms
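A rough back-of-envelope check (my own reasoning, assuming ab keeps all 300 connections busy for the whole run): by Little's law, throughput ≈ concurrency / mean total time, so the maxconn-1000 case predicts about 300 / 0.418 ≈ 718 req/s, right in line with the ~700-750 req/s measured above.

```python
def littles_law_throughput(concurrency, mean_total_time_s):
    # Little's law: L = lambda * W, so lambda = L / W.
    # Assumes the load generator keeps `concurrency` requests in flight.
    return concurrency / mean_total_time_s

# maxconn 1000 case above: 300 concurrent requests, ~418 ms mean total time
print(round(littles_law_throughput(300, 0.418)))
```

This suggests the per-request latency through the proxy, not the queue alone, is what caps throughput here.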

Any ideas why I’m seeing such a difference when going through HAProxy?


Of course everything will slow down if you hit maxconn; that's the point of queuing. Configure maxconn based on your backend's capabilities, but if you tell haproxy "my backend cannot handle more than 128 connections", then haproxy will respect that and queue the connections so that your backend is not overloaded.
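For reference, a minimal sketch of how per-server maxconn and queuing fit together (the server name and address here are placeholders I've made up, not from the thread):

```
backend backend_myapp
    balance roundrobin
    option  httpchk GET /health
    timeout queue 30s   # how long a request may wait in the queue before a 503
    # maxconn caps concurrent connections to this one server;
    # excess requests wait in haproxy's queue instead
    server app1 192.0.2.10:31005 maxconn 128 weight 100 check
```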

From your description it isn't clear whether, with a higher maxconn, you reach the performance capabilities of your backend; your full configuration and keep-alive modes are also unknown.

Please provide the full configuration and the full output of haproxy -vv if you still don’t reach the performance you are looking for.


Hi @lukastribus

I was never able to achieve the same throughput (~1000 req/sec) via HAProxy as when going to my application directly, even with large maxconn values.

However, I was able to achieve the same throughput via another load balancer, so I know it’s possible.

To answer your question, here is the full config from my setup:

    maxconn 16384
    log local0
    log local1 notice

    mode http
    log global
    unique-id-format %{+X}o\ %ci:%cp_%fi:%fp_%Ts_%rt:%pid
    unique-id-header X-Unique-ID
    timeout connect   5s
    timeout client   60s
    timeout server   60s
    timeout tunnel 3600s
    option dontlognull
    option http-server-close
    option redispatch
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

listen stats
    mode http
    stats enable
    stats hide-version
    stats realm Haproxy\ Statistics
    stats uri /

frontend http-in
    option httplog
    option forwardfor
    log-format %ci:%cp\ [%t]\ %ft\ %b/%s\ %Tq/%Tw/%Tc/%Tr/%Tt\ %ST\ %B\ %CC\ %CS\ %tsc\ %ac/%fc/%bc/%sc/%rc\ %sq/%bq\ %hr\ %hs\ %{+Q}r\ %ID

acl acl_myapp hdr(host) -i myapp.service.consul

use_backend backend_myapp if acl_myapp

backend backend_myapp
    balance roundrobin
    option  httpchk GET /health
    server maxconn 128  weight 100  check

Thanks in advance for the help!


Still need the full output of “haproxy -vv”.


Ah yes, sorry -

HA-Proxy version 1.6.3 2015/12/25
Copyright 2000-2015 Willy Tarreau <willy@haproxy.org>

Build options :
  TARGET  = linux2628
  CPU     = generic
  CC      = gcc
  CFLAGS  = -g -O2 -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.8
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with OpenSSL version : OpenSSL 1.0.2g-fips  1 Mar 2016
Running on OpenSSL version : OpenSSL 1.0.2g  1 Mar 2016
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.38 2015-11-23
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with Lua version : Lua 5.3.1
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND

Available polling systems :
      epoll : pref=300,  test result OK
       poll : pref=200,  test result OK
     select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.


Ok, in the defaults section remove:

option http-server-close

and add:

option http-keep-alive
http-reuse safe (or “aggressive” or “always”)
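Applied to the defaults section posted above, the change would look roughly like this (a sketch of the suggestion, not a verified config):

```
defaults
    mode http
    # removed: option http-server-close  (closed the server-side
    # connection after every request, forcing a new TCP handshake each time)
    option http-keep-alive   # keep both client- and server-side connections open
    http-reuse safe          # let idle server-side connections be reused across clients
```

With http-server-close, every benchmark request paid for a fresh connection to the backend, which is consistent with the ~300 req/sec gap observed.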



Thank you @lukastribus - that made a massive improvement. Seeing numbers very close to what I’m seeing when going to the app directly. Cheers!