HAProxy limiting throughput


#1

To the good people of the HAProxy forum,
I seem to have a small issue with my HAProxy configuration. For whatever reason, HAProxy appears to be the bottleneck in my ingest estate. My setup is as follows: I have 3 frontend servers passing data to the proxy server, which then distributes it (roundrobin) to 6 backend servers. If I remove the proxy from the estate and have the 3 frontends feed the 6 backends directly, we get full throughput. Feed the data through the proxy and throughput drops by nearly half. My HAProxy config is shown below; any advice on optimisation and best practices will be gratefully received.

global
        log /dev/log    local0
        log /dev/log    local1 notice
        chroot /var/lib/haproxy
        stats socket /run/haproxy/admin.sock mode 660 level admin
        stats timeout 30s
        user haproxy
        group haproxy
        daemon

        # Default SSL material locations
        ca-base /etc/ssl/certs
        crt-base /etc/ssl/private

        # Default ciphers to use on SSL-enabled listening sockets.
        # For more information, see ciphers(1SSL).
        ssl-default-bind-ciphers kEECDH+aRSA+AES:kRSA+AES:+AES256:RC4-SHA:!kEDH:!LOW:!EXP:!MD5:!aNULL:!eNULL

defaults
        log     global
        mode    http
        option  httplog
        option  dontlognull
        timeout connect 9000
        timeout client  90000
        timeout server  90000
        errorfile 400 /etc/haproxy/errors/400.http
        errorfile 403 /etc/haproxy/errors/403.http
        errorfile 408 /etc/haproxy/errors/408.http
        errorfile 500 /etc/haproxy/errors/500.http
        errorfile 502 /etc/haproxy/errors/502.http
        errorfile 503 /etc/haproxy/errors/503.http
        errorfile 504 /etc/haproxy/errors/504.http

frontend frontendserv
        bind *:8990
        mode http
        default_backend serverback

backend serverback
        balance roundrobin
        mode http
        server *server name1* *IP:PORT* check
        server *server name2* *IP:PORT* check
        server *server name3* *IP:PORT* check
        server *server name4* *IP:PORT* check
        server *server name5* *IP:PORT* check
        server *server name6* *IP:PORT* check
                
listen stats
        bind :1936       # Listen on all IPs on port 1936
        mode http
        balance
        stats refresh 5s
        #This is the virtual URL to access the stats page
        stats uri /haproxy_stats

        #Authentication realm. This can be set to anything. Escape space characters with a backslash.
        stats realm HAProxy\ Statistics

Many thanks,

Tom


#2

What are those numbers, and how exactly are they measured? Also provide the output of haproxy -vv and uname -a, and take a look at both system and userspace CPU load while haproxy is under load.
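For reference, the checks asked for above could be gathered like this (mpstat is from the sysstat package; top -H shows per-thread usage — both are standard tools, not anything HAProxy-specific):

```shell
# build/version details and kernel info
haproxy -vv
uname -a

# per-core CPU load, split into user (%usr) and system (%sys) time,
# sampled every second while haproxy is under load
mpstat -P ALL 1

# or interactively, per-thread view of the haproxy process
top -H -p "$(pidof haproxy)"
```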


#3

Hi lukastribus,
I am measuring those numbers using SNMP polling of the backend servers. The output of haproxy -vv is as follows:

HA-Proxy version 1.7.11-1ppa1~trusty 2018/04/30
Copyright 2000-2018 Willy Tarreau <willy@haproxy.org>

Build options :
  TARGET  = linux2628
  CPU     = generic
  CC      = gcc
  CFLAGS  = -g -O2 -fPIE -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2
  OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1 USE_LUA=1 USE_PCRE=1 USE_PCRE_JIT=1 USE_NS=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.8
Running on zlib version : 1.2.8
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with OpenSSL version : OpenSSL 1.0.1f 6 Jan 2014
Running on OpenSSL version : OpenSSL 1.0.1f 6 Jan 2014
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.31 2012-07-06
Running on PCRE version : 8.31 2012-07-06
PCRE library supports JIT : no (libpcre build without JIT?)
Built with Lua version : Lua 5.3.1
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Built with network namespace support

Available polling systems :
      epoll : pref=300,  test result OK
       poll : pref=200,  test result OK
     select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available filters :
        [COMP] compression
        [TRACE] trace
        [SPOE] spoe

The CPU load of the server is around 20% utilisation. I am hoping to switch haproxy to TCP mode instead of HTTP, as we don't require the header features. Would this make a difference to throughput?
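For what it's worth, a minimal sketch of the config from #1 switched to TCP mode might look like the fragment below (section names and port taken from that config; note that mode tcp and option tcplog have to override the mode http / option httplog set in defaults, and the server placeholders are left as in the original):

```
frontend frontendserv
        bind *:8990
        mode tcp
        option tcplog
        default_backend serverback

backend serverback
        mode tcp
        balance roundrobin
        server *server name1* *IP:PORT* check
        # ... remaining 5 servers as in the original config
```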


#4

Hi,

How many concurrent connections are you opening for your tests?

Baptiste


#5

Hi Baptiste,

I attach some screenshots of my HAProxy stats page; I'm hoping this might shed some light on the situation.

[screenshot: HAProxy process stats]


#6

Explain the numbers, like I asked earlier:

Do you have 50 Mbit/s instead of 500 Mbit/s? 1 Gbit/s instead of 10 Gbit/s? 20 Gbit/s instead of 40 Gbit/s?

What exactly does your configuration look like? Is haproxy on a VM? Which Ethernet NICs are used, and do you use a single NIC for both frontend and backend traffic? What's the actual hardware?


#7

Apologies for not making the situation clearer. Below are some screenshots of the interface traffic (bps) for one of the backend servers. They show the difference in traffic between routing through HAProxy and feeding directly from the frontend server to the backend, circumventing HAProxy.

[screenshots: backend interface traffic, bps]

We see a definite drop in traffic. HAProxy is currently on an ESXi-hosted VM, and yes, there is only one virtual NIC:

[screenshot: eth0 interface]


#8

Well, with haproxy you are load-balancing between 6 backends, and without haproxy you are just using a single backend.

Do you have an actual, confirmed problem, or are you just concerned about the drop in the graph of a single backend server?


#9

OK, I’ll try to explain this a different way. The frontend servers can cache any data that can’t be passed to the backend servers, acting like a queuing system. When we bypass the proxy, all 3 frontend servers pass data to the 6 backend servers in real time and all data is transmitted. When we add HAProxy back in, we find messages/data being cached and queued on the frontend servers, giving the impression that the proxy can’t cope with the influx of data sent to it. Regardless of the balancing across 6 servers, the proxy still can’t pass all the data in time and we get a backlog.