Vertical scaling of HAProxy instances

Hello,

We are trying to vertically scale our HAProxy instances, but we are not getting the results one would expect from upgrading the hardware (assuming the software can take advantage of the extra resources).

We upgraded from machines with 16 threads to machines with 32 threads, but we are only observing about a 50% increase in the connections, requests per second (rps), and SSL rate we can sustain, and we can't seem to reach that rate before we overload the server.
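
For context, one way to confirm that the load is actually spread across all 32 threads is to query the runtime API over the admin stats socket (socket path taken from the config below, assuming socat is available):

    # per-thread activity counters
    echo "show activity" | socat stdio /run/haproxy-admin.sock

    # overall thread count, idle percentage, connection and SSL rates
    echo "show info" | socat stdio /run/haproxy-admin.sock | grep -E 'Nbthread|Idle_pct|CurrConns|ConnRate|SslRate'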

Custom kernel parameters
net.ipv4.ip_local_port_range = "12768    60999"
net.nf_conntrack_max = 5000000
fs.nr_open = 5000000
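
For anyone reproducing this, the parameters above could be applied persistently with something like the following (the file name is just illustrative):

    # /etc/sysctl.d/99-haproxy.conf (illustrative name)
    net.ipv4.ip_local_port_range = 12768 60999
    net.nf_conntrack_max = 5000000
    fs.nr_open = 5000000

    # then reload all sysctl settings (as root)
    sysctl --system
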
Output from `haproxy -vv`
HAProxy version 2.6.6-274d1a4 2022/09/22 - https://haproxy.org/
Status: long-term supported branch - will stop receiving fixes around Q2 2027.
Known bugs: http://www.haproxy.org/bugs/bugs-2.6.6.html
Running on: Linux 5.15.0-53-generic #59-Ubuntu SMP Mon Oct 17 18:53:30 UTC 2022 x86_64
Build options :
  TARGET  = linux-glibc
  CPU     = generic
  CC      = cc
  CFLAGS  = -O2 -g -Wall -Wextra -Wundef -Wdeclaration-after-statement -Wfatal-errors -Wtype-limits -Wshift-negative-value -Wshift-overflow=2 -Wduplicated-cond -Wnull-dereference -fwrapv -Wno-address-of-packed-member -Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-clobbered -Wno-missing-field-initializers -Wno-cast-function-type -Wno-string-plus-int -Wno-atomic-alignment
  OPTIONS = USE_PCRE=1 USE_OPENSSL=1 USE_ZLIB=1 USE_SYSTEMD=1 USE_PROMEX=1
  DEBUG   = -DDEBUG_STRICT -DDEBUG_MEMORY_POOLS

Feature list : +EPOLL -KQUEUE +NETFILTER +PCRE -PCRE_JIT -PCRE2 -PCRE2_JIT +POLL +THREAD +BACKTRACE -STATIC_PCRE -STATIC_PCRE2 +TPROXY +LINUX_TPROXY +LINUX_SPLICE +LIBCRYPT +CRYPT_H -ENGINE +GETADDRINFO +OPENSSL -LUA +ACCEPT4 -CLOSEFROM +ZLIB -SLZ +CPU_AFFINITY +TFO +NS +DL +RT -DEVICEATLAS -51DEGREES -WURFL +SYSTEMD -OBSOLETE_LINKER +PRCTL -PROCCTL +THREAD_DUMP -EVPORTS -OT -QUIC +PROMEX -MEMORY_PROFILING

Default settings :
  bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_THREADS=64, default=32).
Built with OpenSSL version : OpenSSL 3.0.7 1 Nov 2022
Running on OpenSSL version : OpenSSL 3.0.7 1 Nov 2022
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
OpenSSL providers loaded : default
Built with the Prometheus exporter as a service
Built with network namespace support.
Support for malloc_trim() is enabled.
Built with zlib version : 1.2.11
Running on zlib version : 1.2.11
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Built with PCRE version : 8.39 2016-06-14
Running on PCRE version : 8.39 2016-06-14
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Encrypted password support via crypt(3): yes
Built with gcc compiler version 11.3.0

Available polling systems :
      epoll : pref=300,  test result OK
       poll : pref=200,  test result OK
     select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
         h2 : mode=HTTP  side=FE|BE  mux=H2    flags=HTX|HOL_RISK|NO_UPG
       fcgi : mode=HTTP  side=BE     mux=FCGI  flags=HTX|HOL_RISK|NO_UPG
  <default> : mode=HTTP  side=FE|BE  mux=H1    flags=HTX
         h1 : mode=HTTP  side=FE|BE  mux=H1    flags=HTX|NO_UPG
  <default> : mode=TCP   side=FE|BE  mux=PASS  flags=
       none : mode=TCP   side=FE|BE  mux=PASS  flags=NO_UPG

Available services : prometheus-exporter
Available filters :
	[CACHE] cache
	[COMP] compression
	[FCGI] fcgi-app
	[SPOE] spoe
	[TRACE] trace
HAProxy config
global
    log /dev/log len 65535 local0 warning
    chroot /var/lib/haproxy
    stats socket /run/haproxy-admin.sock mode 660 level admin
    user haproxy
    group haproxy
    daemon
    maxconn 2000000
    maxconnrate 5000
    maxsslrate 5000

defaults
    log     global
    option  dontlognull
    timeout connect 10s
    timeout client  120s
    timeout server  120s

frontend stats
    mode http
    bind *:8404
    http-request use-service prometheus-exporter if { path /metrics }
    stats enable
    stats uri /stats
    stats refresh 10s

frontend k8s-api
    bind *:6443
    mode tcp
    option tcplog
    timeout client 300s
    default_backend k8s-api

backend k8s-api
    mode tcp
    option tcp-check
    timeout server 300s
    balance leastconn
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 500 maxqueue 256 weight 100
    server master01 x.x.x.x:6443 check
    server master02 x.x.x.x:6443 check
    server master03 x.x.x.x:6443 check
    retries 0

frontend k8s-server
    bind *:80
    mode http
    http-request add-header X-Forwarded-Proto http
    http-request add-header X-Forwarded-Port 80
    default_backend k8s-server

backend k8s-server
    mode http
    balance leastconn
    option forwardfor
    default-server inter 10s downinter 5s rise 2 fall 2 check
    server worker01a x.x.x.x:31551 maxconn 200000
    server worker02a x.x.x.x:31551 maxconn 200000
    server worker03a x.x.x.x:31551 maxconn 200000
    server worker04a x.x.x.x:31551 maxconn 200000
    server worker05a x.x.x.x:31551 maxconn 200000
    server worker06a x.x.x.x:31551 maxconn 200000
    server worker07a x.x.x.x:31551 maxconn 200000
    server worker08a x.x.x.x:31551 maxconn 200000
    server worker09a x.x.x.x:31551 maxconn 200000
    server worker10a x.x.x.x:31551 maxconn 200000
    server worker11a x.x.x.x:31551 maxconn 200000
    server worker12a x.x.x.x:31551 maxconn 200000
    server worker13a x.x.x.x:31551 maxconn 200000
    server worker14a x.x.x.x:31551 maxconn 200000
    server worker15a x.x.x.x:31551 maxconn 200000
    server worker16a x.x.x.x:31551 maxconn 200000
    server worker17a x.x.x.x:31551 maxconn 200000
    server worker18a x.x.x.x:31551 maxconn 200000
    server worker19a x.x.x.x:31551 maxconn 200000
    server worker20a x.x.x.x:31551 maxconn 200000
    server worker01an x.x.x.x:31551 maxconn 200000
    server worker02an x.x.x.x:31551 maxconn 200000
    server worker03an x.x.x.x:31551 maxconn 200000
    retries 0

frontend k8s-server-https
    bind *:443 ssl crt /etc/haproxy/certs/
    mode http
    http-request add-header X-Forwarded-Proto https
    http-request add-header X-Forwarded-Port 443
    http-request del-header X-SERVER-SNI
    http-request set-header X-SERVER-SNI %[ssl_fc_sni] if { ssl_fc_sni -m found }
    http-request set-var(txn.fc_sni) hdr(X-SERVER-SNI) if { hdr(X-SERVER-SNI) -m found }
    http-request del-header X-SERVER-SNI
    default_backend k8s-server-https

backend k8s-server-https
    mode http
    balance leastconn
    option forwardfor
    default-server inter 10s downinter 5s rise 2 fall 2  check no-check-ssl
    server worker01a x.x.x.x:31445 ssl ca-file /etc/haproxy/ca/ca.crt sni var(txn.fc_sni) maxconn 200000
    server worker02a x.x.x.x:31445 ssl ca-file /etc/haproxy/ca/ca.crt sni var(txn.fc_sni) maxconn 200000
    server worker03a x.x.x.x:31445 ssl ca-file /etc/haproxy/ca/ca.crt sni var(txn.fc_sni) maxconn 200000
    server worker04a x.x.x.x:31445 ssl ca-file /etc/haproxy/ca/ca.crt sni var(txn.fc_sni) maxconn 200000
    server worker05a x.x.x.x:31445 ssl ca-file /etc/haproxy/ca/ca.crt sni var(txn.fc_sni) maxconn 200000
    server worker06a x.x.x.x:31445 ssl ca-file /etc/haproxy/ca/ca.crt sni var(txn.fc_sni) maxconn 200000
    server worker07a x.x.x.x:31445 ssl ca-file /etc/haproxy/ca/ca.crt sni var(txn.fc_sni) maxconn 200000
    server worker08a x.x.x.x:31445 ssl ca-file /etc/haproxy/ca/ca.crt sni var(txn.fc_sni) maxconn 200000
    server worker09a x.x.x.x:31445 ssl ca-file /etc/haproxy/ca/ca.crt sni var(txn.fc_sni) maxconn 200000
    server worker10a x.x.x.x:31445 ssl ca-file /etc/haproxy/ca/ca.crt sni var(txn.fc_sni) maxconn 200000
    server worker11a x.x.x.x:31445 ssl ca-file /etc/haproxy/ca/ca.crt sni var(txn.fc_sni) maxconn 200000
    server worker12a x.x.x.x:31445 ssl ca-file /etc/haproxy/ca/ca.crt sni var(txn.fc_sni) maxconn 200000
    server worker13a x.x.x.x:31445 ssl ca-file /etc/haproxy/ca/ca.crt sni var(txn.fc_sni) maxconn 200000
    server worker14a x.x.x.x:31445 ssl ca-file /etc/haproxy/ca/ca.crt sni var(txn.fc_sni) maxconn 200000
    server worker15a x.x.x.x:31445 ssl ca-file /etc/haproxy/ca/ca.crt sni var(txn.fc_sni) maxconn 200000
    server worker16a x.x.x.x:31445 ssl ca-file /etc/haproxy/ca/ca.crt sni var(txn.fc_sni) maxconn 200000
    server worker17a x.x.x.x:31445 ssl ca-file /etc/haproxy/ca/ca.crt sni var(txn.fc_sni) maxconn 200000
    server worker18a x.x.x.x:31445 ssl ca-file /etc/haproxy/ca/ca.crt sni var(txn.fc_sni) maxconn 200000
    server worker19a x.x.x.x:31445 ssl ca-file /etc/haproxy/ca/ca.crt sni var(txn.fc_sni) maxconn 200000
    server worker20a x.x.x.x:31445 ssl ca-file /etc/haproxy/ca/ca.crt sni var(txn.fc_sni) maxconn 200000
    server worker01an x.x.x.x:31445 ssl ca-file /etc/haproxy/ca/ca.crt sni var(txn.fc_sni) maxconn 200000
    server worker02an x.x.x.x:31445 ssl ca-file /etc/haproxy/ca/ca.crt sni var(txn.fc_sni) maxconn 200000
    server worker03an x.x.x.x:31445 ssl ca-file /etc/haproxy/ca/ca.crt sni var(txn.fc_sni) maxconn 200000
    retries 0

frontend k8s-nfs-monitor
    bind *:8080
    mode http
    monitor-uri /health_nfs_cluster
    acl k8s_server_down nbsrv(k8s-server) le 2
    acl nfs_down nbsrv(nfs) lt 1
    monitor fail if nfs_down || k8s_server_down

backend nfs
    mode tcp
    default-server inter 5s downinter 2s rise 1 fall 2
    server nfs01 x.x.x.x:2049 check

frontend k8s-cluster-monitor
    bind *:8081
    mode http
    monitor-uri /health_cluster
    acl k8s_server_down nbsrv(k8s-server) le 2
    monitor fail if k8s_server_down

I recently posted about “Theoretical limits for a HAProxy instance”, where I used the “Small” server as an example of the limits we were observing. I am using the same metrics here.

"Small" server specs

Hosted on bare metal

CPU: AMD Ryzen 7 3700X 8-Core Processor (16 threads)
RAM: DDR4 64GB (2666 MT/s)
"Small" server Prometheus metrics

Queries graphed (screenshots omitted):
haproxy_process_current_connections
rate(haproxy_process_requests_total[2m])
haproxy_process_idle_time_percent
haproxy_process_current_ssl_rate
haproxy_process_current_connection_rate
node_load1
node_nf_conntrack_entries

Below is the same test on a bigger server with production traffic, but with maxsslrate and maxconnrate raised from 2500 to 5000.
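
To make the comparison explicit, the only intended difference in the global section between the two tests is the rate caps, roughly:

    # "Small" server test (previous values, per the description above)
    maxconnrate 2500
    maxsslrate  2500

    # "Big" server test (values shown in the global section above)
    maxconnrate 5000
    maxsslrate  5000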

"Big" server specs

Hosted on bare metal

CPU: AMD Ryzen 9 5950X 16-Core Processor (32 threads)
RAM: DDR4 128GB (2666 MT/s)
"Big" server Prometheus metrics

Queries graphed (screenshots omitted):
haproxy_process_current_connections
rate(haproxy_process_requests_total[2m])
haproxy_process_idle_time_percent
haproxy_process_current_ssl_rate
haproxy_process_current_connection_rate
node_load1
node_nf_conntrack_entries
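
The haproxy_* series come from HAProxy's built-in Prometheus exporter (exposed on :8404 in the config above), while node_load1 and node_nf_conntrack_entries come from node_exporter. Spot-checking the HAProxy side manually could look like this (the host placeholder is illustrative):

    # assumes the stats frontend on :8404 is reachable from the querying host
    curl -s http://<haproxy-host>:8404/metrics | grep -E 'haproxy_process_(current_connections|current_ssl_rate|idle_time_percent)'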

We are wondering:

  1. Are these results expected?
  2. Does anyone with a similar setup/config get different results?

I strongly suggest bringing both of your inquiries to the haproxy mailing list. You are much more likely to get expert answers on this there.

After downgrading to OpenSSL 1.1.1s (from 3.0.7), it seems our issues have been resolved, and HAProxy scales as expected.
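
For anyone hitting the same wall, the build and runtime OpenSSL versions can be confirmed from the same -vv output quoted above:

    haproxy -vv | grep -i 'openssl version'
    # expected after the downgrade (date strings will vary):
    # Built with OpenSSL version : OpenSSL 1.1.1s ...
    # Running on OpenSSL version : OpenSSL 1.1.1s ...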
