Hello,
Let me start by saying thank you to Willy and all the contributors. HAProxy has been a kick-a$$ solution for all our clients over the years. However, we've recently engaged a new client whose PPS/QPS levels are beyond anything we've encountered before, and we want to be certain the build and configuration are in the sweet spot.
We've deployed v1.7.9, compiled locally with Lua support. We're observing soft IRQs at ~8% while the system is under roughly 30% load overall. That strikes us as fairly high, and we're a bit concerned because the system is currently seeing only about 50% of the traffic we expect in the coming months, when my client hits their high season (which happens to be Q4 every year). So we're a little pressed for time in making sure the setup is right for their situation, as they've opted for HAProxy over hardware-based load balancers (NLBs) in their new datacenter deployment.
Any insights or suggestions on how we might improve this are greatly appreciated.
I’ve provided all the build and configuration information below.
----total-cpu-usage---- -dsk/total- -net/total- ---load-avg--- ---system-- ------memory-usage----- ----system----
usr sys idl wai hiq siq| read writ| recv send| 1m 5m 15m | int csw | used buff cach free| time
1 2 96 0 0 1|2938B 14k| 0 0 |8.80 8.99 7.68| 29k 54k|2640M 90.1M 414M 28.3G|22-09 13:43:50
9 13 70 0 0 8| 0 0 | 71M 74M|8.80 8.99 7.68| 184k 289k|2641M 90.1M 414M 28.3G|22-09 13:43:51
8 14 69 0 0 9| 0 0 | 72M 75M|8.80 8.99 7.68| 185k 295k|2641M 90.1M 414M 28.3G|22-09 13:43:52
9 13 71 0 0 8| 0 0 | 72M 74M|8.80 8.99 7.68| 185k 289k|2639M 90.1M 414M 28.3G|22-09 13:43:53
9 12 71 0 0 8| 0 88k| 71M 74M|8.80 8.99 7.68| 184k 298k|2638M 90.1M 414M 28.3G|22-09 13:43:54
9 13 70 0 0 8| 0 0 | 71M 73M|8.65 8.96 7.67| 183k 286k|2639M 90.1M 414M 28.3G|22-09 13:43:55
9 13 70 0 0 8| 0 0 | 70M 73M|8.65 8.96 7.67| 183k 295k|2639M 90.1M 414M 28.3G|22-09 13:43:56
9 13 71 0 0 8| 0 0 | 70M 74M|8.65 8.96 7.67| 185k 295k|2637M 90.1M 414M 28.3G|22-09 13:43:57
8 13 71 0 0 8| 0 4096B| 71M 74M|8.65 8.96 7.67| 182k 286k|2639M 90.1M 414M 28.3G|22-09 13:43:58
9 13 70 0 0 8| 0 0 | 71M 74M|8.65 8.96 7.67| 187k 298k|2639M 90.1M 414M 28.3G|22-09 13:43:59
9 15 68 0 0 8| 0 28k| 73M 76M|8.52 8.93 7.67| 189k 310k|2637M 90.1M 414M 28.3G|22-09 13:44:00^C
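For reference, the capture above is one-second dstat samples. If a per-CPU breakdown would help, we can pull that too; something along these lines (mpstat is from the sysstat package) would show whether the softirq load is concentrated on a few cores, and which softirq types (presumably NET_RX/NET_TX) are responsible:

    # per-CPU utilization including %soft, five one-second samples
    mpstat -P ALL 1 5
    # raw softirq counters by type and CPU
    watch -n1 'cat /proc/softirqs'

Happy to post that output as well if it's useful.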
Platform:
OS/Kernel: Debian GNU/Linux 8 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u3 (2017-08-15)
CPU:
model name : Intel(R) Xeon(R) CPU E5-2670 v3 @ 2.30GHz
Architecture: x86_64
CPU(s): 24
On-line CPU(s) list: 0-23
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
BogoMIPS: 4594.12
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 30720K
MEMORY (MB):
total used free shared buffers cached
Mem: 32132 2046 30085 28 86 377
Swap: 11583 0 11583
Build Parameters:
HA-Proxy version 1.7.9 2017/08/18
Copyright 2000-2017 Willy Tarreau <willy@haproxy.org>
Build options :
TARGET = linux2628
CPU = generic
CC = gcc
CFLAGS = -O2 -g -fno-strict-aliasing -DTCP_USER_TIMEOUT=18
OPTIONS = USE_TPROXY=1 USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1 USE_LUA=1 USE_STATIC_PCRE=1 USE_PCRE_JIT=1 USE_TFO=1
Default settings :
maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200
Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.11
Running on zlib version : 1.2.8
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with OpenSSL version : OpenSSL 1.0.2l 25 May 2017
Running on OpenSSL version : OpenSSL 1.0.2l 25 May 2017
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.35 2014-04-04
Running on PCRE version : 8.35 2014-04-04
PCRE library supports JIT : yes
Built with Lua version : Lua 5.3.4
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Available polling systems :
epoll : pref=300, test result OK
poll : pref=200, test result OK
select : pref=150, test result OK
Total: 3 (3 usable), will use epoll.
Available filters :
[COMP] compression
[TRACE] trace
[SPOE] spoe
TCP Stack tweaks:
net.core.rmem_max = 8738000
net.core.somaxconn = 4000
net.ipv4.tcp_max_syn_backlog = 10050
vm.min_free_kbytes = 131072
net.ipv4.conf.eth0.arp_announce = 2
net.ipv4.tcp_rmem = 8192 873800 8738000
net.ipv4.tcp_max_orphans = 1048576
net.ipv4.tcp_wmem = 4096 655360 6553600
net.core.wmem_max = 6553600
net.core.netdev_max_backlog = 4050
fs.nr_open = 10000000
net.ipv4.tcp_mem = 3093984 4125312 6187968
net.ipv4.netfilter.ip_conntrack_max = 20000000
net.ipv4.ip_local_port_range = 1024 65535
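(These are persisted at boot, via the usual /etc/sysctl.d/ mechanism in our case; the values above are the live ones, which can be spot-checked like so:)

    # spot-check a few of the live values (keys as listed above)
    sysctl net.core.somaxconn net.ipv4.tcp_rmem net.ipv4.netfilter.ip_conntrack_max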
NIC Setup:
auto bond0
iface bond0 inet static
address xx.xxx.xx.81
netmask 255.255.255.0
gateway xx.xxx.xx.1
slaves eth0 eth1
bond_mode 802.3ad
bond_miimon 100
bond_downdelay 200
bond_updelay 200

auto bond1
iface bond1 inet static
address xxx.xx.x.81
netmask 255.255.248.0
slaves eth2 eth3
bond_mode 802.3ad
bond_miimon 100
bond_downdelay 200
bond_updelay 200
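Since softirq load on a box like this usually comes down to how the NIC queue interrupts are spread across cores, here is how we can check the distribution on the bonded interfaces (the ethtool query assumes the drivers expose multiqueue channels; substitute a real IRQ number in the last line):

    # RX/TX queue (channel) counts per slave
    ethtool -l eth0
    # which CPUs are actually servicing the NIC interrupts
    grep -E 'eth[0-3]' /proc/interrupts
    # affinity mask for a given NIC IRQ
    cat /proc/irq/<IRQ_NUMBER>/smp_affinity

We can post this output too if it helps narrow things down.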
Configuration:
https://pastebin.ca/3876372