HAProxy stops serving frontend requests and stops closing backend connections at 100% CPU utilisation

Hello everyone

Lately we had a problem which caused both of our HAProxies to consume 100% CPU time and stop responding to new frontend connections. While this problem occurs, HAProxy seems to keep all existing backend connections open and does not close them over a longer period of time (this information is based on the Grafana graphs). HAProxy stops creating log entries, and there are no error or warning messages in other logs.

Reloading the haproxy service in this situation allows new frontend connections for a short period of time, and our site is served until the described situation occurs again (30-60 min). Restarting the haproxy service solves the problem.
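
For clarity, these are the two operations meant here (a sketch, assuming the stock systemd unit in master-worker mode, where a reload re-executes the master and spawns fresh workers while a restart tears the whole process tree down):

    # temporary relief: the master re-executes and starts new workers
    systemctl reload haproxy

    # full recovery until the next occurrence: everything is stopped and started
    systemctl restart haproxy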

Both of our HAProxies run the same setup:

  • CentOS 7 virtual machine on VMware ESX
  • 4 CPUs and 4GB RAM
  • Kernel 3.10.0-862.3.2.el7.x86_64 #1 SMP
  • HAProxy version 1.8.14-52e4d43 (the problem also occurred with HAProxy 1.8.9)

Both systems perform SSL offloading and load balance traffic to four backend HTTP servers which are on the same network.

Our normal load behaviour on a Saturday looks like this. The archived data is only available in 30-minute intervals, therefore the CPU utilisation of ~20% is not exact.

When the described problem occurs, our monitoring records the following data. Both HAProxies stop working at the exact same moment; therefore, I think that HAProxy itself is not the cause of the problem, but the effects lead to a problem on both systems. Also, after the peak around 20:30 (08:30 pm), our layer 4 load balancer records a decreasing number of connections as expected, but the HAProxies keep a high number of connections.


In total, this problem occurred three times in the last one and a half months, always on a Saturday during higher-load situations. Before the first occurrence, this setup had been running for about two and a half months without any problems.

Unfortunately I’m unable to reproduce this behaviour.

The haproxy.cfg has the following content; I removed names and ACLs.

global
	maxconn 40000
        nbproc 1
        nbthread 4
        cpu-map auto:1/1-4 0-3

	log 127.0.0.1 local0 notice
	log 127.0.0.1 local0 info #temp

	chroot /var/lib/haproxy
	stats timeout 30s
	user haproxy
	group haproxy
	daemon

	tune.ssl.default-dh-param 2048
	ssl-default-bind-options no-sslv3
        ssl-default-bind-ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK

defaults
	maxconn 40000
	log global
	mode http
	option httplog
	option dontlognull
	timeout http-request 10s
	timeout http-keep-alive 3s
	timeout connect 12s
	timeout queue 60s
	timeout client 60s
	timeout server 60s
	timeout	check 30s

userlist ...

frontend stats_frontend
	bind *:8080
	mode http
	option dontlog-normal
	default_backend stats_backend

frontend http_frontend
	bind *:80

        option http-buffer-request
        declare capture request len 20000
        http-request capture req.body id 0
	capture request header Host len 200
	log-format "%ci:%cp [%tr] %ft %b/%s %Th/%Ti/%TR/%Tq/%Tw/%Tc/%Tr/%Ta/%Tt %ST %B %CC %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r"

	acl is_crossdomain path -i /crossdomain.xml
	acl is_well_known path_beg -i /.well-known
	acl is_...

	# Redirect acls for old urls
        http-request redirect code 301 location #... (about 30 redirect rules)
	
        redirect scheme https if ...

	use_backend haproxy_backend if is_well_known
	use_backend http_server_backend if ...
	
	# Default backend
	default_backend http_server_backend

frontend https_frontend
	bind *:443 ssl alpn h2,http/1.1 crt ...

        option http-buffer-request
        declare capture request len 20000
        capture request header Host len 200
	log-format "%ci:%cp [%tr] %ft %b/%s %Th/%Ti/%TR/%Tq/%Tw/%Tc/%Tr/%Ta/%Tt %ST %B %CC %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r"

	acl is_crossdomain path -i /crossdomain.xml
	acl is_well_known path_beg -i /.well-known
	acl ...

	use_backend haproxy_backend if is_well_known
	use_backend http_server_backend if ...

	# Default backend
	default_backend http_server_backend

backend http_server_backend
	balance leastconn
	option http-keep-alive
	option forwardfor
	option httpchk HEAD / HTTP/1.1\r\nHost:\ ...

	acl is_crossdomain capture.req.uri -m str /crossdomain.xml
	acl ...

	http-request deny if ...

	# Request authorization for sites
	http-request auth realm ...

	http-request set-header ...

	# Rewrite request urls 
	reqirep ^([^\ :]*)\ ...

	# default-server changes the default settings for backend servers
        default-server inter 2s downinter 5s rise 3 fall 2
	server httpserver1 10.x.y.z1:30080 check
	server httpserver2 10.x.y.z2:30080 check
	server httpserver3 10.x.y.z3:30080 check
	server httpserver4 10.x.y.z4:30080 check

backend stats_backend
	mode http
	acl is_auth ...
	acl is_admin ...
	stats ...

Does anyone know of a similar situation, or have an idea what could cause or solve this problem?

Thanks for your help.


That’s a tough one to troubleshoot.

A few things:

  • can you provide the output of haproxy -vv?
  • can you confirm the CPU usage is spent in userspace and in haproxy (not in the kernel)?
  • install strace on both haproxy instances and, when in this situation (high CPU, no responses), trace the syscalls haproxy makes with strace -ttfp <haproxy-PID>. The output may contain confidential data like IP addresses or even parts of the HTTP transaction - but when haproxy is spinning in a busy loop, that likely isn’t even the case. In any event, do consider confidentiality when providing strace output; a possible invocation is sketched below.
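
For example, something like this (a sketch; writing to a file makes it easier to review and scrub the output before sharing):

    # find the worker PID (the child of the master when running with -Ws)
    ps -ef | grep '[h]aproxy'

    # -tt: microsecond timestamps, -f: follow all threads, -o: write to a file
    strace -tt -f -o /tmp/haproxy-strace.txt -p <haproxy-PID>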

Hi lukastribus,

thanks for your reply.

The output of haproxy -vv:

HA-Proxy version 1.8.14-52e4d43 2018/09/20
Copyright 2000-2018 Willy Tarreau <willy@haproxy.org>

Build options :
  TARGET  = linux2628
  CPU     = generic
  CC      = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv -fno-strict-overflow -Wno-unused-label
  OPTIONS = USE_ZLIB=1 USE_OPENSSL=1 USE_SYSTEMD=1 USE_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with OpenSSL version : OpenSSL 1.0.2k-fips  26 Jan 2017
Running on OpenSSL version : OpenSSL 1.0.2k-fips  26 Jan 2017
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : SSLv3 TLSv1.0 TLSv1.1 TLSv1.2
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Encrypted password support via crypt(3): yes
Built with multi-threading support.
Built with PCRE version : 8.32 2012-11-30
Running on PCRE version : 8.32 2012-11-30
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with zlib version : 1.2.7
Running on zlib version : 1.2.7
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with network namespace support.

Available polling systems :
      epoll : pref=300,  test result OK
       poll : pref=200,  test result OK
     select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available filters :
	[SPOE] spoe
	[COMP] compression
	[TRACE] trace

Regarding your second question, I can’t confirm at the moment that the CPU usage is spent in userspace and in haproxy. The processes were restarted by a colleague before I was able to log in to the system. I have to wait for the next occurrence of this problem.

I’ve installed strace on both HAProxy instances.
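
For the next occurrence I plan to check the user/kernel split with something like this (a sketch, assuming the sysstat package provides pidstat):

    # per-thread CPU breakdown for the worker; %usr vs %system shows
    # whether the time is spent in userspace or in the kernel
    pidstat -u -t -p <haproxy-PID> 1 5

    # alternative: top in thread view ('H'), comparing %us and %sy
    top -H -p <haproxy-PID>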

haproxy -vv output is fine.

Other than stracing the process as explained above, I figure that if you used nbproc instead of nbthread, the situation would probably be less critical (a process may spin at 100% but would not take down the entire instance). This would also make troubleshooting easier.

It sounds like a haproxy bug and it could be related to the nbthread feature itself, also.

So if you need an immediate workaround, disabling nbthread could help. Just know that nbproc has some disadvantages, especially regarding stats (which are per-process).
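
A rough sketch of what that could look like in the global section (an assumption based on your current four-thread mapping, not a tested configuration; note the per-process sockets, since stats and the CLI are per-process with nbproc):

    global
        nbproc 4                      # instead of nbthread 4
        cpu-map auto:1-4 0-3          # pin processes 1-4 to CPUs 0-3
        # one admin socket per process for inspection
        stats socket /var/run/haproxy-1.sock process 1 level admin
        stats socket /var/run/haproxy-2.sock process 2 level admin
        stats socket /var/run/haproxy-3.sock process 3 level admin
        stats socket /var/run/haproxy-4.sock process 4 level admin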

Hi lukastribus,

I checked our configuration history. The first time this problem occurred, HAProxy was running in single-process / single-thread mode. I will have a look at the multi-processing configuration and may configure it on one node.

Hi lukastribus,

this weekend the described problem occurred again (on native hardware), after we had higher load for our circumstances. The HAProxy process did not respond to any requests this morning, so I restarted the service after running strace and collecting some data. The remaining two HAProxies (VMs) were not affected. All three systems have the same configuration except for the CPU mapping, because ndmzlb5 has 8 cores instead of 4.

The load peak occurred on Friday at ~20:04. HAProxy kept a higher CPU utilization of around 19% and stopped handling requests on Saturday around 23:53. From Saturday 23:53 until this morning there are no log entries from HAProxy. The operating system itself and other processes were not affected.

CPU Load on affected HAProxy:

Load outgoing internet connection through HAProxy:

CPU Load on the remaining HAProxies:

Output of ps on ndmzlb5:

    Mo 11. Feb 07:58:02 CET 2019
 07:58:02 up 9 days, 21:56,  1 user,  load average: 8,01, 8,04, 8,05
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root         2  0.0  0.0      0     0 ?        S    Feb01   0:04 [kthreadd]
root         3  0.0  0.0      0     0 ?        S    Feb01   0:40  \_ [ksoftirqd/0]
root         5  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [kworker/0:0H]
root         7  0.0  0.0      0     0 ?        S    Feb01   0:08  \_ [migration/0]
root         8  0.0  0.0      0     0 ?        S    Feb01   0:00  \_ [rcu_bh]
root         9  0.4  0.0      0     0 ?        S    Feb01  70:15  \_ [rcu_sched]
root        10  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [lru-add-drain]
root        11  0.0  0.0      0     0 ?        S    Feb01   0:07  \_ [watchdog/0]
root        12  0.0  0.0      0     0 ?        S    Feb01   0:02  \_ [watchdog/1]
root        13  0.0  0.0      0     0 ?        S    Feb01   0:06  \_ [migration/1]
root        14  0.0  0.0      0     0 ?        S    Feb01   0:38  \_ [ksoftirqd/1]
root        16  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [kworker/1:0H]
root        17  0.0  0.0      0     0 ?        S    Feb01   0:02  \_ [watchdog/2]
root        18  0.0  0.0      0     0 ?        S    Feb01   0:06  \_ [migration/2]
root        19  0.0  0.0      0     0 ?        S    Feb01   0:44  \_ [ksoftirqd/2]
root        21  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [kworker/2:0H]
root        22  0.0  0.0      0     0 ?        S    Feb01   0:02  \_ [watchdog/3]
root        23  0.0  0.0      0     0 ?        S    Feb01   0:06  \_ [migration/3]
root        24  0.0  0.0      0     0 ?        S    Feb01   0:42  \_ [ksoftirqd/3]
root        26  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [kworker/3:0H]
root        27  0.0  0.0      0     0 ?        S    Feb01   0:02  \_ [watchdog/4]
root        28  0.0  0.0      0     0 ?        S    Feb01   0:08  \_ [migration/4]
root        29  0.0  0.0      0     0 ?        S    Feb01   0:17  \_ [ksoftirqd/4]
root        31  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [kworker/4:0H]
root        32  0.0  0.0      0     0 ?        S    Feb01   0:02  \_ [watchdog/5]
root        33  0.0  0.0      0     0 ?        S    Feb01   0:04  \_ [migration/5]
root        34  0.0  0.0      0     0 ?        S    Feb01   0:19  \_ [ksoftirqd/5]
root        36  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [kworker/5:0H]
root        37  0.0  0.0      0     0 ?        S    Feb01   0:02  \_ [watchdog/6]
root        38  0.0  0.0      0     0 ?        S    Feb01   0:06  \_ [migration/6]
root        39  0.0  0.0      0     0 ?        S    Feb01   0:20  \_ [ksoftirqd/6]
root        41  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [kworker/6:0H]
root        42  0.0  0.0      0     0 ?        S    Feb01   0:02  \_ [watchdog/7]
root        43  0.0  0.0      0     0 ?        S    Feb01   0:08  \_ [migration/7]
root        44  0.0  0.0      0     0 ?        S    Feb01   0:44  \_ [ksoftirqd/7]
root        46  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [kworker/7:0H]
root        48  0.0  0.0      0     0 ?        S    Feb01   0:00  \_ [kdevtmpfs]
root        49  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [netns]
root        50  0.0  0.0      0     0 ?        S    Feb01   0:00  \_ [khungtaskd]
root        51  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [writeback]
root        52  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [kintegrityd]
root        53  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [bioset]
root        54  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [bioset]
root        55  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [bioset]
root        56  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [kblockd]
root        57  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [md]
root        58  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [edac-poller]
root        59  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [watchdogd]
root        72  0.0  0.0      0     0 ?        S    Feb01   0:00  \_ [kswapd0]
root        73  0.0  0.0      0     0 ?        SN   Feb01   0:00  \_ [ksmd]
root        74  0.0  0.0      0     0 ?        SN   Feb01   0:04  \_ [khugepaged]
root        75  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [crypto]
root        83  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [kthrotld]
root        85  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [kmpath_rdacd]
root        86  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [kaluad]
root        87  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [kpsmoused]
root        89  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [ipv6_addrconf]
root       102  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [deferwq]
root       137  0.0  0.0      0     0 ?        S    Feb01   0:20  \_ [kauditd]
root       948  0.0  0.0      0     0 ?        S    Feb01   0:00  \_ [scsi_eh_0]
root       963  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [scsi_tmf_0]
root      1169  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [ata_sff]
root      1866  0.0  0.0      0     0 ?        S    Feb01   0:00  \_ [scsi_eh_1]
root      1892  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [scsi_tmf_1]
root      1914  0.0  0.0      0     0 ?        S    Feb01   0:00  \_ [scsi_eh_2]
root      1929  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [scsi_tmf_2]
root      3867  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [ttm_swap]
root      4288  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [kdmflush]
root      4291  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [bioset]
root      4300  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [kdmflush]
root      4303  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [bioset]
root      4324  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [bioset]
root      4327  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [xfsalloc]
root      4329  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [xfs_mru_cache]
root      4335  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [xfs-buf/dm-1]
root      4336  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [xfs-data/dm-1]
root      4337  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [xfs-conv/dm-1]
root      4338  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [xfs-cil/dm-1]
root      4339  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [xfs-reclaim/dm-]
root      4341  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [xfs-log/dm-1]
root      4342  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [xfs-eofblocks/d]
root      4347  0.0  0.0      0     0 ?        S    Feb01  12:12  \_ [xfsaild/dm-1]
root      4351  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [kworker/1:1H]
root      6280  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [kdmflush]
root      6281  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [kworker/6:1H]
root      6290  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [bioset]
root      6334  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [kvm-irqfd-clean]
root      6782  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [xfs-buf/sda1]
root      6792  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [xfs-data/sda1]
root      6805  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [xfs-conv/sda1]
root      6831  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [xfs-cil/sda1]
root      6832  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [xfs-reclaim/sda]
root      6833  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [xfs-log/sda1]
root      6854  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [xfs-eofblocks/s]
root      6869  0.0  0.0      0     0 ?        S    Feb01   0:00  \_ [xfsaild/sda1]
root      8447  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [xfs-buf/dm-2]
root      8448  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [xfs-data/dm-2]
root      8449  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [xfs-conv/dm-2]
root      8450  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [xfs-cil/dm-2]
root      8451  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [xfs-reclaim/dm-]
root      8452  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [xfs-log/dm-2]
root      8453  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [xfs-eofblocks/d]
root      8454  0.0  0.0      0     0 ?        S    Feb01   0:00  \_ [xfsaild/dm-2]
root      8458  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [kworker/5:1H]
root      8569  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [kworker/7:1H]
root      8652  0.0  0.0      0     0 ?        S<   Feb01   0:03  \_ [kworker/0:1H]
root      8851  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [kworker/2:1H]
root      9574  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [kworker/3:1H]
root     11155  0.0  0.0      0     0 ?        S<   Feb01   0:00  \_ [kworker/4:1H]
root     32140  0.0  0.0      0     0 ?        S    04:10   0:03  \_ [kworker/0:1]
root       340  0.0  0.0      0     0 ?        S    04:24   0:00  \_ [kworker/u64:2]
root      4521  0.0  0.0      0     0 ?        S    05:54   0:00  \_ [kworker/4:2]
root      7512  0.0  0.0      0     0 ?        S    06:58   0:00  \_ [kworker/2:0]
root      7777  0.0  0.0      0     0 ?        S    07:04   0:00  \_ [kworker/2:1]
root      8415  0.0  0.0      0     0 ?        S    07:18   0:00  \_ [kworker/5:1]
root      8535  0.0  0.0      0     0 ?        S    07:22   0:00  \_ [kworker/4:1]
root      8987  0.0  0.0      0     0 ?        S    07:30   0:00  \_ [kworker/3:0]
root      9007  0.0  0.0      0     0 ?        S    07:32   0:00  \_ [kworker/u64:1]
root      9405  0.0  0.0      0     0 ?        S    07:38   0:00  \_ [kworker/3:2]
root      9422  0.0  0.0      0     0 ?        S    07:39   0:00  \_ [kworker/1:0]
root      9423  0.0  0.0      0     0 ?        S    07:39   0:00  \_ [kworker/1:3]
root      9459  0.0  0.0      0     0 ?        S    07:40   0:00  \_ [kworker/6:0]
root      9687  0.0  0.0      0     0 ?        S    07:45   0:00  \_ [kworker/1:2]
root      9690  0.0  0.0      0     0 ?        S    07:45   0:00  \_ [kworker/6:2]
root      9709  0.0  0.0      0     0 ?        S    07:47   0:00  \_ [kworker/7:1]
root      9866  0.0  0.0      0     0 ?        S    07:48   0:00  \_ [kworker/5:2]
root      9919  0.0  0.0      0     0 ?        S    07:50   0:00  \_ [kworker/6:1]
root      9920  0.0  0.0      0     0 ?        S    07:50   0:00  \_ [kworker/0:2]
root     10061  0.0  0.0      0     0 ?        S    07:52   0:00  \_ [kworker/7:0]
root     10115  0.0  0.0      0     0 ?        S    07:54   0:00  \_ [kworker/2:2]
root     10148  0.0  0.0      0     0 ?        S    07:56   0:00  \_ [kworker/0:0]
root     10289  0.0  0.0      0     0 ?        S    07:57   0:00  \_ [kworker/1:1]
root         1  0.0  0.0 193908  6996 ?        Ss   Feb01   8:06 /usr/lib/systemd/systemd --switched-root --system --deserialize 22
root      4420  0.0  0.0 114604 69104 ?        Ss   Feb01   2:57 /usr/lib/systemd/systemd-journald
root      4447  0.0  0.0 201080  8272 ?        Ss   Feb01   0:00 /usr/sbin/lvmetad -f
root      4451  0.0  0.0  47944  5356 ?        Ss   Feb01   0:00 /usr/lib/systemd/systemd-udevd
root      8482  0.0  0.0  62044  1260 ?        S<sl Feb01   0:37 /sbin/auditd
dbus      8504  0.0  0.0  66524  2648 ?        Ssl  Feb01   4:04 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation
root      8506  0.0  0.0  21664  1320 ?        Ss   Feb01   1:53 /usr/sbin/irqbalance --foreground
root      8509  0.0  0.0  26376  1780 ?        Ss   Feb01   2:07 /usr/lib/systemd/systemd-logind
polkitd   8511  0.0  0.0 612972 13088 ?        Ssl  Feb01   1:01 /usr/lib/polkit-1/polkitd --no-debug
avahi     8512  0.0  0.0  60184  2204 ?        Ss   Feb01   0:13 avahi-daemon: running [ndmzlb5.local]
avahi     8514  0.0  0.0  60060   392 ?        S    Feb01   0:00  \_ avahi-daemon: chroot helper
root      8519  0.0  0.0 126276  1688 ?        Ss   Feb01   0:08 /usr/sbin/crond -n
root      8560  0.0  0.0 110084   856 tty1     Ss+  Feb01   0:00 /sbin/agetty --noclear tty1 linux
root      8568  0.0  0.0 358632 29420 ?        Ssl  Feb01   0:23 /usr/bin/python -Es /usr/sbin/firewalld --nofork --nopid
root      8571  0.0  0.0 563576  9524 ?        Ssl  Feb01   1:06 /usr/sbin/NetworkManager --no-daemon
root      9053  0.0  0.0 573812 19168 ?        Ssl  Feb01   1:06 /usr/bin/python2 -Es /usr/sbin/tuned -l -P
root      9054  0.0  0.0 112756  4356 ?        Ss   Feb01   0:17 /usr/sbin/sshd -D
root     10291  2.0  0.0 154564  5672 ?        Ss   07:57   0:00  \_ sshd: root@pts/0
root     10294  0.1  0.0 115432  2104 pts/0    Ss   07:57   0:00      \_ -bash
root     10310  0.0  0.0 113172  1392 pts/0    S+   07:58   0:00          \_ /bin/bash /usr/sbin/collectData
root     10313  0.0  0.0 155488  2012 pts/0    R+   07:58   0:00              \_ ps axuf
root      9058  0.4  0.0 434312 38068 ?        Ssl  Feb01  70:41 /usr/sbin/rsyslogd -n
www       9060  0.0  0.0  19856  1420 ?        Ss   Feb01   0:26 /usr/local/sbin/lighttpd -D -f /etc/lighttpd/lighttpd.conf
root      9512  0.0  0.0  89544  2168 ?        Ss   Feb01   0:14 /usr/libexec/postfix/master -w
postfix   9550  0.0  0.0  89716  4104 ?        S    Feb01   0:03  \_ qmgr -l -t unix -u
postfix   7073  0.0  0.0  89648  4080 ?        S    06:49   0:00  \_ pickup -l -t unix -u
root     23922  0.0  0.0  82560  9500 ?        Ss   Feb08   0:00 /usr/local/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
haproxy  23923  147  0.0 618196 67260 ?        Rsl  Feb08 5832:36  \_ /usr/local/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
23923
UID        PID  SPID  PPID  C    SZ   RSS PSR STIME TTY      STAT   TIME CMD
haproxy  23923 23923 23922 13 154549 67260  0 Feb08 ?        Rsl  522:25 /usr/local/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
haproxy  23923 23924 23922 14 154549 67260  1 Feb08 ?        Rsl  586:30 /usr/local/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
haproxy  23923 23925 23922 12 154549 67260  2 Feb08 ?        Rsl  501:42 /usr/local/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
haproxy  23923 23926 23922 12 154549 67260  3 Feb08 ?        Rsl  503:07 /usr/local/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
haproxy  23923 23927 23922 12 154549 67260  4 Feb08 ?        Rsl  505:13 /usr/local/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
haproxy  23923 23928 23922 54 154549 67260  5 Feb08 ?        Rsl  2163:31 /usr/local/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
haproxy  23923 23929 23922 13 154549 67260  6 Feb08 ?        Rsl  529:43 /usr/local/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
haproxy  23923 23930 23922 13 154549 67260  7 Feb08 ?        Rsl  520:21 /usr/local/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid

The connection table lists quite a lot of connections in state CLOSE-WAIT (I replaced IP addresses with names and removed lines, otherwise the post would have been too long; the table had about 350 connections in state CLOSE-WAIT):

State      Recv-Q Send-Q Local Address:Port               Peer Address:Port              
LISTEN     2      128          *:80                       *:*                   users:(("haproxy",pid=23923,fd=7))
	 skmem:(r0,rb87380,t0,tb16384,f0,w0,o0,bl0,d1)
LISTEN     129    128          *:8080                     *:*                   users:(("haproxy",pid=23923,fd=5))
	 skmem:(r0,rb87380,t0,tb16384,f0,w0,o0,bl0,d0)
LISTEN     0      128          *:22                       *:*                   users:(("sshd",pid=9054,fd=3))
	 skmem:(r0,rb87380,t0,tb16384,f0,w0,o0,bl0,d1)
LISTEN     0      128    127.0.0.1:3000                     *:*                   users:(("lighttpd",pid=9060,fd=3))
	 skmem:(r0,rb87380,t0,tb16384,f0,w0,o0,bl0,d0)
LISTEN     0      100    127.0.0.1:25                       *:*                   users:(("master",pid=9512,fd=13))
	 skmem:(r0,rb87380,t0,tb16384,f0,w0,o0,bl0,d0)
LISTEN     129    128          *:443                      *:*                   users:(("haproxy",pid=23923,fd=8)) timer:(keepalive,042ms,0)
	 skmem:(r0,rb87380,t0,tb16384,f0,w0,o0,bl0,d1591)
SYN-RECV   0      0      ndmzlb5:443                external-gateway1:47189               timer:(on,14sec,4)
	
SYN-RECV   0      0      ndmzlb5:443                external-gateway2:45381               timer:(on,14sec,4)
	
SYN-RECV   0      0      ndmzlb5:443                external-gateway2:45261               timer:(on,21sec,5)
	
SYN-RECV   0      0      ndmzlb5:443                external-gateway1:47074               timer:(on,21sec,5)
	
CLOSE-WAIT 122    0      ndmzlb5:443                <external-user-ip>:59554               users:(("haproxy",pid=23923,fd=1315))
	 skmem:(r2304,rb369280,t0,tb87040,f1792,w0,o0,bl0,d1332)
CLOSE-WAIT 1      0      ndmzlb5:58462              backend-server1:30080               users:(("haproxy",pid=23923,fd=1001))
	 skmem:(r768,rb367360,t0,tb87040,f3328,w0,o0,bl0,d0)
... [ REMOVED LINES DUE TO LENGTH ] ...
CLOSE-WAIT 180    0      ndmzlb5:443                <external-user-ip>:46412               users:(("haproxy",pid=23923,fd=145))
	 skmem:(r2304,rb369280,t0,tb92160,f1792,w0,o0,bl0,d821)
CLOSE-WAIT 46173  0      ndmzlb5:46730              backend-server2:30080               users:(("haproxy",pid=23923,fd=37))
	 skmem:(r78336,rb367360,t0,tb87040,f3584,w0,o0,bl0,d1)
CLOSE-WAIT 1      0      ndmzlb5:443                <external-user-ip>:56896               users:(("haproxy",pid=23923,fd=413))
	 skmem:(r768,rb369280,t0,tb46080,f3328,w0,o0,bl0,d824)
CLOSE-WAIT 135    0      ndmzlb5:8080               monitoring-system:49698              
	 skmem:(r2304,rb369280,t0,tb87040,f1792,w0,o0,bl0,d0)
CLOSE-WAIT 1859   0      ndmzlb5:443                <external-user-ip>:53581               users:(("haproxy",pid=23923,fd=896))
	 skmem:(r4608,rb369280,t0,tb165888,f3584,w0,o0,bl0,d821)
CLOSE-WAIT 518    0      ndmzlb5:443                external-gateway1:49977              
	 skmem:(r2304,rb369280,t0,tb87040,f1792,w0,o0,bl0,d1591)
CLOSE-WAIT 396    0      ndmzlb5:443                <external-user-ip>:33194               users:(("haproxy",pid=23923,fd=680))
	 skmem:(r2304,rb369280,t0,tb46080,f1792,w0,o0,bl0,d822)
CLOSE-WAIT 1      0      ndmzlb5:443                <external-user-ip>:37770               users:(("haproxy",pid=23923,fd=35))
	 skmem:(r768,rb369280,t0,tb46080,f3328,w0,o0,bl0,d1591)
CLOSE-WAIT 761    0      ndmzlb5:443                <external-user-ip>:52049               users:(("haproxy",pid=23923,fd=131))
	 skmem:(r4608,rb369280,t0,tb50688,f3584,w0,o0,bl0,d823)
CLOSE-WAIT 1      0      ndmzlb5:52984              backend-server3:30080               users:(("haproxy",pid=23923,fd=285))
	 skmem:(r768,rb367360,t0,tb87040,f3328,w0,o0,bl0,d0)
CLOSE-WAIT 518    0      ndmzlb5:443                external-gateway1:51424              
	 skmem:(r2304,rb369280,t0,tb87040,f1792,w0,o0,bl0,d1591)
CLOSE-WAIT 135    0      ndmzlb5:8080               monitoring-system:33088              
	 skmem:(r2304,rb369280,t0,tb87040,f1792,w0,o0,bl0,d0)
CLOSE-WAIT 1      0      ndmzlb5:37162              backend-server3:30080               users:(("haproxy",pid=23923,fd=60))
	 skmem:(r768,rb367360,t0,tb87040,f3328,w0,o0,bl0,d0)
CLOSE-WAIT 135    0      ndmzlb5:8080               monitoring-system:42794              
	 skmem:(r2304,rb369280,t0,tb87040,f1792,w0,o0,bl0,d0)
CLOSE-WAIT 413    0      ndmzlb5:443                <external-user-ip>:58701               users:(("haproxy",pid=23923,fd=1042))
	 skmem:(r2304,rb369280,t0,tb50688,f1792,w0,o0,bl0,d822)
CLOSE-WAIT 146    0      ndmzlb5:8080               monitoring-system:39492              
	 skmem:(r2304,rb369280,t0,tb87040,f1792,w0,o0,bl0,d0)
CLOSE-WAIT 518    0      ndmzlb5:443                external-gateway2:49168              
	 skmem:(r2304,rb369280,t0,tb87040,f1792,w0,o0,bl0,d1591)
CLOSE-WAIT 135    0      ndmzlb5:8080               monitoring-system:35044              
	 skmem:(r2304,rb369280,t0,tb87040,f1792,w0,o0,bl0,d0)
CLOSE-WAIT 518    0      ndmzlb5:443                external-gateway2:50850              
	 skmem:(r2304,rb369280,t0,tb87040,f1792,w0,o0,bl0,d1591)
CLOSE-WAIT 135    0      ndmzlb5:8080               monitoring-system:60970              
	 skmem:(r2304,rb369280,t0,tb87040,f1792,w0,o0,bl0,d0)
CLOSE-WAIT 1      0      ndmzlb5:37186              backend-server3:30080               users:(("haproxy",pid=23923,fd=43))
	 skmem:(r768,rb367360,t0,tb87040,f3328,w0,o0,bl0,d0)
CLOSE-WAIT 518    0      ndmzlb5:443                external-gateway1:51177              
	 skmem:(r2304,rb369280,t0,tb87040,f1792,w0,o0,bl0,d1591)
LISTEN     0      128         :::22                      :::*                   users:(("sshd",pid=9054,fd=4))
	 skmem:(r0,rb87380,t0,tb16384,f0,w0,o0,bl0,d0)
LISTEN     0      100        ::1:25                      :::*                   users:(("master",pid=9512,fd=14))
	 skmem:(r0,rb87380,t0,tb16384,f0,w0,o0,bl0,d0)
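
As a note for the next capture: ss can filter by TCP state directly, which would keep the output smaller (a sketch):

    # only CLOSE-WAIT sockets, with process, timer and socket-memory details
    ss -tnpoem state close-wait

    # quick count of sockets per state
    ss -tan | awk 'NR>1 {print $1}' | sort | uniq -c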

I attached strace to the haproxy process for about a minute, which produced a file of 171 MB in size, but it contains only repetitive entries like the following until the restart of the HAProxy process:

23929 07:59:03.354485 <... sched_yield resumed> ) = 0


23923 07:59:03.358101 sched_yield( <unfinished ...>
23930 07:59:03.358128 sched_yield( <unfinished ...>
23929 07:59:03.358144 <... sched_yield resumed> ) = 0
23925 07:59:03.358172 sched_yield( <unfinished ...>
23924 07:59:03.358194 <... sched_yield resumed> ) = 0
23923 07:59:03.358213 <... sched_yield resumed> ) = 0
23930 07:59:03.358250 <... sched_yield resumed> ) = 0
23929 07:59:03.358271 sched_yield( <unfinished ...>
23924 07:59:03.358300 sched_yield( <unfinished ...>
23923 07:59:03.358322 sched_yield( <unfinished ...>
23929 07:59:03.358362 <... sched_yield resumed> ) = 0
23927 07:59:03.358392 sched_yield( <unfinished ...>
23926 07:59:03.358415 sched_yield( <unfinished ...>
23924 07:59:03.358433 <... sched_yield resumed> ) = 0
23923 07:59:03.358453 <... sched_yield resumed> ) = 0
23930 07:59:03.358498 sched_yield( <unfinished ...>
23929 07:59:03.358518 sched_yield( <unfinished ...>
23927 07:59:03.358546 <... sched_yield resumed> ) = 0
23926 07:59:03.358570 <... sched_yield resumed> ) = 0
23924 07:59:03.358589 sched_yield( <unfinished ...>
23923 07:59:03.358608 --- SIGTERM {si_signo=SIGTERM, si_code=SI_USER, si_pid=23922, si_uid=0} ---
23930 07:59:03.364157 <... sched_yield resumed> ) = ? <unavailable>
23929 07:59:03.364227 +++ killed by SIGTERM +++
23930 07:59:03.364239 +++ killed by SIGTERM +++
23928 07:59:03.364247 +++ killed by SIGTERM +++
23927 07:59:03.364256 +++ killed by SIGTERM +++
23925 07:59:03.364264 +++ killed by SIGTERM +++
23924 07:59:03.364273 +++ killed by SIGTERM +++
23926 07:59:03.371111 +++ killed by SIGTERM +++
23923 07:59:03.371155 +++ killed by SIGTERM +++
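
As a side note for next time: a syscall summary would be easier to handle than a 171 MB raw trace (a sketch using strace’s counting mode):

    # count syscalls for ~30 seconds instead of logging each one; a busy
    # loop like the above shows up as millions of sched_yield calls
    timeout 30 strace -c -f -p <haproxy-PID>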

I also checked our backend systems and found that one system logged a reset of its network interface shortly (a couple of seconds) before HAProxy stopped responding:

Feb 10 23:53:02 backend-server3 kernel: e1000 0000:02:02.0 eno16780032: Reset adapter
Feb 10 23:53:02 backend-server3 kernel: e1000: eno16780032 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
Feb 10 23:53:02 backend-server3 NetworkManager[864]: <info>  [1549838222.3110] device (eno16780032): link connected

On Friday there was no reset of a network interface on any of the backend hosts or on the HAProxy, but the CPU utilization did not drop below ~19%. I think the reset of the interface on Saturday is linked to the crash of HAProxy on ndmzlb5; maybe the process gets stuck in a loop and can’t return.

Any idea what I can do next or what causes this behaviour?

Hello,

Well, a backend server that goes down is certainly an event that could trigger a bug in haproxy.

The strace is not very useful on its own, and I don’t think we have a good understanding of what happens here yet. Also, I don’t see a lot of changes between 1.8.14 and the current 1.8.18, but I’d suggest the following anyway:

  • upgrade to 1.8.18 (you do want to benefit from all those bugfixes anyway)
  • set haproxy up with a local admin socket, and when it spins again at 100% CPU load, provide the output of show sess from the admin socket - this will help understand those sessions in CLOSE-WAIT. That output is likely huge. A minimal sketch of the setup follows below.
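
Something like this in the configuration (assuming socat is available to query the socket):

    global
        # local admin socket for runtime inspection
        stats socket /var/run/haproxy-admin.sock mode 660 level admin

Then, while it spins:

    echo "show sess" | socat stdio /var/run/haproxy-admin.sock > /tmp/sess.txt
    echo "show info" | socat stdio /var/run/haproxy-admin.sock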

@willy any other ideas at this point?

This is interesting: sched_yield() is only called with threads at the synchronization point. There were actually two issues fixed in 1.8.15 affecting the sync point, one of which could cause exactly this depending on the compiler’s optimizations [ 4805c24 (“BUG/MEDIUM: threads: make sure threads_want_sync is marked volatile”) ]. And interestingly, I’m pretty sure I faced the same pattern when ending up on that bug.
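
To illustrate the class of bug (a simplified sketch, not the actual HAProxy code): without volatile, an optimizing compiler may hoist the load of the flag out of the loop, turning the wait into exactly the endless sched_yield() spin seen in the strace above:

    #include <sched.h>

    /* simplified illustration, NOT HAProxy source */
    static /* volatile */ unsigned long threads_want_sync;

    void wait_at_sync_point(void)
    {
        /* without 'volatile', the load may be cached in a register, so the
         * loop never observes the flag being cleared by another thread:
         * 100% CPU and no progress */
        while (threads_want_sync)
            sched_yield();
    }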

Let’s update to 1.8.19 and see if it happens again; I’d bet it will not.


Good morning,

thank you for your support, lukastribus and willy. I have updated our HAProxies to 1.8.19 this morning.
Unfortunately I’m not able to reproduce this behaviour myself, so I have to wait and see if it happens again.

Thank you again. For reference, the new haproxy -vv output:

HA-Proxy version 1.8.19 2019/02/11
Copyright 2000-2019 Willy Tarreau <willy@haproxy.org>

Build options :
  TARGET  = linux2628
  CPU     = generic
  CC      = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv -Wno-unused-label
  OPTIONS = USE_ZLIB=1 USE_THREAD=1 USE_OPENSSL=1 USE_SYSTEMD=1 USE_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with OpenSSL version : OpenSSL 1.0.2k-fips  26 Jan 2017
Running on OpenSSL version : OpenSSL 1.0.2k-fips  26 Jan 2017
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : SSLv3 TLSv1.0 TLSv1.1 TLSv1.2
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Encrypted password support via crypt(3): yes
Built with multi-threading support.
Built with PCRE version : 8.32 2012-11-30
Running on PCRE version : 8.32 2012-11-30
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with zlib version : 1.2.7
Running on zlib version : 1.2.7
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with network namespace support.

Available polling systems :
      epoll : pref=300,  test result OK
       poll : pref=200,  test result OK
     select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available filters :
	[SPOE] spoe
	[COMP] compression
	[TRACE] trace

Good morning

Our problem seems to be solved: since updating to 1.8.19, the described problem has not occurred again.
Thank you for your support!

Andreas


Great, thanks for your feedback!