Layer4 health check failures while using option httpchk

Hi there,

I have 3 backend galera servers configured.

global
  log /dev/log local0
  log /dev/log local1 notice
  user root
  group root
  daemon
  ca-base /etc/ssl/certs
  crt-base /etc/ssl/private
  ssl-default-bind-ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA
  ssl-default-bind-options no-sslv3
  ssl-default-server-ciphers kEECDH+aRSA+AES:kRSA+AES:+AES256:RC4-SHA:!kEDH:!LOW:!EXP:!MD5:!aNULL:!eNULL
  ssl-default-server-options no-sslv3
  stats socket /run/haproxy.sock mode 660 level admin
defaults
  log global
  option dontlognull
  option redispatch
  option tcp-smart-accept
  option tcp-smart-connect
  timeout connect 5s
  timeout client 480m
  timeout server 480m
  timeout http-keep-alive 1s
  timeout http-request 15s
  timeout queue 30s
  timeout tarpit 1m

frontend mysql
  bind <IP>:3306
  mode tcp
  option tcplog
  default_backend mysql_nodes

backend mysql_nodes
  mode tcp
  balance leastconn
  option tcp-check
  option httpchk
  server mysql-1 <IP1>:3306 backup check port 9200  maxconn 1500 inter 1s fall 5 rise 2
  server mysql-2 <IP2>:3306 check port 9200  maxconn 1500 inter 1s fall 5 rise 2
  server mysql-3 <IP3>:3306 check port 9200  maxconn 1500 inter 1s fall 5 rise 2

I have set up a health check on port 9200 with xinetd and scripts.
I can see in the log that the layer4 checks are failing while the layer7 checks are passing fine.
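For reference, an xinetd-launched check script talks to the client over stdin/stdout, so all it has to do is write one complete HTTP response. A minimal sketch in Python (the actual script used here is not shown in the thread; the status line and check logic are placeholders):

```python
#!/usr/bin/env python3
"""Minimal xinetd-style health check responder (sketch).

xinetd connects the accepted TCP socket to stdin/stdout, so writing a
full HTTP response to stdout is all that is needed. The body text and
the decision logic are placeholders, not the script from the thread.
"""
import sys


def build_response(body: bytes, ok: bool = True) -> bytes:
    """Build a complete HTTP response whose Content-Length always
    matches the body, including any trailing CRLF."""
    status = b"200 OK" if ok else b"503 Service Unavailable"
    return (
        b"HTTP/1.1 " + status + b"\r\n"
        b"Content-Type: text/plain\r\n"
        b"Content-Length: " + str(len(body)).encode() + b"\r\n"
        b"Connection: close\r\n"
        b"\r\n" + body
    )


if __name__ == "__main__":
    # A real script would query the Galera node state (e.g.
    # wsrep_local_state) here instead of always answering OK.
    sys.stdout.buffer.write(build_response(b"node is synced\r\n"))
```

Computing Content-Length from `len(body)` avoids the mismatches discussed later in this thread.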

Server mysql_nodes/mysql-2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 133ms. 1 active and 1 backup servers left. 2 sessions active, 0 requeued, 0 remaining in queue.
Server mysql_nodes/mysql-2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 133ms. 1 active and 1 backup servers left. 2 sessions active, 0 requeued, 0 remaining in queue.
Server mysql_nodes/mysql-2 is UP, reason: Layer7 check passed, code: 200, check duration: 457ms. 2 active and 1 backup servers online. 0 sessions requeued, 0 total in queue.
Server mysql_nodes/mysql-2 is UP, reason: Layer7 check passed, code: 200, check duration: 457ms. 2 active and 1 backup servers online. 0 sessions requeued, 0 total in queue.

This configures only layer7 checks, right? Is there a way in the configuration file to disable the layer4 checks?
I tried with the tcp-check and mysql-check options, but I am getting the same results.
Thanks for any help!!

Sounds like you may have another haproxy instance running in the background with an old configuration.

Can you check for and kill any old processes? Can you check the haproxy PIDs in the log?

option tcp-check enables layer4 checks; it needs to be removed.
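If the intent is to keep only the HTTP check on port 9200, the backend from the first post would look like this with the tcp-check line dropped (same placeholders as the original; this is a sketch of the suggested change, not a tested config):

```
backend mysql_nodes
  mode tcp
  balance leastconn
  option httpchk
  server mysql-1 <IP1>:3306 backup check port 9200 maxconn 1500 inter 1s fall 5 rise 2
  server mysql-2 <IP2>:3306 check port 9200 maxconn 1500 inter 1s fall 5 rise 2
  server mysql-3 <IP3>:3306 check port 9200 maxconn 1500 inter 1s fall 5 rise 2
```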

Thanks for the reply!!

There is no old haproxy process running in the background. There are 2 PIDs created by the haproxy service.

I am not using option tcp-check in the configuration.

Ok, can you provide a tcpdump (tcpdump -i ethX -pns0 -w health-check-traffic.cap host <IP1> and port 9200) of the entire health check traffic, as well as the output of haproxy -vv?

I am out for this week; I will provide it first thing on Monday.
Thanks for the help!!

Hi,

I checked the tcpdump; although the script is sending "200 OK", there were a lot of TCP retransmissions and resets in the communication. I am trying to replace the bash script with a Python script. I will update after making the changes.

Hi @lukastribus,
Sorry for the late reply. I tried different health check scripts and am still getting the health check failures.
Please find the capture.

The HTTP traffic looks fine, but there are a lot of TCP RSTs.

The response is 30 bytes long, but the Content-Length specifies 40 bytes. That is therefore certainly an invalid response.

Thank you for the response. I don’t know what is causing this. Let me check and get back.

Hi @lukastribus,
The content length is set to 30 now. There are still layer4 failures and TCP RST packets. Could you please take a look?

Right, but the content length is actually 32 bytes, because of the trailing \r\n.
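To illustrate the byte counting (the real response body is not shown in the thread, so the status line below is made up): a 30-byte message followed by a trailing \r\n is a 32-byte body, and Content-Length must cover all of it:

```python
# Hypothetical 30-byte status line standing in for the real script's output.
message = b"MySQL Galera node is synced :)"
assert len(message) == 30

body = message + b"\r\n"     # the trailing CRLF is part of the body
content_length = len(body)   # 32, not 30

header = f"Content-Length: {content_length}\r\n".encode()
```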

Hi @lukastribus,
It's the same with a content length of 32 bytes as well.

Not sure what happens here. Can you provide the output of haproxy -vv, and also run haproxy in debug mode manually (haproxy -f /path/to/config -d)?

Also can you confirm the configuration in the first post is still what you are running?

Thanks

Here are the details

$ haproxy -vv
HAProxy version 2.8.0-fdd8154 2023/05/31 - https://haproxy.org/
Status: long-term supported branch - will stop receiving fixes around Q2 2028.
Known bugs: http://www.haproxy.org/bugs/bugs-2.8.0.html
Running on: Linux 4.18.0-240.el8.x86_64 #1 SMP Fri Sep 25 19:48:47 UTC 2020 x86_64
Build options :
  TARGET  = linux-glibc
  CPU     = generic
  CC      = cc
  CFLAGS  = -O2 -g -Wall -Wextra -Wundef -Wdeclaration-after-statement -Wfatal-errors -Wtype-limits -Wshift-negative-value -Wshift-overflow=2 -Wduplicated-cond -Wnull-dereference -fwrapv -Wno-address-of-packed-member -Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-clobbered -Wno-missing-field-initializers -Wno-cast-function-type -Wno-string-plus-int -Wno-atomic-alignment
  OPTIONS = USE_GETADDRINFO=1 USE_OPENSSL=yes USE_LUA=yes USE_ZLIB=yes USE_SYSTEMD=yes USE_PROMEX=yes USE_PCRE=yes USE_PCRE_JIT=yes
  DEBUG   = -DDEBUG_STRICT -DDEBUG_MEMORY_POOLS

Feature list : -51DEGREES +ACCEPT4 +BACKTRACE -CLOSEFROM +CPU_AFFINITY +CRYPT_H -DEVICEATLAS +DL -ENGINE +EPOLL -EVPORTS +GETADDRINFO -KQUEUE -LIBATOMIC +LIBCRYPT +LINUX_SPLICE +LINUX_TPROXY +LUA +MATH -MEMORY_PROFILING +NETFILTER +NS -OBSOLETE_LINKER +OPENSSL -OPENSSL_WOLFSSL -OT +PCRE -PCRE2 -PCRE2_JIT +PCRE_JIT +POLL +PRCTL -PROCCTL +PROMEX -PTHREAD_EMULATION -QUIC +RT +SHM_OPEN -SLZ +SSL -STATIC_PCRE -STATIC_PCRE2 +SYSTEMD +TFO +THREAD +THREAD_DUMP +TPROXY -WURFL +ZLIB

Default settings :
  bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_TGROUPS=16, MAX_THREADS=256, default=8).
Built with OpenSSL version : OpenSSL 3.0.9 30 May 2023
Running on OpenSSL version : OpenSSL 3.0.9 30 May 2023
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
OpenSSL providers loaded : default
Built with Lua version : Lua 5.3.6
Built with the Prometheus exporter as a service
Built with network namespace support.
Built with zlib version : 1.2.13
Running on zlib version : 1.2.13
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Built with PCRE version : 8.45 2021-06-15
Running on PCRE version : 8.45 2021-06-15
PCRE library supports JIT : yes
Encrypted password support via crypt(3): yes
Built with gcc compiler version 12.3.0

Available polling systems :
      epoll : pref=300,  test result OK
       poll : pref=200,  test result OK
     select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
         h2 : mode=HTTP  side=FE|BE  mux=H2    flags=HTX|HOL_RISK|NO_UPG
       fcgi : mode=HTTP  side=BE     mux=FCGI  flags=HTX|HOL_RISK|NO_UPG
  <default> : mode=HTTP  side=FE|BE  mux=H1    flags=HTX
         h1 : mode=HTTP  side=FE|BE  mux=H1    flags=HTX|NO_UPG
  <default> : mode=TCP   side=FE|BE  mux=PASS  flags=
       none : mode=TCP   side=FE|BE  mux=PASS  flags=NO_UPG

Available services : prometheus-exporter
Available filters :
	[BWLIM] bwlim-in
	[BWLIM] bwlim-out
	[CACHE] cache
	[COMP] compression
	[FCGI] fcgi-app
	[SPOE] spoe
	[TRACE] trace
$ sudo haproxy -f /etc/haproxy/haproxy.conf -d
Note: setting global.maxconn to 131046.
Available polling systems :
      epoll : pref=300,  test result OK
       poll : pref=200,  test result OK
     select : pref=150,  test result FAILED
Total: 3 (2 usable), will use epoll.

Available filters :
	[BWLIM] bwlim-in
	[BWLIM] bwlim-out
	[CACHE] cache
	[COMP] compression
	[FCGI] fcgi-app
	[SPOE] spoe
	[TRACE] trace
Using epoll() as the polling mechanism.
[WARNING]  (12136) : Server mysql_nodes/mysql-2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 183ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[WARNING]  (12136) : Server mysql_nodes/mysql-3 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 183ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[WARNING]  (12136) : Server mysql_nodes/mysql-2 is UP, reason: Layer7 check passed, code: 200, check duration: 610ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
[WARNING]  (12136) : Server mysql_nodes/mysql-1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 159ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[WARNING]  (12136) : Server mysql_nodes/mysql-3 is UP, reason: Layer7 check passed, code: 200, check duration: 610ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.

Also can you confirm the configuration in the first post is still what you are running?
Yes, I am running the same configuration.

I have recreated the situation with your exact release and configuration and I don’t see any issues.

However, looking at your traces again, it becomes clear that I was too focused on the HTTP transaction. The haproxy log was correct all along, and the traces confirm it:

Sometimes port 9200 responds (and then the health check succeeds), and sometimes the TCP handshake to port 9200 is just flat out rejected (SYN → RST, ACK).

It looks like your health check script is unable to respond continuously.

Thanks for the reply. I will replace the script and check.

Hi,
The issue is fixed with the new Python script. Thanks for the help!!
