Can't connect to backend after HAProxy reload because of long-living TCP connections on old worker

Hello,

I have long-living TCP connections via HTTPS in my environment.
When I reload HAProxy, a new worker is created, but the old long-living TCP connections on the old worker prevent new connections to the same backend from being established on the new worker. Is there a solution to this without forcing the long-living TCP connections to close?

I use HAProxy 3.0 LTS.

I know that I can do something with two nodes in active-passive mode in combination with Keepalived. But our setup is in Azure, so that is a bit more complex, and I hope there is a solution within HAProxy itself.

Why is that the case?

You are asking how to solve a problem without explaining the most fundamental part of it.

Hi lukastribus,

thanks for your reply.

We use legacy software that performs imports and exports in some places, which take 5-10 minutes. We do not want to interrupt these imports and exports during reloads. Basically, everything communicates via HTTPS.
HTTPS requests are assigned to the various backends in HAProxy using ACLs.

I performed the following test:

  • Call to service A with a runtime of 60 seconds.
  • HAProxy reload
  • Connection to Service A remains intact
  • New connection to Service A not possible, but Service B (another service accessible via the same HAProxy) is possible
  • New connection to Service A is only possible once the old connection has been terminated.

So in other words, service A can only handle a single HTTP transaction at a time.

I’m afraid there is no magic bullet here. HAProxy cannot make service A handle multiple transactions simultaneously.

I don’t know how you conclude that, but I can assure you service A can handle more than one transaction :slight_smile:

The same issue would appear with service B or C when there is a long-living connection.

I’m concluding that because the new haproxy instance doesn’t know that the old haproxy instance still has an in-flight transaction.

Do you have logs of the new haproxy instance showing the delay?

Can you access service A directly while the new haproxy instance is running?

And finally can you share the configuration you are using?

Do you have logs of the new haproxy instance showing the delay?

Unfortunately not, as I don’t know where to look for it. I’m still fairly new to HAProxy.

Can you access service A directly while the new haproxy instance is running?

Yes, when I access the service with its hostname, it is reachable while HAProxy is reloading and both instances, old and new, are running.

And finally can you share the configuration you are using?

Sure.

I added expose-fd listeners to the stats socket during my troubleshooting, because I found an HAProxy article that says this is needed for a seamless reload.

timeout http-keep-alive 2s was a troubleshooting test.

The # Set Headers ACLs I use because HAProxy sits behind an Azure Application Gateway, which messes around with the HTTP headers, and we want the basic reverse proxy behaviour back.

global
        log /dev/log    local0
        log /dev/log    local1 debug

        # Increases Log Level Details
        log-send-hostname

        chroot /var/lib/haproxy
        stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
        stats timeout 30s
        user haproxy
        group haproxy
        daemon

        # Default SSL material locations
        ca-base         /etc/ssl/certs
        crt-base        /etc/ssl/private

        # See: https://ssl-config.mozilla.org/#server=haproxy&server-version=2.0.3&config=intermediate
        # TLS config up to TLS1.2
        ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384

        # TLS config for TLS1.3
        ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256

        # Set TLS min Version
        ssl-default-bind-options ssl-min-ver TLSv1.2

defaults
        log             global
        mode            http
        option          httpslog
        option          log-health-checks
        option          forwardfor
        timeout         connect 10s
        timeout         client  60s
        timeout         server  60s

        timeout         http-keep-alive 2s

        errorfile 400 /etc/haproxy/errors/400.http
        errorfile 403 /etc/haproxy/errors/403.http
        errorfile 408 /etc/haproxy/errors/408.http
        errorfile 500 /etc/haproxy/errors/500.http
        errorfile 502 /etc/haproxy/errors/502.http
        errorfile 503 /etc/haproxy/errors/503.http
        errorfile 504 /etc/haproxy/errors/504.http

#------------------#
# HAProxy Backends #
#------------------#
# Own Healthcheck
listen Page_healthcheck
        bind *:8101 ssl crt /etc/haproxy/ssl/
        monitor-uri     /healthcheck

# HealthCheck WebSite
listen Page_Server_Status
        bind *:8404 ssl crt /etc/haproxy/ssl/
        stats enable
        stats uri /server-status
        stats refresh 10s

#-----------#
# Frontends #
#-----------#
frontend https
        bind *:443 ssl crt /etc/haproxy/ssl/

        # Set Headers
        acl             h_xoh_exists req.hdr(X-Original-Host) -m found
        http-request    set-header X-Original-Host %[req.hdr(Host)] unless h_xoh_exists

        acl             h_xfh_exists req.hdr(X-Forwarded-Host) -m found
        http-request    set-header X-Forwarded-Host %[req.hdr(Host)] unless h_xfh_exists

        acl             h_xfport_exists req.hdr(X-Forwarded-Proto) -m found
        http-request    set-header X-Forwarded-Proto https unless h_xfport_exists

        # ACL for the hostname
        acl             is_service_a hdr(host) -i host.sub.domain.tld
        use_backend     service_a if is_service_a

      
#----------#
# Backends #
#----------#
backend service_a
        mode            http
        option          httpchk
        http-check      connect ssl port 443 sni service_a.sub.domain.tld
        http-check      send meth HEAD uri /healthcheck ver HTTP/2 hdr host azure_cae.sub.domain.tld
        http-request    set-header host azure_cae.sub.domain.tld
        server          service_a azure_cae.sub.domain.tld:443 check inter 60s ssl verify required ca-file /etc/ssl/certs


This sounds like a misconfigured process manager more than anything else.

Can you confirm you are reloading haproxy via systemd, by issuing systemctl reload haproxy?

Please provide:

  • the output of haproxy -vv
  • the output of systemctl status haproxy
  • a full cat of the systemd unit file as shown in the previous commands (or all files if multiple are referred to)
  • information about how haproxy was installed and upgraded on this machine (whether it was installed from source or with a package manager)

Hi Lukas,

Can you confirm you are reloading haproxy via systemd, by issuing systemctl reload haproxy?

yes, this is correct.

the output of haproxy -vv

This is the current installation I wanted to use (3.0 LTS):

root@haproxy-02:/home/localadm# haproxy -vv
HAProxy version 3.0.11-1~bpo12+1 2025/06/03 - https://haproxy.org/
Status: long-term supported branch - will stop receiving fixes around Q2 2029.
Known bugs: http://www.haproxy.org/bugs/bugs-3.0.11.html
Running on: Linux 6.1.0-38-cloud-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.147-1 (2025-08-02) x86_64
Build options :
  TARGET  = linux-glibc
  CC      = x86_64-linux-gnu-gcc
  CFLAGS  = -O2 -g -fwrapv -g -O2 -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2
  OPTIONS = USE_OPENSSL=1 USE_LUA=1 USE_SLZ=1 USE_SYSTEMD=1 USE_OT=1 USE_QUIC=1 USE_PROMEX=1 USE_PCRE2=1 USE_PCRE2_JIT=1 USE_QUIC_OPENSSL_COMPAT=1
  DEBUG   =

Feature list : -51DEGREES +ACCEPT4 +BACKTRACE -CLOSEFROM +CPU_AFFINITY +CRYPT_H -DEVICEATLAS +DL -ENGINE +EPOLL -EVPORTS +GETADDRINFO -KQUEUE -LIBATOMIC +LIBCRYPT +LINUX_CAP +LINUX_SPLICE +LINUX_TPROXY +LUA +MATH -MEMORY_PROFILING +NETFILTER +NS -OBSOLETE_LINKER +OPENSSL -OPENSSL_AWSLC -OPENSSL_WOLFSSL +OT -PCRE +PCRE2 +PCRE2_JIT -PCRE_JIT +POLL +PRCTL -PROCCTL +PROMEX -PTHREAD_EMULATION +QUIC +QUIC_OPENSSL_COMPAT +RT +SHM_OPEN +SLZ +SSL -STATIC_PCRE -STATIC_PCRE2 +SYSTEMD +TFO +THREAD +THREAD_DUMP +TPROXY -WURFL -ZLIB

Default settings :
  bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_TGROUPS=16, MAX_THREADS=256, default=2).
Built with OpenSSL version : OpenSSL 3.0.15 3 Sep 2024
Running on OpenSSL version : OpenSSL 3.0.17 1 Jul 2025 (VERSIONS DIFFER!)
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
OpenSSL providers loaded : default
Built with Lua version : Lua 5.4.4
Built with the Prometheus exporter as a service
Built with network namespace support.
Built with OpenTracing support.
Built with libslz for stateless compression.
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Built with PCRE2 version : 10.42 2022-12-11
PCRE2 library supports JIT : yes
Encrypted password support via crypt(3): yes
Built with gcc compiler version 12.2.0

Available polling systems :
      epoll : pref=300,  test result OK
       poll : pref=200,  test result OK
     select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
       quic : mode=HTTP  side=FE     mux=QUIC  flags=HTX|NO_UPG|FRAMED
         h2 : mode=HTTP  side=FE|BE  mux=H2    flags=HTX|HOL_RISK|NO_UPG
  <default> : mode=HTTP  side=FE|BE  mux=H1    flags=HTX
         h1 : mode=HTTP  side=FE|BE  mux=H1    flags=HTX|NO_UPG
       fcgi : mode=HTTP  side=BE     mux=FCGI  flags=HTX|HOL_RISK|NO_UPG
  <default> : mode=TCP   side=FE|BE  mux=PASS  flags=
       none : mode=TCP   side=FE|BE  mux=PASS  flags=NO_UPG

Available services : prometheus-exporter
Available filters :
        [BWLIM] bwlim-in
        [BWLIM] bwlim-out
        [CACHE] cache
        [COMP] compression
        [FCGI] fcgi-app
        [  OT] opentracing
        [SPOE] spoe
        [TRACE] trace

But I also have one node with 3.2 LTS to check whether the issue is gone with the latest version. I saw that 3.2 is the latest LTS release:

root@haproxy-01:/home/localadm# haproxy -vv
HAProxy version 3.2.5-1~bpo12+1 2025/09/26 - https://haproxy.org/
Status: long-term supported branch - will stop receiving fixes around Q2 2030.
Known bugs: http://www.haproxy.org/bugs/bugs-3.2.5.html
Running on: Linux 6.1.0-38-cloud-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.147-1 (2025-08-02) x86_64
Build options :
  TARGET  = linux-glibc
  CC      = x86_64-linux-gnu-gcc
  CFLAGS  = -O2 -g -fwrapv -g -O2 -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2
  OPTIONS = USE_OPENSSL=1 USE_LUA=1 USE_SLZ=1 USE_OT=1 USE_QUIC=1 USE_PROMEX=1 USE_PCRE2=1 USE_PCRE2_JIT=1 USE_QUIC_OPENSSL_COMPAT=1
  DEBUG   =

Feature list : -51DEGREES +ACCEPT4 +BACKTRACE -CLOSEFROM +CPU_AFFINITY +CRYPT_H -DEVICEATLAS +DL -ENGINE +EPOLL -EVPORTS +GETADDRINFO -KQUEUE -LIBATOMIC +LIBCRYPT +LINUX_CAP +LINUX_SPLICE +LINUX_TPROXY +LUA +MATH -MEMORY_PROFILING +NETFILTER +NS -OBSOLETE_LINKER +OPENSSL -OPENSSL_AWSLC -OPENSSL_WOLFSSL +OT -PCRE +PCRE2 +PCRE2_JIT -PCRE_JIT +POLL +PRCTL -PROCCTL +PROMEX -PTHREAD_EMULATION +QUIC +QUIC_OPENSSL_COMPAT +RT +SLZ +SSL -STATIC_PCRE -STATIC_PCRE2 +TFO +THREAD +THREAD_DUMP +TPROXY -WURFL -ZLIB +ACME

Default settings :
  bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_TGROUPS=32, MAX_THREADS=1024, default=2).
Built with SSL library version : OpenSSL 3.0.17 1 Jul 2025
Running on SSL library version : OpenSSL 3.0.17 1 Jul 2025
SSL library supports TLS extensions : yes
SSL library supports SNI : yes
SSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
OpenSSL providers loaded : default
QUIC: connection socket-owner mode support : yes
QUIC: GSO emission support : yes
Built with Lua version : Lua 5.4.4
Built with the Prometheus exporter as a service
Built with network namespace support.
Built with OpenTracing support.
Built with libslz for stateless compression.
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Built with PCRE2 version : 10.42 2022-12-11
PCRE2 library supports JIT : yes
Encrypted password support via crypt(3): yes
Built with gcc compiler version 12.2.0

Available polling systems :
      epoll : pref=300,  test result OK
       poll : pref=200,  test result OK
     select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
       quic : mode=HTTP  side=FE     mux=QUIC  flags=HTX|NO_UPG|FRAMED
         h2 : mode=HTTP  side=FE|BE  mux=H2    flags=HTX|HOL_RISK|NO_UPG
  <default> : mode=HTTP  side=FE|BE  mux=H1    flags=HTX
         h1 : mode=HTTP  side=FE|BE  mux=H1    flags=HTX|NO_UPG
       fcgi : mode=HTTP  side=BE     mux=FCGI  flags=HTX|HOL_RISK|NO_UPG
  <default> : mode=SPOP  side=BE     mux=SPOP  flags=HOL_RISK|NO_UPG
       spop : mode=SPOP  side=BE     mux=SPOP  flags=HOL_RISK|NO_UPG
  <default> : mode=TCP   side=FE|BE  mux=PASS  flags=
       none : mode=TCP   side=FE|BE  mux=PASS  flags=NO_UPG

Available services : prometheus-exporter
Available filters :
        [BWLIM] bwlim-in
        [BWLIM] bwlim-out
        [CACHE] cache
        [COMP] compression
        [FCGI] fcgi-app
        [  OT] opentracing
        [SPOE] spoe
        [TRACE] trace

the output of systemctl status haproxy

root@haproxy-01:/home/localadm# systemctl status haproxy
● haproxy.service - HAProxy Load Balancer
     Loaded: loaded (/lib/systemd/system/haproxy.service; enabled; preset: enabled)
     Active: active (running) since Mon 2025-09-29 13:32:02 UTC; 1 week 0 days ago
       Docs: man:haproxy(1)
             file:/usr/share/doc/haproxy/configuration.txt.gz
    Process: 3758559 ExecReload=/usr/sbin/haproxy -Ws -f $CONFIG -c $EXTRAOPTS (code=exited, status=0/SUCCESS)
    Process: 3758561 ExecReload=/bin/kill -USR2 $MAINPID (code=exited, status=0/SUCCESS)
   Main PID: 3148053 (haproxy)
     Status: "Ready."
      Tasks: 3 (limit: 4685)
     Memory: 80.3M
        CPU: 3min 43.709s
     CGroup: /system.slice/haproxy.service
             ├─3148053 /usr/sbin/haproxy -sf 3148144 -x sockpair@4 -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.p>
             └─3758563 /usr/sbin/haproxy -sf 3148144 -x sockpair@4 -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.p>

Oct 07 09:04:24 haproxy-01 haproxy[3758563]: haproxy-01 haproxy[3758563]: 168.63.129.16:60502 [07/Oct/2025:09:04:09.>
Oct 07 09:04:24 haproxy-01 haproxy[3758563]: haproxy-01 haproxy[3758563]: 168.63.129.16:60502 [07/Oct/2025:09:04:09.>
Oct 07 09:04:39 haproxy-01 haproxy[3758563]: haproxy-01 haproxy[3758563]: 168.63.129.16:61056 [07/Oct/2025:09:04:24.>
Oct 07 09:04:39 haproxy-01 haproxy[3758563]: haproxy-01 haproxy[3758563]: 168.63.129.16:61056 [07/Oct/2025:09:04:24.>
Oct 07 09:04:54 haproxy-01 haproxy[3758563]: haproxy-01 haproxy[3758563]: 168.63.129.16:61570 [07/Oct/2025:09:04:39.>
Oct 07 09:04:54 haproxy-01 haproxy[3758563]: haproxy-01 haproxy[3758563]: 168.63.129.16:61570 [07/Oct/2025:09:04:39.>
Oct 07 09:05:09 haproxy-01 haproxy[3758563]: haproxy-01 haproxy[3758563]: 168.63.129.16:62236 [07/Oct/2025:09:04:54.>
Oct 07 09:05:09 haproxy-01 haproxy[3758563]: haproxy-01 haproxy[3758563]: 168.63.129.16:62236 [07/Oct/2025:09:04:54.>
Oct 07 09:05:24 haproxy-01 haproxy[3758563]: haproxy-01 haproxy[3758563]: 168.63.129.16:62819 [07/Oct/2025:09:05:09.>
Oct 07 09:05:24 haproxy-01 haproxy[3758563]: haproxy-01 haproxy[3758563]: 168.63.129.16:62819 [07/Oct/2025:09:05:09.

a full cat of the systemd unit file as shown in the previous commands (or all files if multiple are referred to)

Do you mean this file?

root@haproxy-01:/home/localadm# cat /lib/systemd/system/haproxy.service
[Unit]
Description=HAProxy Load Balancer
Documentation=man:haproxy(1)
Documentation=file:/usr/share/doc/haproxy/configuration.txt.gz
After=network-online.target rsyslog.service
Wants=network-online.target

[Service]
EnvironmentFile=-/etc/default/haproxy
EnvironmentFile=-/etc/sysconfig/haproxy
BindReadOnlyPaths=/dev/log:/var/lib/haproxy/dev/log
Environment="CONFIG=/etc/haproxy/haproxy.cfg" "PIDFILE=/run/haproxy.pid" "EXTRAOPTS=-S /run/haproxy-master.sock"
ExecStart=/usr/sbin/haproxy -Ws -f $CONFIG -p $PIDFILE $EXTRAOPTS
ExecReload=/usr/sbin/haproxy -Ws -f $CONFIG -c $EXTRAOPTS
ExecReload=/bin/kill -USR2 $MAINPID
KillMode=mixed
Restart=always
SuccessExitStatus=143
Type=notify

# The following lines leverage SystemD's sandboxing options to provide
# defense in depth protection at the expense of restricting some flexibility
# in your setup (e.g. placement of your configuration files) or possibly
# reduced performance. See systemd.service(5) and systemd.exec(5) for further
# information.

# NoNewPrivileges=true
# ProtectHome=true
# If you want to use 'ProtectSystem=strict' you should whitelist the PIDFILE,
# any state files and any other files written using 'ReadWritePaths' or
# 'RuntimeDirectory'.
# ProtectSystem=true
# ProtectKernelTunables=true
# ProtectKernelModules=true
# ProtectControlGroups=true
# If your SystemD version supports them, you can add: @reboot, @swap, @sync
# SystemCallFilter=~@cpu-emulation @keyring @module @obsolete @raw-io

[Install]
WantedBy=multi-user.target

information about how haproxy was installed and upgraded on this machine (whether it was installed from source or with a package manager)

I installed and upgraded HAProxy via Ansible. For the upgrade to 3.2 I just changed the repository to add and the version to install (a sketch of those two changes follows the playbook below).

- name: Install HAProxy 3.0-stable (LTS)
  block:
    - name: Add repo using key from URL
      ansible.builtin.deb822_repository:
        name: haproxy
        types: deb
        uris: http://haproxy.debian.net
        suites: bookworm-backports-3.0
        components: main
        architectures: amd64
        signed_by: https://haproxy.debian.net/bernat.debian.org.gpg

    - name: Apt update
      ansible.builtin.apt:
        update_cache: true
        force_apt_get: true
        cache_valid_time: 3600

    - name: Install HAProxy 3.0.x
      ansible.builtin.apt:
        name: haproxy=3.0.*
        state: present
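
For reference, a minimal sketch of the 3.2 variant, assuming the haproxy.debian.net repository publishes a bookworm-backports-3.2 suite; only the suite and the package pin differ from the 3.0 tasks above:

    - name: Add repo using key from URL
      ansible.builtin.deb822_repository:
        name: haproxy
        types: deb
        uris: http://haproxy.debian.net
        suites: bookworm-backports-3.2        # was bookworm-backports-3.0
        components: main
        architectures: amd64
        signed_by: https://haproxy.debian.net/bernat.debian.org.gpg

    - name: Install HAProxy 3.2.x
      ansible.builtin.apt:
        name: haproxy=3.2.*                   # was haproxy=3.0.*
        state: present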

Nothing immediately jumps to mind.

Let’s first of all enable proper logging.

If you don’t know where your /dev/log ends up, I suggest you enable parallel logging to an external syslog server. To do this, add:

log 192.168.0.X:514 syslog debug

to your global section. Your defaults section already contains a proper logging setup.
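
If there is no syslog receiver on that host yet, a minimal rsyslog drop-in on the remote server could look like the sketch below; the file paths are assumptions, and the selector matches the "syslog" facility used in the log line above:

# /etc/rsyslog.d/49-haproxy-remote.conf on the receiving syslog server
module(load="imudp")
input(type="imudp" port="514")
syslog.*    /var/log/haproxy-remote.log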

A couple of more questions:

You are saying that while the connection to Service A remains intact through the old haproxy instance that is being stopped:

New connection to Service A not possible

I need you to explain exactly what this entails. Does the connection to haproxy get established and the request sent, but you do not get an answer, in other words the transaction hangs?

Or does it mean the connection fails?

What is the error exactly?

Ideally you’d reproduce this the following way (a command sketch follows the list):

  • a curl request going to server A through the current (about to be old) haproxy instance
  • you reload haproxy
  • another curl request in verbose mode -vv goes to service A through the new haproxy instance
  • another curl request in verbose mode -vv goes directly to service A bypassing haproxy completely
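
A rough shell sketch of that sequence; the hostnames are taken from the posted configuration and the URIs are purely illustrative placeholders (-k only to skip certificate verification during the test):

# 1) long-running request through the current (soon to be old) instance, left running in the background
curl -vv -k "https://host.sub.domain.tld/long-running-export" &

# 2) reload haproxy while the request above is still in flight
systemctl reload haproxy

# 3) new request to service A through the new haproxy instance
curl -vv -k "https://host.sub.domain.tld/healthcheck"

# 4) the same request sent directly to the backend, bypassing haproxy completely
curl -vv -k "https://azure_cae.sub.domain.tld/healthcheck"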

After this, post the outputs of all curl calls as well as the logs you just enabled. Also check systemd logging during the reloads and afterwards by issuing the journalctl -u haproxy command.

Hello Lukas,

I could have figured out the logs earlier…

In the logs, we can see that after a reload the health checks are processed one after the other; logically, no backend is healthy during this time, and therefore no traffic is routed to the respective backend.

(Background: the configuration that was posted was a condensed configuration with only one backend from a test system, not the one from the production environment, where the backend and ACL configuration is copied several times with different FQDNs.)

To work around this behaviour, we found that we could write the health status to a file (server-state-file) and load it after a reload (load-server-state-from-file). However, this does not work in the production environment.
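
For reference, the combination we tried looks roughly like the sketch below. It matches the commented-out directives in the configuration further down; the extra ExecReload line is only an assumption about how the state dump could be wired into the systemd unit, since the exact dump step is not shown here:

global
        server-state-file /var/lib/haproxy/server-state

defaults
        load-server-state-from-file global

# dump the current state right before each reload so the new process can load it,
# e.g. as an additional ExecReload step ahead of the existing ones:
# ExecReload=/bin/sh -c 'echo "show servers state" | socat stdio /run/haproxy/admin.sock > /var/lib/haproxy/server-state'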

Then we saw that the init-state feature for backends was released with HAProxy 3.1. However, that did not help us either.
After every reload and restart, HAProxy waits for successful health checks before it routes traffic to the backend.

Attached is the current config.

global
        log /dev/log    local0
        log /dev/log    local1 debug

        # Increases Log Level Details
        log-send-hostname

        chroot /var/lib/haproxy
        #server-state-file /var/lib/haproxy/server-state
        stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
        stats timeout 30s
        user haproxy
        group haproxy
        daemon

        # Default SSL material locations
        ca-base         /etc/ssl/certs
        crt-base        /etc/ssl/private

        # See: https://ssl-config.mozilla.org/#server=haproxy&server-version=2.0.3&config=intermediate
        # TLS config up to TLS1.2
        ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384

        # TLS config for TLS1.3
        ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256

        # Set TLS min Version
        ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets

defaults
        log                             global
        #load-server-state-from-file     global
        mode                            http
        option                          httpslog
        option                          log-health-checks
        option                          forwardfor
        timeout                         connect 10s
        timeout                         client  60s
        timeout                         server  60s
        errorfile 400 /etc/haproxy/errors/400.http
        errorfile 403 /etc/haproxy/errors/403.http
        errorfile 408 /etc/haproxy/errors/408.http
        errorfile 500 /etc/haproxy/errors/500.http
        errorfile 502 /etc/haproxy/errors/502.http
        errorfile 503 /etc/haproxy/errors/503.http
        errorfile 504 /etc/haproxy/errors/504.http

#------------------#
# HAProxy Backends #
#------------------#
# Own Healthcheck
listen Page_healthcheck
        bind *:8101 ssl crt /etc/haproxy/ssl/
        monitor-uri     /healthcheck

# HealthCheck WebSite
listen Page_Server_Status
        bind *:8404 ssl crt /etc/haproxy/ssl/
        stats enable
        stats uri /server-status
        stats refresh 10s

#-----------#
# Frontends #
#-----------#
frontend https
        bind *:443 ssl crt /etc/haproxy/ssl/

        # Set Headers
        acl             h_xoh_exists req.hdr(X-Original-Host) -m found
        http-request    set-header X-Original-Host %[req.hdr(Host)] unless h_xoh_exists

        acl             h_xfh_exists req.hdr(X-Forwarded-Host) -m found
        http-request    set-header X-Forwarded-Host %[req.hdr(Host)] unless h_xfh_exists

        acl             h_xfport_exists req.hdr(X-Forwarded-Proto) -m found
        http-request    set-header X-Forwarded-Proto https unless h_xfport_exists

        # ACL for the hostname
        acl             is_example hdr(host) -i service.domain.tld
        use_backend     example if is_example


#----------#
# Backends #
#----------#
<...more backends with the same config...>

backend example
        mode            http
        option          httpchk
        http-check      connect ssl port 443 sni service.internal.domain.tld
        http-check      send meth HEAD uri /healthcheck ver HTTP/2 hdr host service.internal.domain.tld
        http-request    set-header host service.internal.domain.tld
        server          bavp_test service.internal.domain.tld:443 check init-state up inter 60s ssl verify required ca-file /etc/ssl/certs

<...more backends with the same config...>

To conclude the topic:
With Lukas’ help, we were able to analyse and identify the problem.
When restarting the HAProxy service, the backends were initially considered ‘down’ because we only provided the SNI during the health check itself, but not when calling the backend.
The problem was solved by Lukas’ suggestion to include

sni str(example.backend.tld)

in the backend call.
Our backend configuration now looks like this:

backend example
        mode            http
        option          httpchk
        timeout         connect 20s
        timeout         server 300s
        http-check      connect ssl port 443 sni example.backend.tld
        http-check      send meth HEAD uri /healthcheck ver HTTP/2 hdr host example.backend.tld
        http-request    set-header host example.backend.tld
        server          example example.backend.tld:443 check inter 60s ssl verify required ca-file /etc/ssl/certs sni str(example.backend.tld)

Since then, we have had no further problems, and reloading the HAProxy service no longer causes connection interruptions.
Lukas also informed me that this behaviour has been classified as a bug:

HAProxy 3.3 will have an auto-SNI feature.

Thank you Lukas for your time and patience.

Best regards, Jan
