Websockets transition during graceful reload

My problem is that I use WebSockets with stick-tables and need to update the configuration periodically with an external tool. As I understand the process, several worker PIDs should keep running while their connections are still established, but it doesn’t work that way for me. Every time I reload the configuration, the WebSocket connections are dropped.

Reload command:

kill -USR2 `cat /etc/haproxy/haproxy.pid`
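To see whether the old worker is still serving existing connections after a reload, something like this should work (assuming a procps-style ps; the pidfile holds the master PID and the workers are its children):

ps --ppid `cat /etc/haproxy/haproxy.pid` -o pid,etime,args

After a reload, the old worker should stay listed next to the new one until its last connection closes.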

Configuration snippet:

global
  pidfile /etc/haproxy/haproxy.pid
  master-worker
  mworker-max-reloads 20
  log stdout local0
  maxconn 500000
  stats socket /var/run/haproxy.sock mode 660 level admin
  stats timeout 30s
  nbproc 1
  nbthread 2
  cpu-map auto:1/1-2 0-1

defaults
  mode http
  log global
  timeout connect 5s
  timeout client 10s
  timeout server 60s
  timeout client-fin 1s
  timeout server-fin 1s
  timeout http-request 10s
  timeout http-keep-alive 300s

  option httplog
  option redispatch
  option dontlognull
  option forwardfor

peers local
  peer haproxy haproxy:1024

frontend stats
  bind :32600
  option http-use-htx
  option dontlog-normal
  http-request use-service prometheus-exporter if { path /metrics }
  stats enable
  stats uri /
  stats refresh 20s

frontend https-in
  bind *:8433 accept-proxy
  mode http

  # Define hosts
  acl host hdr(host) -i doman.com

  ## figure out which one to use
  use_backend servers if host

frontend http-in
  bind :8080 accept-proxy
  mode http
  redirect scheme https code 301 if !{ ssl_fc }

backend servers
  http-request set-header X-Real-IP %[src]
  http-request set-header X-Forwarded-For %[src]
  http-request set-header X-Forwarded-Proto %[src]
  http-request set-header Connection "upgrade"
  http-request set-header Host %[src]

  balance leastconn
  stick-table type string len 80 size 1m expire 8h peers local
  stick on url_param(mrid)

  timeout server  120s
  server server1 address1
  server server2 address2
  ...
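One way to check whether the stick-table entries themselves survive the reload (assuming socat is available; the table is named after the backend, servers) is to dump it over the stats socket before and after sending the signal:

echo "show table servers" | socat stdio /var/run/haproxy.sock

If the entries are still there but the WebSocket connections drop anyway, the problem is with the connections rather than with stickiness.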

Can you help me find out whether there is another way to terminate the old process gracefully? Or do you have any thoughts on this topic?

We have fixed this issue. Actually, the problem was that we hadn’t set a tunnel timeout, so the WS pings weren’t frequent enough to keep the connection alive within the client timeout we had set.
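For anyone hitting the same thing, a minimal sketch of the fix (the 1h value is only an example, not necessarily what we run):

defaults
  timeout client 10s
  timeout server 60s
  timeout tunnel 1h    # applies once the connection is upgraded (e.g. WebSocket)

Once a connection becomes a tunnel, timeout tunnel supersedes timeout client and timeout server, so idle WebSockets are no longer cut off by the short client timeout.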
