Performance issue with SSL Passthrough?

I’m trying to use HAProxy in front of several Nginx/PHP servers to host a few dozen websites. I was hoping to use the SSL Passthrough approach with TCP mode to keep the load balancer lightweight and leave the connections encrypted all the way back to the web servers.

I’m currently using two small 1 vCPU / 1 GB RAM DigitalOcean droplets with Ubuntu 24.04. I’ll call one the load balancer (HAProxy) and the other the web server (Nginx/PHP).

I have a fairly stock setup, except that backend assignment is handled with a map file to make it easier to manage. My haproxy.cfg file looks like this:

defaults unnamed_defaults_1
  mode http
  log global
  option httplog
  option dontlognull
  timeout connect 5000
  timeout client 10000
  timeout queue 10000
  timeout server 10000
  errorfile 400 /etc/haproxy/errors/400.http
  errorfile 403 /etc/haproxy/errors/403.http
  errorfile 408 /etc/haproxy/errors/408.http
  errorfile 500 /etc/haproxy/errors/500.http
  errorfile 502 /etc/haproxy/errors/502.http
  errorfile 503 /etc/haproxy/errors/503.http
  errorfile 504 /etc/haproxy/errors/504.http

frontend https from unnamed_defaults_1
  mode tcp
  maxconn 5000
  bind *:443 name https_*_443
  use_backend %[req.ssl_sni,lower,map(/etc/haproxy/maps/hosts.map,webservers)]
  default_backend webservers

backend website1 from unnamed_defaults_1
  mode tcp
  balance roundrobin
  server webserver 10.0.0.1:443 check maxconn 200
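
For reference, the hosts.map used in the use_backend line is just a plain hostname-to-backend mapping; a minimal sketch (the hostnames and the second backend here are made up for illustration) might look like:

# /etc/haproxy/maps/hosts.map
# <SNI hostname>            <backend name>
website1.example.com        website1
www.website1.example.com    website1
website2.example.com        website2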

In testing my setup, everything worked well except that I noticed random 525 SSL Handshake Failed errors, but only when I quickly refreshed pages that spawned extra JavaScript and CSS requests.

I first thought this was an issue with Nginx, since there weren’t any errors logged by HAProxy and there was hardly any load on the server. I also noticed in HATop that when I triggered a 525 error in my browser it would register as an ECONN for the backend. Long story short, I looked over the web server and increased log levels but never saw any error messages for SSL connection failures. I decided to remove the load balancer from the mix and go straight to the web server, and this eliminated the random 525 errors.

I wanted to try putting the load balancer back in the mix but terminating SSL with HAProxy, to see if that also generated the random connection errors, but it worked without issue. My updates to haproxy.cfg were just changing the mode from tcp to http and talking to the backend on port 80 instead of 443.

frontend https from unnamed_defaults_1
  mode http
  maxconn 5000
  bind *:443 ssl crt /etc/haproxy/ssl/
  # with TLS terminated on the bind line, SNI comes from the handshake rather than the raw buffer
  use_backend %[ssl_fc_sni,lower,map(/etc/haproxy/maps/hosts.map,webservers)]
  default_backend webservers

backend website1 from unnamed_defaults_1
  mode http
  balance roundrobin
  server webserver 10.0.0.1:80 check maxconn 200

At this point I decided to get some better comparison metrics with a simple Apache Bench run (ab -n 100 -c 20):

1. Loadbalancer w/ SSL Passthrough - TCP Mode to backend
Requests per second: 4
Non-2xx responses: 20 (20% failure rate)

2. No Loadbalancer - Direct to Nginx
Requests per second: 20
Non-2xx responses: 0

3. Loadbalancer w/ SSL Termination - HTTP mode to backend
Requests per second: 27
Non-2xx responses: 0

I’ve tried researching this issue but I just can’t find any explanation for why the SSL passthrough/TCP mode approach produces so many errors and such poor performance. I tried lowering the timeout values because I thought connections were maybe being kept open too long, exhausting resources and leading to connection errors. But I had identical timeout settings with SSL termination, so I don’t think that theory holds up.

I apologize if this is something stupid I’ve overlooked but any help at all would be greatly appreciated.

——

haproxy -vvv output

HAProxy version 2.8.5-1ubuntu3.3 2025/04/09 - https://haproxy.org/
Status: long-term supported branch - will stop receiving fixes around Q2 2028.
Known bugs: http://www.haproxy.org/bugs/bugs-2.8.5.html
Running on: Linux 6.8.0-51-generic #52-Ubuntu SMP PREEMPT_DYNAMIC Thu Dec  5 13:09:44 UTC 2024 x86_64
Build options :
  TARGET  = linux-glibc
  CPU     = generic
  CC      = cc
  CFLAGS  = -O2 -g -O2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -flto=auto -ffat-lto-objects -fstack-protector-strong -fstack-clash-protection -Wformat -Werror=format-security -fcf-protection -fdebug-prefix-map=/build/haproxy-btl1fH/haproxy-2.8.5=/usr/src/haproxy-2.8.5-1ubuntu3.3 -Wdate-time -D_FORTIFY_SOURCE=3 -Wall -Wextra -Wundef -Wdeclaration-after-statement -Wfatal-errors -Wtype-limits -Wshift-negative-value -Wshift-overflow=2 -Wduplicated-cond -Wnull-dereference -fwrapv -Wno-address-of-packed-member -Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-clobbered -Wno-missing-field-initializers -Wno-cast-function-type -Wno-string-plus-int -Wno-atomic-alignment
  OPTIONS = USE_OPENSSL=1 USE_LUA=1 USE_SLZ=1 USE_SYSTEMD=1 USE_QUIC=1 USE_PROMEX=1 USE_PCRE2=1 USE_PCRE2_JIT=1 USE_QUIC_OPENSSL_COMPAT=1
  DEBUG   = -DDEBUG_STRICT -DDEBUG_MEMORY_POOLS

Feature list : -51DEGREES +ACCEPT4 +BACKTRACE -CLOSEFROM +CPU_AFFINITY +CRYPT_H -DEVICEATLAS +DL -ENGINE +EPOLL -EVPORTS +GETADDRINFO -KQUEUE -LIBATOMIC +LIBCRYPT +LINUX_CAP +LINUX_SPLICE +LINUX_TPROXY +LUA +MATH -MEMORY_PROFILING +NETFILTER +NS -OBSOLETE_LINKER +OPENSSL -OPENSSL_WOLFSSL -OT -PCRE +PCRE2 +PCRE2_JIT -PCRE_JIT +POLL +PRCTL -PROCCTL +PROMEX -PTHREAD_EMULATION +QUIC +QUIC_OPENSSL_COMPAT +RT +SHM_OPEN +SLZ +SSL -STATIC_PCRE -STATIC_PCRE2 +SYSTEMD +TFO +THREAD +THREAD_DUMP +TPROXY -WURFL -ZLIB

Default settings :
  bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_TGROUPS=16, MAX_THREADS=256, default=1).
Built with OpenSSL version : OpenSSL 3.0.13 30 Jan 2024
Running on OpenSSL version : OpenSSL 3.0.13 30 Jan 2024
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
OpenSSL providers loaded : default
Built with Lua version : Lua 5.4.6
Built with the Prometheus exporter as a service
Built with network namespace support.
Built with libslz for stateless compression.
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Built with PCRE2 version : 10.42 2022-12-11
PCRE2 library supports JIT : yes
Encrypted password support via crypt(3): yes
Built with gcc compiler version 13.3.0

Available polling systems :
      epoll : pref=300,  test result OK
       poll : pref=200,  test result OK
     select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
       quic : mode=HTTP  side=FE     mux=QUIC  flags=HTX|NO_UPG|FRAMED
         h2 : mode=HTTP  side=FE|BE  mux=H2    flags=HTX|HOL_RISK|NO_UPG
       fcgi : mode=HTTP  side=BE     mux=FCGI  flags=HTX|HOL_RISK|NO_UPG
         h1 : mode=HTTP  side=FE|BE  mux=H1    flags=HTX|NO_UPG
  <default> : mode=HTTP  side=FE|BE  mux=H1    flags=HTX
       none : mode=TCP   side=FE|BE  mux=PASS  flags=NO_UPG
  <default> : mode=TCP   side=FE|BE  mux=PASS  flags=

Available services : prometheus-exporter
Available filters :
	[BWLIM] bwlim-in
	[BWLIM] bwlim-out
	[CACHE] cache
	[COMP] compression
	[FCGI] fcgi-app
	[SPOE] spoe
	[TRACE] trace

The first thing you need to fix is the req.ssl_sni lookup. It cannot reliably work without waiting for the client hello. As per the documentation for req.ssl_sni, this should be:

# Wait for a client hello for at most 5 seconds
tcp-request inspect-delay 5s
tcp-request content accept if { req.ssl_hello_type 1 }
use_backend %[req.ssl_sni,lower,map(/etc/haproxy/maps/hosts.map,webservers)]
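
Dropped into the frontend from the original post, that would look roughly like this (same map-based routing; only the two tcp-request lines are new):

frontend https from unnamed_defaults_1
  mode tcp
  maxconn 5000
  bind *:443 name https_*_443
  # wait up to 5 seconds for a complete TLS client hello before routing on SNI
  tcp-request inspect-delay 5s
  tcp-request content accept if { req.ssl_hello_type 1 }
  use_backend %[req.ssl_sni,lower,map(/etc/haproxy/maps/hosts.map,webservers)]
  default_backend webservers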

Regarding the performance issue: this likely comes from the lack of a global maxconn setting, or maxconn server settings that are too low.

If you have a frontend with maxconn 5000 and another 5000 total of maxconn across the server lines, you need at least a global maxconn of 10000.

global
 maxconn 10000

So you need to consider all 3:

  • global maxconn (per process maxconn counting both frontend and backend connections)
  • frontend maxconn
  • per server maxconn
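
Put together, a sketch of the three levels (the numbers are only illustrative and should be sized to your droplet’s memory) looks like:

global
  maxconn 10000    # per-process cap, counts frontend and backend connections together

frontend https from unnamed_defaults_1
  maxconn 5000     # cap on concurrent client connections for this frontend

backend website1 from unnamed_defaults_1
  server webserver 10.0.0.1:443 check maxconn 200    # per-server cap; extra requests are queued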

Thank you so much!

I added the block to wait for a client hello and re-ran my ab test. All responses are 200 and it’s processing 100 req/s. I can’t believe I overlooked these lines from the docs but I’m so grateful that you found this solution for me!

I will go back and look at all of my maxconn settings and make sure they are correct, thank you for pointing this out.
