How To Scale SSL with HAProxy and Nginx

I recently had to proxy SSL traffic over TCP while forwarding the clients’ original IP addresses, and wrote a blog post describing what I learned:

Hopefully it helps anyone setting up HAProxy to load balance SSL termination while keeping the correct client IP in their server logs.
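
The core of the setup, as a rough sketch (names, addresses and ports here are placeholders, not copied from the blog post): HAProxy stays in TCP mode so the TLS traffic passes through untouched, and send-proxy adds the PROXY protocol preamble that carries the client's real IP to the backends.

frontend ssl_in
   mode tcp
   bind :443
   default_backend nginx_ssl

backend nginx_ssl
   mode tcp
   balance roundrobin
   # send-proxy prepends the PROXY protocol line carrying the original client IP
   server web1 10.0.0.11:443 check send-proxy
   server web2 10.0.0.12:443 check send-proxy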

This is good info. I’m aware of this directive but have never really used it.

One question: does it also pass all originating HTTP headers to the backend? When proxying through a load balancer, some headers are usually trimmed off at the proxy layer, like REMOTE_USER etc. Will these also pass through if we use the proxy_protocol directive to the backends?

Srinivas Kotaru

With mode tcp, HAProxy won’t inspect the HTTP headers and will pass the raw TCP packets through unchanged.
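
For illustration, the nginx side might look roughly like this (a sketch assuming the realip module is compiled in; addresses and paths are placeholders): nginx is told to expect the PROXY protocol preamble and uses it as the client address, while the HTTP headers themselves arrive exactly as the client sent them.

server {
   # expect the PROXY protocol preamble that HAProxy's send-proxy adds
   listen 443 ssl proxy_protocol;

   ssl_certificate     /etc/nginx/ssl/example.crt;
   ssl_certificate_key /etc/nginx/ssl/example.key;

   # trust the preamble only from the load balancer and use it as the client IP
   set_real_ip_from 10.0.0.5;
   real_ip_header   proxy_protocol;

   location / {
      proxy_pass http://127.0.0.1:8080;
      proxy_set_header X-Real-IP $remote_addr;
   }
}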

Yes, that much I knew; I’m wondering about using http mode with send-proxy.

Hi Allan,

I read your blog post. We are trying to use a similar architecture: HAProxy going to an nginx server, where we terminate SSL and send the request to the service on the same VM on a different port. We are running into a load testing issue. I asked this question on Stack Overflow, and the details are at http://stackoverflow.com/questions/40231382/non-http-response-code-javax-net-ssl-sslhandshakeexception-non-http-response-me. Do you have any pointers to what the problem could be? If you load tested your application, how much load did you test it against?

You don’t have a problem with the load; in this configuration not even a single request will work.

You need to remove the ssl keyword from the HAProxy configuration, since you are terminating SSL on nginx and passing the request as-is from the frontend to the backend.

That means backend2 needs to look like this:

backend backend2
   mode tcp
   balance roundrobin
   option ssl-hello-chk
   server qa_node server:443 maxconn 200 check

Remove “ssl verify none” and, while we are at it, remove “check port” as well; it’s useless if you already specify the same port earlier.
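
Since SSL is terminated on nginx in this setup, the nginx side would look roughly like this (just a sketch; the certificate paths and the local service port are placeholders, not taken from your setup):

server {
   listen 443 ssl;

   ssl_certificate     /etc/nginx/ssl/qa.crt;
   ssl_certificate_key /etc/nginx/ssl/qa.key;

   location / {
      # hand the decrypted request to the service on the same VM
      proxy_pass http://127.0.0.1:8080;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
   }
}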

Another major configuration mistake is in the frontends:

frontend http_stateless *:80
frontend https_stateless *:443

Remove the legacy listen configuration from these lines; you are already configuring the port with the “bind” keyword. There should not be any statements after the frontend name.
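
That is, the frontend declarations should reduce to something like this (keeping whatever bind lines you already have; the ports shown here are only placeholders):

frontend http_stateless
   bind *:80

frontend https_stateless
   bind *:443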

Thanks Lukas.

I followed your suggestions and changed my HAProxy configuration to:

global
  daemon
  pidfile /var/run/haproxy.pid
  maxconn 120000
  nbproc 1
  group haproxy
  user haproxy
  log localhost local0
  log-send-hostname

defaults
   mode http
   option httplog
   stats uri /haproxy-stats
   timeout connect 5000s
   timeout client 3000s
   timeout server 3000s
   balance roundrobin
   default-server inter 5s fall 3 rise 1

frontend fe1
   log global
   bind :::<http_port> v4v6
   maxconn 50000
   option httplog
   option forwardfor
   default_backend backend1

frontend fe2
   log global
   bind :::<https_port> v4v6
   maxconn 50000
   option tcplog
   mode tcp
   default_backend backend2

backend backend1
   mode http
   balance roundrobin
   option forwardfor
   server qa_node <ip>:<http_port> check

backend backend2
   mode tcp
   balance roundrobin
   option ssl-hello-chk
   server qa_node <ip>:<https_port> maxconn 200 check

but even then I get an SSL handshake exception when I put load on it:

Non HTTP response code: javax.net.ssl.SSLHandshakeException,Non HTTP response message: Remote host closed connection during handshake,receivers 1-38,text,false,,2439,150,150,0,0

The load is generated by a JMeter script using 5 different servers, at 150 tps with 500 requests/sec each, for a total of 2500 requests/sec. We subjected the nginx VM to the same load and it worked, but it fails after 2-3 minutes when we put the same load on the HAProxy node that sits in front of the (nginx + service) VM. I have also changed maxconn in backend2 to 2600 but still get the same exception.