The goal I am trying to achieve is to keep a few SSL connections open from HAProxy on a replica in the EU to our primary server in the US. Due to latency and TLS handshakes, API calls cost 600-700 ms when a connection has to be established for every call. Right now I keep a connection pool using a local nginx proxy as the backend, which works well and brings request times down to 200 ms, which is much more bearable. However, we are experiencing random failures between nginx and the AWS ELB in the US; I couldn't figure out why exactly, but some requests fail under high load.
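For reference, the nginx side is just the standard upstream-keepalive pattern, roughly like this (simplified; the upstream name, pool size, and hostname here are placeholders):

upstream primary_us {
    server internal-example.elb.amazonaws.com:443;
    keepalive 8;    # keep up to 8 idle connections to the primary
}

server {
    listen 127.0.0.1:8080;
    location / {
        proxy_pass https://primary_us;
        proxy_http_version 1.1;            # upstream keepalive requires HTTP/1.1
        proxy_set_header Connection "";    # clear "close" so connections get reused
        proxy_ssl_server_name on;          # send SNI on the upstream TLS handshake
        proxy_set_header Host example.com;
    }
}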
Hence I thought I would try HAProxy as a replacement for nginx, but I hit a few problems:
- Firstly, it seems impossible to do connection pooling to an SNI backend. I wish there were a way to turn this safety check off, since we only ever connect to one hostname on that IP, so nothing can go wrong at that level (the server line I tried is sketched at the end of this post).
- I then tried to drop TLS and do plain HTTP, just to see if it would be more reliable than nginx, but the config below seems to fail to keep connections to the backend alive.
frontend replica_front
    bind 127.0.0.1:8080
    mode http
    default_backend primary

backend primary
    mode http
    balance roundrobin
    http-request set-header Host example.com
    http-request set-header Connection keep-alive   # probably a no-op: haproxy manages the Connection header itself in http mode
    option httpchk HEAD /login HTTP/1.1\r\nHost:example.com
    option http-keep-alive
    option srvtcpka
    http-reuse always
    server node1 internal-*.elb.amazonaws.com:80 check resolvers dns resolve-prefer ipv4

resolvers dns
    nameserver a 8.8.8.8:53
    nameserver b 8.8.4.4:53
    nameserver c 127.0.0.1:53
After making a few curl requests locally against HAProxy, it does not seem to have shared or reused any backend sessions between client sessions.
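This is roughly how I checked (the port, path, and socket path are from my setup; "show servers conn" needs a fairly recent HAProxy and a configured stats socket):

# a few requests in a row, then look at connections toward the ELB
for i in 1 2 3; do curl -s -o /dev/null http://127.0.0.1:8080/login; done
ss -tan 'dport = :80'    # a new ESTAB entry showed up per request instead of one being reused

# idle server connections can also be dumped on the runtime socket
echo "show servers conn" | socat stdio /var/run/haproxy.sock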
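And for completeness, the TLS/SNI variant I could not get pooling to work with looked roughly like this (the CA path is illustrative):

backend primary_tls
    mode http
    http-reuse always
    server node1 internal-*.elb.amazonaws.com:443 ssl verify required ca-file /etc/ssl/certs/ca-certificates.crt sni str(example.com) check resolvers dns resolve-prefer ipv4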
Any help here would be appreciated.