HAProxy community

SSL handshake failure - pulling my hair out!


So here’s the deal: we have two HAProxy instances set up behind a Google load balancer. TLS is terminated on the HAProxy instances. Behind HAProxy there are six web servers.

We have ONE client that is having issues accessing the system: they are getting an SSL handshake failure, and they are using Java as the client (I’m verifying the version).

In our logs we see thousands of SSL handshake failures. We’re pretty strict (TLS 1.2 only, HSTS), but our cipher support is fairly broad.

They see this in their logs:

%% Initialized: [Session-1, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256]


But it just hangs and they get a handshake error.

java.io.IOException: javax.net.ssl.SSLException: SSL peer shut down incorrectly
   at sun.security.ssl.InputRecord.readV3Record(InputRecord.java:581)
   at sun.security.ssl.InputRecord.read(InputRecord.java:533)

I have no idea at this point what to do. Any help would be great.


Additional information that is needed:

  • the output of haproxy -vv
  • the exact TLS configuration in haproxy
  • the capture of the failing TLS session
  • and, indeed, the exact Java version along with the OS on the client side


You will have to share more of the configuration; the generic SSL config is not enough. I don’t even know whether you use client certificate authentication or not.

Please do share at the very least the bind line; the more, the better. Also please put everything in code format (select the text and hit the </> button above).

I’m confused about what you would like to show with this curl command. When I said to capture the failing TLS traffic, I meant: use tcpdump to capture the TCP port 443 traffic from the IP address of your non-working client, and share the capture file.


Let me see what I can come up with. I don’t have access to the connecting client, so I’m flying a bit in the dark. Thanks for your help!


Update: The client is using Java 1.7.0_191

Working on the rest


This Java version should support both TLSv1.2 and TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, so there is no obvious reason for it to fail.

I suggest you try to reproduce this from elsewhere. If you can’t reproduce it, chances are something is interfering locally with the TLS traffic. A TLS-intercepting firewall or security device at the customer premises may also be something to look at.
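One way to reproduce from elsewhere is to force exactly the protocol and cipher suite the client log shows. A sketch using openssl (the hostname is a placeholder; note that OpenSSL uses its own name, `ECDHE-RSA-AES128-GCM-SHA256`, for the IANA suite `TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256`):

```shell
# Confirm the OpenSSL name corresponding to the suite in the Java log:
openssl ciphers -V 'ECDHE-RSA-AES128-GCM-SHA256'

# Then, against the real endpoint (replace your.domain.com), force TLS 1.2
# and only that suite; a completed handshake here rules out a server-side
# protocol/cipher mismatch:
# openssl s_client -connect your.domain.com:443 -tls1_2 \
#     -cipher ECDHE-RSA-AES128-GCM-SHA256 -servername your.domain.com </dev/null
```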


We have tens of thousands of other clients with no issues, so I’m almost positive this is something to do with their network. The challenge is that it didn’t happen until we started using HAProxy, so of course they are blaming it on us; hence we’re trying to figure out whether it’s something we misconfigured. But so far not a single other person has had any issues, including other clients that use Java. UGH. I’ll see if I can dig anything else out, but as always, thank you for your help!


bind *:443 ssl crt /etc/haproxy/ssl/domainname.com.pem crt /etc/haproxy/ssl/letsencrypt/ crt /etc/haproxy/ssl/custom-domains/

global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
    stats timeout 120s
    user haproxy
    group haproxy
    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private
    tune.ssl.default-dh-param 2048
    ssl-dh-param-file /etc/haproxy/ssl/dhparam.pem
    ssl-default-bind-options no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets
    ssl-default-server-options no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 6000
    timeout client 120000
    timeout server 120000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

backend my-app
    redirect scheme https if !{ ssl_fc }
    option forwardfor
    balance roundrobin
    mode http
    option httpchk GET /health_check
    http-check expect string success
    default-server inter 7s downinter 15s fall 3 rise 2
    server web1 my-1:80 check
    server web2 my-2:80 check
    server web3 my-3:80 check
    server web4 my-4:80 check
    server web5 my-5:80 check
    server web6 my-6:80 check

backend letsencrypt-backend
    server letsencrypt localhost:8888

backend maintenance
    server mtnc localhost:8081

frontend healthcheck
    bind *:8080
    monitor-uri /

frontend http
    bind *:80
    reqadd X-Forwarded-Proto:\ http
    default_backend my-app

frontend https
    bind *:443 ssl crt /etc/haproxy/ssl/domain.com.pem crt /etc/haproxy/ssl/letsencrypt/ crt /etc/haproxy/ssl/custom-domains/
    acl rails_health_check path_beg,url_dec -i /health_check
    http-request deny if rails_health_check
    reqadd X-Forwarded-Proto:\ https
    acl controller-acl ssl_fc_sni -i controller.domain.com
    acl letsencrypt-acl path_beg /.well-known/acme-challenge/
    use_backend letsencrypt-backend if controller-acl letsencrypt-acl
    default_backend my-app
    # default_backend maintenance
    http-response set-header Strict-Transport-Security max-age=15768000
    http-response set-header P3P 'CP="NOI DSP COR NID CURa PUBa NOR STA"'


Right, so what was the configuration prior to HAProxy? If you are forcing TLSv1.2 now, didn’t force TLSv1.2 previously, and a possible TLS interception device does not support TLSv1.2, that would perfectly explain the failure.

Like I said, you need to capture the failing TLS handshake in front of haproxy. I’m sure you know the client IP, so capture it with tcpdump:

tcpdump -pns0 -w port443-capture.cap port 443 and host <ip-address-of-client>

And share the file. Also share the actual domain name that the client connects to (or at least run it through ssltest, looking specifically for any certificate or compatibility issues). You can send me those things via private message if you prefer.
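On the HAProxy side you can also log the negotiated TLS version and cipher per connection, so the failing client’s attempts stand out from the thousands of other handshake failures. A sketch using the documented %sslv/%sslc log-format fields, not a drop-in for your exact config (failed handshakes will show dashes for these fields, but HAProxy also emits an "SSL handshake failure" line for them when logging is enabled on the frontend):

```
frontend https
    # ... existing bind/acl lines unchanged ...
    log-format "%ci:%cp [%t] %ft %b/%s %ST %B %sslv %sslc"
```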


Roger that. We were forcing TLS 1.2 previously; that’s not new, it’s been like that for about eight months. I’ll get the dump file. I can’t trigger the failure myself, so let me see what I can do.