SSL Pass-Through Process Flow?

So recently I built new Haproxy servers to replace ones on EOL versions of Ubuntu. I copied over the original config file and modified it to handle SNI on one frontend. I'm very confident that these servers are operating in an SSL pass-through mode, but questions have come up because the config mentions the SSL cert files on both the frontend and the backends.

Sanitized config here: dpaste/JVPm (Plain Text)

So by specifying the two .pem files in the frontend bind line, does that mean that when the SNI matches, the relevant SSL cert is the one sent back in the response?

And having the .pem's in the server directives on the backends means that Haproxy can establish an encrypted session with that server, right? But if that is the case, how can it work when the server line contains the short DNS name and the IP, if neither of those is in the certificate?

Or maybe that is not how it works; I don't have a firm understanding of what SSL pass-through looks like in this case.

And then to confound the issue, ‘verify none’ on the backend… I don’t understand how client certificates come into play here.

A co-worker who looked at this says it appears to him that the Haproxy server is terminating the initial TLS session and forming another to the backend. I don't feel that's how it works, but I cannot explain otherwise. I really hope someone can help me with this.

Thank you!

EDIT: Might be worthwhile to note that these servers are in the DMZ and the firewall only allows external access in on TCP/636 and TCP/443.

If you see the ssl keyword (which also implies a certificate is configured) on the bind line, then you are terminating SSL here. When you are passing through SSL, then you don’t specify an SSL certificate.
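For example, the difference might look roughly like this on the bind line (made-up names and paths, just to illustrate):

    frontend ldaps_terminating
        mode tcp
        # ssl + crt: haproxy decrypts here using the configured certificate(s)
        bind :636 ssl crt /etc/haproxy/certs/corp-a.pem crt /etc/haproxy/certs/corp-b.pem

    frontend ldaps_passthrough
        mode tcp
        # no ssl/crt keywords: haproxy just forwards the encrypted bytes
        bind :636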

If you see the ssl keyword on the server line, it means you get plaintext traffic from the frontend, and you are starting a new (client) SSL session towards the backend server. verify none means the backend server's SSL certificate is not verified.
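In config terms that corresponds to a server line roughly like this (hypothetical address, just to illustrate re-encryption with verification disabled):

    backend ldaps_servers
        mode tcp
        # haproxy opens a new TLS session to the backend but does not verify its certificate
        server dc1 10.0.0.10:636 ssl verify none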

If you see a certificate configured on the backend server line, it means haproxy uses it for client certificate authentication against the backend server.
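A sketch of that case (hypothetical path): crt on the server line makes haproxy present that certificate as a TLS client certificate to the backend:

    # haproxy authenticates itself to the backend with this client certificate
    server dc1 10.0.0.10:636 ssl crt /etc/haproxy/certs/haproxy-client.pem verify none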

The tcp-request rules in the frontend are only needed for SSL passthrough; they are not needed in this configuration and should be removed.

Thank you lukastribus!
If I can ask you further… Let me see if I understand correctly.

So if our goal is to have SSL pass-through only, but also verify the backend server certificate, I should:
Remove everything after the port number on the bind lines
Remove ssl from the server directives
Change verify none to verify required on the server directives
Ensure that my ca-file contains just what's needed to validate the server's SSL certificate

If this is correct, then in my current configuration, how are things working? It seems that I am terminating the initial SSL session on my load balancer. Since I used the wrong file for the ca-file (it's actually the same cert as in the bind lines), are the connections from Haproxy to the servers unencrypted?

Not technically possible. SSL passthrough implies that you do not verify the backend server certificate, so that doesn't make sense.

SSL passthrough means connecting a TCP socket on the frontend with a TCP socket on the backend, that’s it.
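A minimal passthrough frontend with SNI-based routing could look roughly like this (made-up domain names and addresses); the tcp-request rules are what let haproxy peek at the SNI before forwarding the raw TCP stream:

    frontend ldaps_passthrough
        mode tcp
        bind :636
        # wait for the TLS ClientHello so the SNI can be inspected
        tcp-request inspect-delay 5s
        tcp-request content accept if { req.ssl_hello_type 1 }
        use_backend be_corp_a if { req.ssl_sni -i ldap.corp-a.example }
        use_backend be_corp_b if { req.ssl_sni -i ldap.corp-b.example }

    backend be_corp_a
        mode tcp
        # no ssl keyword: the still-encrypted stream is forwarded untouched
        server dc1 10.0.1.10:636 check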

The only thing you can do is make health-checks with SSL verification, and fail the backend server when the verification fails.
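For example (hypothetical addresses and CA path), a passthrough server line can still run its health checks over TLS and verify the certificate it sees:

    backend be_corp_a
        mode tcp
        # traffic is passed through untouched, but health checks use TLS and verify the
        # certificate; if verification fails, the check fails and the server is marked down
        server dc1 10.0.1.10:636 check check-ssl verify required ca-file /etc/haproxy/internal-ca.pem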

You are not “initially” terminating the SSL session, there is no such thing.

You are not using SSL passthrough, and you are fully terminating SSL on both the frontend and the backend.

verify none means SSL certificate verification is disabled, so it doesn't matter what you put into the ca-file. That doesn't mean the traffic is unencrypted; the traffic is still encrypted, but the certificate of your backend server is not verified.

Thank you again! (Also apologies if my questions are rudimentary, I’m new to proxying)
Ok, so in pass-through there is no intention of verifying a certificate. Haproxy is essentially just opening the door and letting the connection through with its eyes closed, it seems. VERY good to know!
So when you say health checks with SSL verification, is Haproxy actually verifying the server certificate at that point? If so, that sounds like it'd be useful in my situation.
Sorry for the word 'initially'; I just meant to say: the first TLS session is terminated at the Haproxy server, then another TLS session is set up to the backend. But since I have 'verify none', I was not verifying the server's SSL cert.
So you are saying we are terminating on both the front and back ends. Is that, I guess, a normal thing to do? Are there any negatives to doing it that way?

Thank you Lukastribus!!

Correct. The browser (or whatever the SSL client is) remains the SSL client and your backend server remains the SSL server, without any local SSL termination. This is end-to-end encryption, and haproxy would never be able to access the plaintext.

In this case/proposal, haproxy would make health checks with SSL verification enabled, and when it fails, it would mark the server down, so that connections would no longer be established to that server (that’s what normal load-balancing with health checks does).

Connections from clients would still be end-to-end, and haproxy would still not verify the backend server certificate for every connection that is passed through (only health-checks, which impacts new connections indirectly, because haproxy would not send traffic to backend servers that are declared down).

Encrypting/decrypting SSL on the proxy is a common enough configuration; but I don’t know what your requirements are and what you actually want to achieve, so I can’t give you any indications about what is “correct” or “incorrect”.

If you are terminating SSL on haproxy, you need to trust haproxy, because it means unencrypted traffic is handled by haproxy internally (and in memory). But (when we are talking about HTTP for example) you will be able to make load-balancing decisions based on HTTP headers, use cookies for persistence, etc.
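As a sketch of what that buys you (hypothetical names, HTTP rather than LDAPS): with termination, haproxy sees the decrypted HTTP traffic and can insert a persistence cookie:

    frontend www_in
        mode http
        bind :443 ssl crt /etc/haproxy/certs/example.com.pem
        default_backend web_servers

    backend web_servers
        mode http
        # possible only because haproxy terminated TLS and can read/modify the HTTP traffic
        cookie SRVID insert indirect nocache
        server web1 10.0.2.11:443 ssl verify none cookie web1
        server web2 10.0.2.12:443 ssl verify none cookie web2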

If you want the proxy to only handle encrypted traffic, maintaining end-to-end encryption between the client and the server, then you need to pass it through without local SSL termination. You won’t be able to access or modify application data, so cookie persistence will not work.

You are amazing, I’ve learned more with you than the previous 2 months Googling things.

So in our situation, we had a set of older Ubuntu 16.04 servers hosting Haproxy for external LDAPS lookups. This sounds like a weird thing to do, but the networking team had a very detailed allowlist to the DMZ for these servers on ports 389 and 636. The Haproxy config on these servers was terminating TLS at the proxy, then establishing TLS to the backend server(s) but not attempting to verify. They also had no SNI or anything, as this was just 1 domain.

With the new servers I built, we went with only port 636, and I introduced SNI because we wanted to add a second domain. My boss, who has way more experience with load balancers and proxies than I do, wants the encryption at both the front and back ends, as he deems this more secure and performance is not an issue. And being able to verify the 'server' certificates is important too.

So, after a few months of Googling, and trying to ask questions here, on Reddit and a couple other places, I ended up with the config I linked to in my original post. Not one person ever commented on the obvious problems in my config file, like having a ca-file but 'verify none' alongside it.

So, I’m super lucky to have encountered you, you’ve been super helpful! And I’ve more questions for you if that is ok…

So I want to terminate SSL at the proxy, then also at the backend server. If I set 'verify required', then I must have the correct 'ca-file'. Since the backend connection is going to an internal server without a 3rd party SSL certificate, does this mean my ca-file should be the appropriate chain from my internal CA?

Also, the server directives that I've seen in all my Googling always had the IP address:port, which obviously won't match an SSL certificate. Will Haproxy work with FQDN values there instead?

You may want to double check if all your LDAP clients actually support SNI. All browsers do it, sure, but LDAP clients may not.

Yes, you must use an internal CA to sign those backend server certificates and the CA’s certificate needs to be used with ca-file.

You can set sni on the server line, and then it will be used for validation, or alternatively you can use verifyhost.
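Putting that together, a re-encrypting server line with verification against an internal CA might look something like this (made-up names and paths); either the sni parameter or verifyhost gives haproxy a name to check the certificate against, so the address itself can stay an IP:

    backend ldaps_servers
        mode tcp
        # re-encrypt to the backend and verify its certificate against the internal CA
        server dc1 10.0.0.10:636 ssl verify required ca-file /etc/haproxy/internal-ca.pem verifyhost dc1.corp.example check check-ssl
        # alternatively, pass an SNI value and verify against that instead of verifyhost:
        # server dc1 10.0.0.10:636 ssl verify required ca-file /etc/haproxy/internal-ca.pem sni str(dc1.corp.example) check check-ssl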

Right before I saw this reply… I made changes to the dev proxy. I created new .pem files for ca-file containing the domain's subordinate and root CA certs. Then for the third one I used the .bundle from Namecheap, because it's got the 3rd party cert in the computer's personal cert store.
After reloading the daemon and tailing the haproxy.log, the health checks are successful!

I think I'm in business now! Dude, seriously, thank you so much for the help!
