TLS Termination Proxy with both TLS and plain TCP on the same port

We’re considering using HAProxy as a TLS termination proxy, running in front of our TCP server where our clients connect with their front-end apps.

I’m wondering if HAProxy is capable of distinguishing between an SSL/TLS connection and a plain connection on the same port in the frontend section (for example, binding both the plain and the SSL sockets to port 80), and, regardless of whether the connection is encrypted or not, proxying it on to the backend server: if it’s a plain connection, it is simply tunneled through as plain, while if it’s encrypted, it is first decrypted and then tunneled over as a plain connection.

In other words, can HAProxy handle TLS handshakes and unencrypted requests on the same port, and if there is no handshake, just skip TLS and pass the traffic through as it is? It’s important for our clients that they only have a single port to open in their firewalls to allow communication, and that they can configure within their client apps whether they want to use plain or encrypted connections to our servers.

I would strongly suggest not doing this, as it will make matters more complicated for you, and once you allow it, you will have to support it forever and it will be a pain to maintain.

What’s the TCP application? HTTP?


It’s not HTTP/S. It’s an in-house developed trading application (using the FIX protocol to communicate with the trading gateway).

Using the TLS connection should be optional, and for quite a while we will need to keep backward compatibility with plain TCP sockets until all the clients update their apps. Also, most clients connect from corporate environments where access to new TCP ports requires changes to their corporate firewalls, which is why we’re looking for a way to accept both TLS and plain connections on the same port and proxy the traffic to the same backend server, which only accepts plain connections.

We wanted to avoid implementing TLS sockets in our server, which is why we looked at TLS termination proxies as an alternative (time and resource constraints, mostly). Stunnel and Hitch were the first candidates, but now we tend to prefer HAProxy because it also offers load balancing, which may come in handy in the future and can be used for anything else, including HTTP balancing.

If what I asked is possible, I was wondering if you could give some examples or any assistance on how to implement plain TCP passthrough and TLS connections on the same port in the HAProxy frontend?

Thanks

If the client speaks first in your FIX protocol, there is a way to technically do this. Or, if you can accept a 2-3 second delay when not using TLS, then it could also be done.

Check out this post to get an idea:

Still, I would strongly suggest you have your customers open a new port for the TLS traffic.
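
In short, the approach in that post is to bind once, give HAProxy a brief window to look at the first bytes sent by the client, and route based on whether they look like a TLS ClientHello. A rough sketch of the idea (the backend names here are placeholders, not taken from the post):

frontend mixed
  mode tcp
  bind *:9000
  tcp-request inspect-delay 2s
  # ssl_hello_type 1 means the first bytes look like a TLS ClientHello
  tcp-request content accept if { req.ssl_hello_type 1 }
  use_backend be_tls         if { req.ssl_hello_type 1 }
  # non-TLS traffic falls through here once the inspect delay expires
  default_backend be_plain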

This is exactly what I wanted, technically: running multiple services on the same port. But since I’ve only just started playing around with HAProxy, I didn’t want to dive deeper into the documentation until I got confirmation that this is actually possible to implement. I know only one process can be bound to a given port, so the real question was how to implement this in HAProxy, filtering on the content coming in from the client and doing some chained forwarding.

I suppose we can handle an inspection delay of 1-2 seconds.

Also I have one more question, kind of different topic, I may open a new one if you think it’s more appropriate.

The plan is to use Let’s Encrypt certificates, which expire every 3 months, so they need to be renewed periodically. Can HAProxy handle updating/reloading the certs without restarting the daemon whenever this occurs, and without cutting any existing TCP connections? Or at least use the new cert for newly connecting clients on the TLS socket, and leave the existing clients using the old cert connected until they disconnect?

Like I said, it is indeed possible, and the post linked above is a good starting point to see how it can be done. At this point, immersing more deeply into the documentation would be the next step.

No, but reloading haproxy does not cut existing TCP connections. The old process will keep forwarding existing TCP connections until all of them close or timeout, while the new process will handle everything else, with everything that’s changed from a configuration and certificate point of view.

One more question I have that’s not mentioned in the manual (at least I haven’t found it yet).

Why do you need a 5 second inspection delay for TCP? That will be perceived by clients as huge latency when connecting. Every example I saw configured with tcp-request inspect-delay 5s always used 5 seconds.

Could a delay of only a few tens of milliseconds be used here? You mentioned in an earlier post that it can be done if we accept a 2-3 second delay when not using SSL; why such a huge delay?

There needs to be a delay because initially the read buffer on the socket will be empty, and when data does arrive, it may not be what we expect. However, as soon as we match something we do expect, the load-balancing decision can be made at that point, and waiting out the rest of the delay is not needed.

I’ll explain based on the example in the blog post:

tcp-request inspect-delay 5s
tcp-request content accept  if  HTTP
use_backend ssh             if  { payload(0,7) -m bin 5353482d322e30 }
use_backend main-ssl        if  { req.ssl_hello_type 1 }
default_backend openvpn

Here we expect HTTP, SSH, or SSL, and we default to the openvpn backend if it’s none of those 3 expected protocols after 5 seconds. That means HTTP, SSH, and SSL will be matched immediately, because we know what we are looking for. We cannot match OpenVPN though, so it is the default, and an OpenVPN connection will see a 5 second delay here.

So, for your use-case, this means that TLS is fine; since it can be matched immediately, no delay will occur. Whether your FIX protocol will see the delay depends on whether you are able to match it, or whether you have to rely on the fallback via default_backend, the way openvpn does in the example above.

You may be able to get working results with just a 2 or 3 second delay.
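
For what it’s worth, FIX sessions normally begin with the BeginString tag, i.e. the literal bytes "8=FIX", so if your clients speak first you might be able to match plain FIX immediately as well and avoid the fallback delay entirely. An untested sketch along the lines of the example above (383d464958 is simply "8=FIX" in hex; the backend names are placeholders):

tcp-request inspect-delay 2s
tcp-request content accept if { req.ssl_hello_type 1 }
# "8=FIX" as raw bytes: 38 3d 46 49 58
tcp-request content accept if { req.payload(0,5) -m bin 383d464958 }
use_backend fix_tls        if { req.ssl_hello_type 1 }
use_backend fix_plain      if { req.payload(0,5) -m bin 383d464958 }
default_backend fix_plain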

Thanks for the answer, it makes perfect sense now.

From what I’ve experienced so far, even with a 2 second delay the connection is instant when there are only 2 options to route to: either TLS or plain connections. Anyway, I managed to get this working, thanks for all the help that you provided.

I’m posting the solution here in case anyone ever wants to set up something similar. Please let me know if you have any thoughts/recommendations on how to improve it, or if you see any problems with my implementation:

# MAIN FRONTEND LISTENING FOR CLIENT CONNECTIONS
frontend combined
  mode tcp
  log global
  option tcplog
  bind *:9018

  tcp-request inspect-delay 2s
  tcp-request content accept if { req.ssl_hello_type 1 }

  # use the tls loopback backend if SSL handshake
  use_backend tls_loopback if { req.ssl_hello_type 1 }

  # use default backend for everything else
  default_backend plain_loopback

# backend proxying to plain/unencrypted front-end
backend plain_loopback
  mode tcp
  server loopback-for-plain abns@haproxy-plain send-proxy-v2

# backend proxying connection to TLS front-end
backend tls_loopback
  mode tcp
  server loopback-for-tls abns@haproxy-tls send-proxy-v2

# proxy accept loopback - used as a TLS termination proxy, decrypting traffic before sending it to the main backend
frontend fix_tls
  mode tcp
  log global
  option tcplog
  bind abns@haproxy-tls accept-proxy ssl crt /etc/ssl/tlsproxy/tlsproxy.pem
  default_backend fix-backend

# proxy accept loopback - used as a plain proxy, forwarding unencrypted traffic to the main backend as-is
frontend fix_plain
  mode tcp
  log global
  option tcplog
  bind abns@haproxy-plain accept-proxy
  default_backend fix-backend

# main backend server to route to - only working with unencrypted connections
backend fix-backend
  mode tcp
  log global
  option tcplog
  server quickfix 127.0.0.1:9008 check

Looks good. You can also remove the line tcp-request content accept if { req.ssl_hello_type 1 }; I don’t think we need it.

Could I clean up some of the options by putting them somewhere in the ‘defaults’ section?

mode tcp
log global
option tcplog

I’m using these options under all the frontends/backends, so I was wondering if I could get rid of them and list them only once somewhere under ‘global’ or ‘defaults’ and still have them applied everywhere? That would shorten the configuration file. I’m not sure whether I’d get the same result though…

Yes, they should be fine if you put them in the ‘defaults’ section (just make sure the defaults section comes before the frontends and backends).
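
A minimal sketch of what that could look like, using just the three settings you listed (anything else you have, such as timeouts, stays where it is now):

defaults
  mode tcp
  log global
  option tcplog

With that in place, those three lines can be dropped from the individual frontends and backends.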

Thank you, you guys have awesome community support! You’ve been a great help, I really appreciate your assistance.

PS: for some reason the SSL handshake timed out when I tried to remove the tcp-request content accept if { req.ssl_hello_type 1 } line from the proxy config, so I had to leave it in for everything to work properly.

Have a nice day!


It’s very interesting, what you’ve built here.

Could you kindly put together a very simple diagram of what you’re doing? Just for ease of reference.

Well, I kind of described what I was trying to do in my first post, but when time permits I’ll make a drawing of what I implemented, for easy reference.

In short: have HAProxy listen on the same interface and port for both plain TCP connections and TLS-encrypted TCP connections, in front of a backend server that only accepts plain TCP connections (the idea was to avoid implementing a TLS socket in the backend server).

What HAProxy does with incoming connections: if it’s a TLS handshake, it first decrypts and then proxies to the backend (as if a plain TCP connection had come in); otherwise it proxies the plain TCP connection straight through to the backend. In effect I’m offloading encryption from my app to HAProxy. Using the same port for both encrypted and plain TCP means clients connecting from corporate environments don’t need new firewall rules/access granted for a new port (unless perhaps DPI is used on their side).

This solution works as a generic TLS termination proxy for incoming TCP, but you could also put web servers in the backend, or even enable load balancing across backend servers; that just wasn’t needed in this case.
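
Until I get around to a proper drawing, here is a rough sketch of the flow based on the config above:

                        client
                          │
              plain TCP or TLS, same port
                          │
             frontend "combined"  *:9018
               │                        │
          plain TCP              TLS ClientHello
               │                        │
   backend plain_loopback      backend tls_loopback
               │                        │
     frontend fix_plain        frontend fix_tls  (TLS terminated here)
               │                        │
               └───────────┬────────────┘
                           │  plain TCP from here on
                  backend fix-backend
                           │
               quickfix 127.0.0.1:9008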