Challenges proxying to RDS gateway

Hi, I am new here and to HAProxy. I am hoping to get some pointers to keep me from spending days or weeks on trial and error. I am happy to read documentation, but there seems to be a bit of a shortage on my specific issue, so some help pointing me in the right direction would be appreciated.

I am configuring my first HAProxy purely as a reverse proxy (no load balancing at this point). I have a test config so far that works to:

  • in http mode, terminate SSL and then direct the requests to the appropriate backend servers.

  • in tcp mode, direct all traffic to my Remote Desktop Services Gateway server (Windows Server 2016) using SSL passthrough.

The reason I am trying SSL passthrough on the RDS gateway is that I can’t find a way to make this gateway server not enforce TLS, and as long as I can’t do that, I believe I can’t terminate the SSL at HAProxy unless I then re-encrypt it as it is sent to the backend (I believe this is called “SSL bridging”).
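For reference, my understanding is that SSL bridging would look roughly like the sketch below. This is a minimal, hypothetical example (the frontend/backend names, certificate path and gateway address are all placeholders), not an actual config:

# Hypothetical sketch of SSL bridging: terminate TLS at HAProxy,
# then re-encrypt on the way to the RDS gateway.
frontend rds_bridge
    bind :443 ssl crt /etc/ssl/private/mydomain.pem
    mode http
    default_backend be-rds-bridged

backend be-rds-bridged
    mode http
    # "ssl" re-encrypts to the backend server; "verify none" skips
    # certificate checking and is only for testing -- use
    # "verify required" with a ca-file in production.
    server rds-gw 192.0.2.10:443 ssl verify none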

Everything has to come in on port 80 or 443 at the frontend; that is the whole purpose of this reverse proxy: not having to give external users different port numbers to connect to different websites (we have just one external IP address). But from what I see so far, to do SSL passthrough my frontend has to be in tcp mode, while all my other sites have to be in http mode because they will be doing SSL termination. I can’t have both tcp mode and http mode on the same frontend, but I think I can split port 443 into two frontends by doing something along the lines of https://discourse.haproxy.org/t/two-workloads-on-the-same-port/1879. I haven’t tested that out yet but it looks quite doable.

My concern is that even if I do split the frontend in two as described in the above link, I don’t think I can sort out which traffic has to go to which frontend unless I first terminate the SSL in order to inspect the headers and see which URL the client is trying to reach. I would need to send all traffic targeting rds.mydomain.com to the tcp mode frontend and all other traffic to the http mode frontend. So, my questions are:

  1. Is it even possible to determine what domain name is being requested without terminating the SSL? If so, how?

  2. If the above is not possible, is SSL bridging the solution: decrypt, test which domain is requested, then re-encrypt and direct to the appropriate backend?


I’ve experimented a bit. The answer to my question 1 is yes: it is possible to determine what domain name is being requested without terminating the SSL. We can do it through req_ssl_sni, and it works great.
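For anyone reading along, the basic shape of it is something like this (a minimal sketch; the backend names, hostnames and addresses are placeholders, not my real config):

# Minimal sketch: route on SNI in tcp mode, without terminating TLS.
frontend sni_sort
    bind *:443
    mode tcp
    # wait (up to 5s) for the TLS ClientHello so the SNI can be read
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    # send the RDS gateway hostname to a passthrough backend
    use_backend rds_passthrough if { req_ssl_sni -i rds.mydomain.com }
    default_backend everything_else

backend rds_passthrough
    mode tcp
    server rds-gw 192.0.2.10:443

backend everything_else
    mode tcp
    server web 192.0.2.20:443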

I also found that SSL bridging works, but to make it work I have to add header-rewriting code to the frontend, which I don’t want applied to all my non-RDS-gateway traffic. So as far as I can see at this point, with either method (SSL passthrough or SSL bridging) I have to split the frontend into two. It’s now just down to which method is the fastest or most efficient.

Clearly using SSL bridging for my RDS traffic is expensive (decrypting and then re-encrypting all of its traffic), and if I can identify the RDS traffic using req_ssl_sni there is no need to use SSL bridging. So I imagine using SSL passthrough for my RDS traffic is the most efficient.

What worries me, though, is that to do this I would have to run the req_ssl_sni test on all traffic in order to sort it into the two frontends (one in http mode for SSL termination and the other in tcp mode for SSL passthrough), and I have to put “tcp-request inspect-delay 5s” in front of req_ssl_sni. So how much extra time is taken up waiting for the inspect-delay? I’m sure it is not 5 seconds, but that is the number I see used in most configuration examples. I tried once with that line commented out and my web client could not reach the site, so the inspect-delay does seem to be required. I also tried changing it to one second rather than 5 and it works in my tests. Does anyone know whether, in real life, there is any measurable cost to having such an inspect-delay for each incoming request?

I would guess my traffic is about 50% or more RDS gateway use and 50% or less everything else. That’s not a measurement, just a guess.

Unless I hear differently, I’ll set this up to inspect each packet with req_ssl_sni to separate the traffic, then run the RDS traffic straight through with no SSL offloading and send everything else to a frontend that terminates SSL.

Hi,

  1. See the inspect-delay as “how long HAProxy should wait to collect the expected information”; so if the SNI arrives after 1ms, then HAProxy will wait only 1ms. And for your TLS traffic, the SNI should always arrive very fast.

  2. There is no impact on performance, because the SNI processing and routing is done only once, at the very start of the TCP connection. So the statement “inspect each packet” does not apply.


Thanks Baptiste. That makes sense. I’ll go ahead as planned then.

I’ve got a working config now that does everything I need (mainly SSL passthrough for the RDS gateway and SSL termination for everything else):

global
        maxconn 5000
        log /dev/log    local0
        log /dev/log    local1 notice
        chroot /var/lib/haproxy
        stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
        stats timeout 30s
        user haproxy
        group haproxy
        daemon

        # Default SSL material locations
        ca-base /etc/ssl/certs
        crt-base /etc/ssl/private

        # See: https://ssl-config.mozilla.org/#server=haproxy&server-version=2.0.3&config=intermediate
        ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
        ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
        ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets

defaults
        log     global
        mode    http
        option  httplog
        option  dontlognull
        timeout connect 5000
        timeout client  50000
        timeout server  50000
        errorfile 400 /etc/haproxy/errors/400.http
        errorfile 403 /etc/haproxy/errors/403.http
        errorfile 408 /etc/haproxy/errors/408.http
        errorfile 500 /etc/haproxy/errors/500.http
        errorfile 502 /etc/haproxy/errors/502.http
        errorfile 503 /etc/haproxy/errors/503.http
        errorfile 504 /etc/haproxy/errors/504.http

frontend web_80
    bind :80
    # Test URI to see if it's a Let's Encrypt request
    acl letsencrypt-acl path_beg /.well-known/acme-challenge/
    use_backend letsencrypt-backend if letsencrypt-acl
    default_backend be-scheduler-80

frontend web_8443
    # make sure old links pointing to port 8443 keep working
    bind :8443 ssl crt /etc/ssl/$MY_DOMAIN.ca/$MY_DOMAIN.ca.pem
    mode http
    acl letsencrypt-acl path_beg /.well-known/acme-challenge/
    use_backend letsencrypt-backend if letsencrypt-acl
    use_backend be-epicor-80 if { req.hdr(host) epicor.$MY_DOMAIN.ca:8443 }
    default_backend be-scheduler-80

frontend Sorting_443
    bind *:443
    mode tcp
    option tcplog
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend sorted-ts-ssl-passthrough if { req_ssl_sni -i epicor.$MY_DOMAIN.ca }
    default_backend sorted-http-ssl-terminated

backend sorted-ts-ssl-passthrough
    mode tcp
    server loopback-passthrough abns@haproxy-passthrough send-proxy-v2

backend sorted-http-ssl-terminated
    mode tcp
    server loopback-terminated abns@haproxy-terminate send-proxy-v2

frontend ts_passthrough
    bind abns@haproxy-passthrough accept-proxy
    mode tcp
    option tcplog
    default_backend be-col-ts

frontend https-terminated
    bind abns@haproxy-terminate accept-proxy ssl crt /etc/ssl/$MY_DOMAIN.ca/$MY_DOMAIN.ca.pem
    mode http
    acl letsencrypt-acl path_beg /.well-known/acme-challenge/
    use_backend letsencrypt-backend if letsencrypt-acl
    use_backend be-eroko-8080 if { req.hdr(host) scheduler.$MY_DOMAIN.ca }
    use_backend be-epicor-80 if { req.hdr(host) epicor.$MY_DOMAIN.ca }
    use_backend be-colonial-8008 if { req.hdr(host) portal.$MY_DOMAIN.ca }
    default_backend be-scheduler-80


backend letsencrypt-backend
    server letsencrypt 127.0.0.1:8888

backend be-scheduler-80
    server col-scheduler 10.0.20.57:80 check maxconn 1000

backend be-col-ts
    mode tcp
    option ssl-hello-chk
    server col-ts 10.0.20.56:443

backend be-eroko-8080
    server col-scheduler 10.0.20.57:8080 check maxconn 1000

backend be-colonial-8008
    server col-scheduler 10.0.20.57:8008 check maxconn 1000

backend be-epicor-80
    server col-erp 10.0.20.52:80 check maxconn 1000

Still a few things to do before going live with it. For instance, I want to add a separate network interface on each of the web servers and on the HAProxy box, and make a private network that only connects those machines and has no external network access. That way I shouldn’t have to worry about the unencrypted (SSL-terminated) traffic between HAProxy and the web servers. Is this standard practice? I kind of think there should be some protection there.
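If I go that route, I imagine the only config change would be pointing the backends at the private addresses and, optionally, pinning HAProxy’s outgoing address to its interface on that network. A rough sketch with made-up addresses (assuming a private subnet such as 192.168.100.0/24):

# Hypothetical sketch: reach a web server over an isolated backend-only network.
# The 192.168.100.x addresses below are placeholders for the private subnet.
backend be-scheduler-80
    # connect from HAProxy's interface on the private network...
    source 192.168.100.1
    # ...to the scheduler's address on the private network
    server col-scheduler 192.168.100.57:80 check maxconn 1000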


You can reduce the configuration.
In the Sorting_443 frontend you are already in tcp (passthrough) mode, so you can route straight to the final backend:

frontend Sorting_443
    bind *:443
    mode tcp
    option tcplog
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend be-col-ts if { req_ssl_sni -i epicor.$MY_DOMAIN.ca }
    default_backend sorted-http-ssl-terminated

The ts_passthrough frontend and the sorted-ts-ssl-passthrough backend can then be removed.