Port forward all http(s) to haproxy for SNI with LE/nginx, and restrict some TCP/mysql access per IP

I have an OpenWrt firewall which is configured to send all 80/443 traffic to HAProxy.

I use pihole for local DNS/DHCP

I then use an HAProxy LXC to route the requests to other VMs/LXCs in my LAN (Proxmox VE plus a few Pis).

I also want to restrict MySQL access to a set of whitelisted IPs, kept in a file (/etc/haproxy/whitelist.IPs), for specific CLI tasks. Those clients connect to <public_IP>:33061, where HAProxy has a listener and routes to mysql:3306 if the source IP is in the whitelist (the whitelist gets refreshed from DynDNS using dig as needed).

At the moment I use LE wildcard certs and nginx (SSL) for the HTTPS, but that means I have multiple places to update certificates and configs, and the wildcard LE via DNS is messy. What I'd like to do is switch to certs that are obtained and renewed on the HAProxy machine itself.
I understand that I only need a specific LE port change and a PEM concat for HAProxy, but what I can't get right is the syntax for the frontends/backends within haproxy.cfg.
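
For what it's worth, the renewal flow I have in mind is roughly the sketch below - completely untested, and the domain, the alternate HTTP-01 port and the cert directory are placeholders I made up: certbot runs in standalone mode on a side port that the firewall/HAProxy forwards the /.well-known requests to, and a deploy hook concatenates the chain and key into the single PEM that HAProxy expects, then reloads HAProxy.

# sketch only - placeholder domain, port and paths
certbot certonly --standalone --preferred-challenges http --http-01-port 8888 -d nextcloud.example.org \
  --deploy-hook 'cat "$RENEWED_LINEAGE/fullchain.pem" "$RENEWED_LINEAGE/privkey.pem" > "/etc/haproxy/certs/$(basename "$RENEWED_LINEAGE").pem" && systemctl reload haproxy'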

The one I can't get right is the Jellyfin one, so I did a workaround on the port forwards: those requests go to a non-80/443 port, bypass HAProxy, and get sent straight to the nginx reverse proxy on the JF:8096 machine. I'd like to have pretty much all traffic going via HAProxy.

This sometimes works and sometimes doesn't, even though the configs look the same to me (which also goes to show that I don't really know what I am doing :blush: ).
My config looks like the below, and does (mostly) work, but it feels horribly ugly/inefficient.
{sorry if it’s weirdly spaced because of the IDE I used}

What should I add/del/change to make it more robust and closer to better/best practices?
My aim…

  • HAProxy manages all certs (automatic renewals as well as new certs, with A+ SSL ratings if possible)
  • I'd like to be able to see/detect client IPs at the nginx/httpd point
  • nginx only needs to be set up for basic http:80, since the rest is handled higher up
  • fix the mangling for Jellyfin so that it can come in via 80/443 and reach the JF reverse proxy correctly… and show the client IP there too
  • when I add a check to some of the backends they show as down in the stats and I don't understand where/why (I did try putting hosts entries on the HAProxy machine, and also in the pihole DNS, for those, but it didn't help)

Thanks in advance. I don't take offense, so if I need to get flamed, that's OK.

global
  log       /dev/log  local0
  log       /dev/log  local1 notice
  # https://www.haproxy.com/blog/introduction-to-haproxy-logging/
  chroot    /var/lib/haproxy
  stats     socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
  stats     timeout 30s

  user      haproxy
  group     haproxy
  daemon
  # Default SSL material locations
  ca-base   /etc/ssl/certs
  crt-base  /etc/ssl/private

  # See: https://ssl-config.mozilla.org/#server=haproxy&version=2.6&config=modern&openssl=1.1.1n&guideline=5.6
  # modern configuration
  ssl-default-bind-ciphersuites   TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
  ssl-default-bind-options        prefer-client-ciphers no-sslv3 no-tlsv10 no-tlsv11 no-tlsv12 no-tls-tickets
  ssl-default-server-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
  ssl-default-server-options      no-sslv3 no-tlsv10 no-tlsv11 no-tlsv12 no-tls-tickets
  ssl-default-server-ciphers      ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
  tune.ssl.default-dh-param 2048

defaults
  log       global
  option    httplog
  option    dontlognull
  # option    forwardfor       except 127.0.0.0/8
  option    redispatch
  option    http-server-close
  retries   3
  timeout   http-request    10s
  timeout   queue           1m
  timeout   connect         10s
  timeout   client          1m
  timeout   server          1m
  timeout   http-keep-alive 10s
  # timeout   check           10s
  # maxconn   3000
  errorfile 400 /etc/haproxy/errors/400.http
  errorfile 403 /etc/haproxy/errors/403.http
  errorfile 408 /etc/haproxy/errors/408.http
  errorfile 500 /etc/haproxy/errors/500.http
  errorfile 502 /etc/haproxy/errors/502.http
  errorfile 503 /etc/haproxy/errors/503.http
  errorfile 504 /etc/haproxy/errors/504.http

frontend stats
  bind                *:9000
  mode                http
  stats               enable
  stats               uri /stats
  stats               refresh 30s
  stats               auth admin:password
  stats               hide-version
  stats               realm HAproxy\ Statistics

frontend http_in
  bind                *:80 alpn h2,h2c,http/1.1
  mode                http
  option              forwardfor
  http-request        redirect scheme https unless { ssl_fc }
  # All of the rules applying on port 80 (or any port) need to be specified on a single frontend (or a single listen) that is bound to port 80. [https://serverfault.com/questions/794943/haproxy-multiple-frontends-same-bind]

frontend https_in
  bind                *:443
  mode                tcp
  option              tcplog
  tcp-request         inspect-delay 5s

  acl tls             req_ssl_hello_type 1
  tcp-request         content accept if tls
  # tcp-request         content accept if { req_ssl_hello_type 1 }      ### this line does the same as the 2 above it

  option forwardfor   header X-Real-IP
  http-request        set-header X-Real-IP %[src]
  # HSTS (63072000 seconds)
  http-response set-header Strict-Transport-Security max-age=63072000

  acl acl_jellyfin     req_ssl_sni   -i jellyfin.example.org
  acl acl_nextcloud    req_ssl_sni   -i nextcloud.example.org
  acl acl_httpd        req_ssl_sni   -i httpd.example.org
  acl acl_serene       req_ssl_sni   -i serene.example.org
  acl acl_serene       req_ssl_sni   -i serene.example.net

  use_backend         nextcloud  if acl_nextcloud
  use_backend         httpd      if acl_httpd
  use_backend         jellyfin   if acl_jellyfin
  use_backend         serene     if acl_serene
  default_backend     default

backend httpd
  mode        tcp
  #option      httplog
  option      tcp-check
  option      ssl-hello-chk
  option      httpchk GET /
  http-check  send hdr Host httpd.example.org
  server      httpd         httpd.example.org:443 check-ssl verify none send-proxy-v2

backend nextcloud
  mode        tcp
  # http-check expect status 200  #* when I specify the code 200, HAproxy reports "no backend server available" - seems that it's better to let it work out the code itself
  option      tcp-check
  option      ssl-hello-chk
  option      httpchk GET /
  http-check  send hdr  Host nextcloud.example.org
  server      nextcloud      nextcloud.example.org:443 check-ssl verify none check-sni nextcloud.example.org sni str(nextcloud.example.org) # send-proxy-v2

backend jellyfin
  mode        tcp
  # http-check  send hdr Host jellyfin.example.org
  option      tcp-check
  option      ssl-hello-chk
  option      httpchk GET /
  server      jellyfin      jellyfin.example.org:443 check-ssl verify none send-proxy-v2

backend serene
  mode        tcp
  option      tcp-check
  option      ssl-hello-chk
  option      httpchk GET /
  http-check  send hdr Host serene.example.org
  server      serene        serene.example.org:443 check-ssl verify none send-proxy-v2

backend default
  mode        tcp
  option      tcp-check
  option      ssl-hello-chk
  option      httpchk GET /
  http-check  send hdr Host wotd.example.org
  server      nginx         wotd.example.org:443 check-ssl verify none send-proxy-v2

listen mysql
  bind        <public_IP>:33061
  mode        tcp
  acl         mysql_ip_OK src -f /etc/haproxy/whitelist.IPs
  tcp-request connection reject if !mysql_ip_OK
  balance     roundrobin
    # I am using 127.0.0.1/localhost as the "list" of servers for the sake of example - real world would be the IP list of actual server IPs
    server    mysql1 127.0.0.1:3306
    server    mysql2 localhost:3306

I’ll offer some recommendations. They may not be “best practice”, but they’re what I do for my sites.

To accomplish this, I would switch almost all of your configs to mode http instead of tcp so that HAProxy can do all the TLS negotiation. That will require almost a full rewrite of what you have, and if you don’t want SSL certificates on your backends too, you’ll have to reconfigure all of them to use HTTP vs HTTPS.

Nginx and most other web servers understand the X-Forwarded-For header. To keep it secure (assuming there are no proxies in front of HAProxy), I remove the header on requests and then tell HAProxy to add the header back.

	http-request del-header x-forwarded-for
	option forwardfor

This is totally doable in this mode; however, I don't have all the details about how you would configure Jellyfin.

So, I would assume jellyfin.example.org would point at HAProxy, and backends would be configured with an IP address instead of a hostname. Many example configs I’ve seen with hostnames in the backends that may change also have resolver sections configured. I’m in a homelab environment, so this is not a concern for me, but if you’re using hostnames for the backends, they probably shouldn’t be the same hostnames that clients use. For example, if you use jellyfin.example.org to access Jellyfin, that should point at HAProxy. In the backend, you might use the IP address or something like jellyfin.something.else and configure a resolver that will look up *.something.else.
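
If you do go the hostname route, a resolvers section plus the matching server options might look roughly like the sketch below - the pihole address and the names are placeholders, so treat it as a starting point rather than a drop-in:

resolvers homelab
	nameserver pihole 192.168.1.53:53
	resolve_retries 3
	timeout resolve 1s
	timeout retry   1s
	hold valid      10s

# then on the server line (init-addr none avoids a hard failure at startup if DNS is down):
	server jellyfinhost jellyfin.something.else:80 check resolvers homelab resolve-prefer ipv4 init-addr none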

It looks like you've already accomplished this, so I'm not sure what you're looking for here. Keep in mind that those IPs are only loaded when HAProxy starts. Changes to the file require a reload of HAProxy to take effect.
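
Since your global section already exposes the admin socket, you may also be able to push a new address at runtime without a full reload - roughly the line below (untested here, and note that runtime additions don't survive a restart, so the file still has to be kept current as well):

	echo "add acl /etc/haproxy/whitelist.IPs 203.0.113.10" | socat stdio /run/haproxy/admin.sock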

As if this response wasn’t long enough, here’s an example of how I would adjust your configuration. This is not meant to be a total solution, but hopefully it will get you in the direction you’re looking for.

defaults
	log	 global
	mode	http
	option	httplog
	option	dontlognull
	# option	forwardfor	 except 127.0.0.0/8
	option	redispatch
	option	http-server-close
	retries	 3
	timeout	 http-request	10s
	timeout	 queue	 1m
	timeout	 connect	 10s
	timeout	 client	1m
	timeout	 server	1m
	timeout	 http-keep-alive 10s
	# timeout	 check	 10s
	# maxconn	 3000
	errorfile 400 /etc/haproxy/errors/400.http
	errorfile 403 /etc/haproxy/errors/403.http
	errorfile 408 /etc/haproxy/errors/408.http
	errorfile 500 /etc/haproxy/errors/500.http
	errorfile 502 /etc/haproxy/errors/502.http
	errorfile 503 /etc/haproxy/errors/503.http
	errorfile 504 /etc/haproxy/errors/504.http

frontend stats
	bind	*:9000
	stats	enable
	stats	uri /stats
	stats	refresh 30s
	stats	auth admin:password
	stats	hide-version
	stats	realm HAproxy\ Statistics

frontend inbound
	bind	*:80 name "Non-Secure Port 80" alpn h2,h2c,http/1.1
	# Note: Certificate path or file must also contain the private key. 
	# http://docs.haproxy.org/2.6/configuration.html#5.1-crt
	bind	*:443 name "Secure Port 443" ssl crt /path/to/my/fullchain.pem alpn h2,h2c,http/1.1
	option forwardfor
	# HSTS (63072000 seconds)
	http-response set-header Strict-Transport-Security max-age=63072000

	acl acl_jellyfin hdr(host) -i jellyfin.example.org
	acl acl_nextcloud hdr(host) -i nextcloud.example.org
	acl acl_httpd hdr(host) -i httpd.example.org
	acl acl_serene hdr(host) -i serene.example.org
	acl acl_serene hdr(host) -i serene.example.net

	use_backend nextcloud if acl_nextcloud
	use_backend httpd if acl_httpd
	use_backend jellyfin if acl_jellyfin
	use_backend serene if acl_serene
	default_backend default

backend httpd
	option	httpchk GET /
	# << PICK ONE>>
	# server httpd httpd.example.org:443 check ssl verify none send-proxy-v2
	# server httpd 1.2.3.4:443 check ssl verify none send-proxy-v2
	# server httpd httpd.example.org:80 check send-proxy-v2
	# server httpd 1.2.3.4:80 check send-proxy-v2

# This is what my Jellyfin backend looks like, more or less
backend jellyfin
	http-request set-header X-Forwarded-Port %[dst_port]
	timeout connect 30s
	option httpchk GET /health
	server jellyfinhost 10.0.0.1:443 check maxconn 1000
	# These allow you to visit jellyfin.example.org/statistics and see stats for just this backend and the related frontend
	stats enable
	stats hide-version
	stats refresh 15s
	stats uri	/statistics
	stats scope .
	stats scope inbound

### Treat the rest of your backends a similar way.

listen mysql
	bind	*:33061
	mode	tcp
	option redispatch
	option tcpka
	option tcplog
	retries 3
	# To prevent flooding your logs with normal DB stuff. It can get crazy if your apps access the db via haproxy
	#option dontlog-normal 
	acl	 mysql_ip_OK src -f /etc/haproxy/whitelist.IPs
	tcp-request connection reject unless mysql_ip_OK
	# MySQL connections tend to stick once established, so you want the server with the least connections.
	balance	 leastconn
	# I am using 127.0.0.1/localhost as the "list" of servers for the sake of example - real world would be the IP list of actual server IPs
	server	mysql1 127.0.0.1:3306
	server	mysql2 localhost:3306


Thank you for the pointers - I will try out the changes and see how they look/work.

The main question that comes to mind is certificates…

bind	*:443 name "Secure Port 443" ssl crt /path/to/my/fullchain.pem alpn h2,h2c,http/1.1

implies that they ALL get assigned a single certificate. If I want each LXC to be assigned its own relevant/different certificate, how do I configure the frontend? Or do I use conditional binds, something like

bind	*:443 name "Secure Port 443" ssl crt /path/to/NEXTCLOUD/fullchain.pem alpn h2,h2c,http/1.1 if acl_NEXTCLOUD
bind	*:443 name "Secure Port 443" ssl crt /path/to/JELLYFIN/fullchain.pem alpn h2,h2c,http/1.1 if acl_JELLYFIN

i.e. can the binds also use an if?

The MySQL side - yes, it works, but I wasn't sure if it's safe/good to do it this way. I also didn't realise that it would fail if the IPs change, so I will make a script change so that IF these change it updates the whitelist file AND reloads HAProxy (thank you!).
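
Something along these lines is what I have in mind for that script - untested, the DynDNS hostname is a placeholder, and it only appends (it doesn't prune stale entries):

#!/bin/sh
# look up the current DynDNS address and whitelist it if it's new, then reload HAProxy
WL=/etc/haproxy/whitelist.IPs
NEW_IP=$(dig +short myclient.dyndns.example | tail -n 1)
if [ -n "$NEW_IP" ] && ! grep -qxF "$NEW_IP" "$WL"; then
  echo "$NEW_IP" >> "$WL"
  systemctl reload haproxy
fi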

Well, not really, but you can host multiple certs on the same bind. If I recall correctly, HAProxy is smart enough to use the certificate that matches the requested domain (via SNI). The crt argument can also be a directory path, or you can have multiple crt statements. You can do

    bind *:443 name "Secure Port 443" ssl crt /path/to/all/my/certs/ alpn h2,h2c,http/1.1

or you could do

    bind *:443 name "Secure Port 443" ssl crt /path/to/nextcloud/fullchain.pem crt /path/to/jellyfin/fullchain.crt alpn h2,h2c,http/1.1

Either of these should work just fine.

Awesome - I will try it out. Thank you again for all the info

Sorry - 1 more clarification on the certificates…
If I use the path method (the list may get very long if I specify each one in the bind:443 line)

bind *:443 name "Secure Port 443" ssl crt /path/to/all/my/certs/ alpn h2,h2c,http/1.1

do I simply have all the cert files in that subdir

nextcloud.domain.com.fullchain.pem
jellyfin.domain.com.fullchain.pem
serene.example.net.fullchain.pem

or is there a better way?

I found this post, which clarifies things for me in terms of the subdirs.

Is it better and/or more stable to use crt-list as indicated in this post, or does it end up being pretty much the same either way?

Thanks

LTS versions of HAProxy tend to be very stable even with large configurations. I've never tried that way of listing certs (I use an LE wildcard for mine), but I would imagine it's the same performance and reliability no matter which way you choose to go.
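
From the docs, a crt-list is just a plain text file of certificate paths with optional per-certificate settings and SNI filters, so a rough (untested, placeholder paths) sketch would be:

	# /etc/haproxy/crt-list.txt
	/etc/haproxy/certs/nextcloud.example.org.pem
	/etc/haproxy/certs/jellyfin.example.org.pem [alpn h2,http/1.1]
	/etc/haproxy/certs/serene.example.net.pem serene.example.net

	# referenced from the frontend instead of a crt directory:
	bind *:443 name "Secure Port 443" ssl crt-list /etc/haproxy/crt-list.txt alpn h2,h2c,http/1.1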


I think I understand now. One last question regarding the destination nginx/apache2 configs… those just get set up for plain http:80 with no LE/SSL options, approximately like so…?

APACHE

<VirtualHost *:80>
  ServerAdmin webmaster@domain.com
  ServerName domain.com
  ServerAlias www.domain.com

  DocumentRoot /var/www/html/domain.com

  RemoteIPProxyProtocol On
  RemoteIPHeader X-Forwarded-For
  RemoteIPTrustedProxy 192.168.1.1

  Loglevel error
  ErrorLog ${APACHE_LOG_DIR}/domain.com-error.log
  CustomLog ${APACHE_LOG_DIR}/domain.com-access.log combined

  <Directory /var/www/html/domain.com>
    #Options Indexes FollowSymLinks
    AllowOverride None
    Require all granted
  </Directory>

  #Added for multiPHP
  <FilesMatch \.php$>
    SetHandler "proxy:unix:/run/php/php7.4-fpm.sock|fcgi://localhost/"
  </FilesMatch>
</VirtualHost>

NGINX

server {
  listen 80 proxy_protocol;
  server_name domain.com www.domain.com;
  index index.php index.html;

  # access_log /dev/stdout realip;
  access_log /var/log/nginx/domain.com-access.log;
  error_log  /var/log/nginx/domain.com-error.log error;

  location / {
    try_files $uri $uri/ =404;
  }
  location ~ \.php$ {
    include snippets/fastcgi-php.conf;
    fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;
  }
}
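
I'm assuming I'd also need the realip bits inside the server block for the client IP to actually show in those logs - something like the lines below, where the HAProxy LAN address is a placeholder (and assuming the realip module is available in my nginx build):

  # untested assumption: map the proxy-protocol source address into $remote_addr
  set_real_ip_from 192.168.1.10;   # HAProxy's LAN IP (placeholder)
  real_ip_header   proxy_protocol;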

Yes, and that is how I would do it, but you should choose based on the security required for your environment.

You can do it with or without SSL. Some more advanced configurations may put a custom certificate on a backend and have HAProxy validate it against a specific certificate. For instance, environments leaning towards zero-trust will not have unencrypted traffic anywhere and might have single-use, internally signed certificates on each backend. (Don't quote me. I've never set up zero-trust environments.)

Simpler configurations (like mine) just drop HTTPS at HAProxy (commonly called SSL offloading) and configure port 80 behind it. There are a handful of apps that don't allow plain HTTP (Proxmox, for example) and ship with self-signed certs.
To just connect on port 80 with no SSL:

	server http 1.2.3.4:80 check send-proxy-v2

To connect with SSL (must be valid certificate):

	server https 1.2.3.4:443 check ssl send-proxy-v2

To connect with SSL and not validate a certificate (self-signed):

	server https 1.2.3.4:443 check ssl verify none send-proxy-v2

To connect with SSL and validate with a specific certificate (self-signed but validated):

	server https 1.2.3.4:443 check ssl verify required ca-file /path/to/self-signed/cert.pem send-proxy-v2

Note: To the best of my knowledge, using multiple certificates is reserved for the frontend bind command and cannot be done on a server; however, if you have multiple servers in a backend, each one can use a different certificate or no certificate, and they can be mixed.

backend weird_http_https_mix
	balance roundrobin
	server server1_nossl 1.2.3.4:80 check
	server server2_ssl_verified 1.2.3.5:443 check ssl send-proxy-v2
	server server3_ssl_unverified 1.2.3.6:443 check ssl verify none send-proxy-v2
	server server4_ssl_specific 1.2.3.7:443 check ssl verify required ca-file /path/to/this/cert.pem send-proxy-v2

I wouldn’t recommend such a strange backend, but it’s technically possible.