HTTP/2, varnish, nginx, haproxy, and mixing TCP/HTTP mode

I’ve been using HAProxy for SSL termination as part of a stack that looks like this:

          https           http           http
Internet <-----> haproxy <----> varnish <----> nginx

Everything works great, but adding HTTP/2 support has slammed me hard into a wall and I can’t figure a way out of it. I don’t want to jettison HAProxy in favor of Hitch, but I think I’m about to unless I can figure out some magical voodoo configuration options to get things working.

Problem, the short version: The only way I can figure out how to make HTTP/2 work correctly in conjunction with HSTS (i.e., using haproxy to redirect all client http attempts to https) is to use mode http for the frontend (necessary because I have about 10 sites, each using various http-header settings for various things) and mode tcp for the back end (because I have to communicate with Varnish using the proxy protocol, because Varnish in turn needs to communicate with nginx using the proxy protocol).

Problem, the short version, continued: If I switch the frontend to mode tcp, I get beautiful HTTP/2-served web sites with no problem. Everything works great. However, http-request redirect scheme https if http obviously no longer works, and I don’t know if there are substitutes for it and all the other http-request commands I’m using for domain redirects and the like. I need that functionality, and I need it at the termination layer, not deeper in the stack. Conversely, switching the backend to mode http breaks everything and nothing works: no pages get served, and all I get is an ERR_SPDY_PROTOCOL_ERROR when I try to connect to anything.

Problem, the long version: ugh, I don’t know if I have the energy to type all this out, but here are some config snippets from each piece in the stack to show what I’m doing:

  1. haproxy:
defaults
        log     global
        mode    http
        option  httplog
        option  dontlognull
        option  forwardfor

...

frontend my_front
        bind :::80 v4v6
        bind :::443 v4v6 ssl crt mycert.pem ecdhe secp384r1 alpn h2,http/1.1
        acl http ssl_fc,not
        acl letsencryptrequest path_beg -i /.well-known/acme-challenge/
        acl mastodon hdr(host) beg -i mastodon.bigdinosaur.org
        use_backend letsencrypt if letsencryptrequest
        use_backend mastodonwtf if mastodon

        http-request redirect prefix https://www.bigdinosaur.org code 301 if { hdr(host) -i bigdinosaur.org }
  
        ...(imagine lots more 301s here)...

        http-request redirect scheme https if http
        http-response set-header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
        http-response set-header Referrer-Policy "strict-origin-when-cross-origin"
        http-response set-header X-Content-Type-Options "nosniff"
        http-response set-header X-XSS-Protection "1; mode=block"
        use_backend mastodon if mastodon

        rsprep ^Set-Cookie:\ (.*) Set-Cookie:\ \1;\ Secure if { ssl_fc }
        default_backend tovarnish

...

backend tovarnish
        http-request set-header X-Forwarded-Port %[dst_port]
        http-request add-header X-Forwarded-Proto https if { ssl_fc }
        server local 127.0.0.1:6081 send-proxy-v2

backend mastodon
        http-request set-header X-Forwarded-Port %[dst_port]
        http-request add-header X-Forwarded-Proto https if { ssl_fc }
        http-response set-header Content-Security-Policy redacted for length
        http-response set-header Public-Key-Pins redacted for length
        server local 127.0.0.1:6081 send-proxy-v2

backend letsencrypt
        server letsencrypt 127.0.0.1:54321
  2. Varnish:
# Backend definition. Set this to point to your content server. Nginx listens on 2 ports,
# one for non-upgraded and non-http2 requests (default), and the other for http2 requests.

backend default {
        .host = "127.0.0.1";
        .port = "8086";
        .first_byte_timeout = 600s;
        .between_bytes_timeout = 600s;
        .max_connections = 800;
        .proxy_header = 1;
}

backend h2 {
        .host = "127.0.0.1";
        .port = "8088";
        .first_byte_timeout = 600s;
        .between_bytes_timeout = 600s;
        .max_connections = 800;
        .proxy_header = 1;
}

...

sub vcl_recv {
        # Happens before we check if we have this in cache already.
        #
        # Typically you clean up the request here, removing cookies you don't need,
        # rewriting the request, etc.

        if (req.proto ~ "HTTP/2") {
                set req.backend_hint = h2;
        }
        else {
                set req.backend_hint = default;
        }

}
  3. Nginx:
server {
	server_name www.bigdinosaur.org;
	listen 8088 http2 proxy_protocol default_server;
	listen 8086 proxy_protocol;

...

(all the rest of the vhost file)

I know based on some quick testing with a virtual machine that I can rip out HAProxy and drop in Hitch and everything Just Works :tm:, but I like the additional flexibility HAProxy gives me in being able to do redirects and backend voodoo (for example, if I rip out haproxy, I have to rethink my entire LetsEncrypt setup, ugh).

Does anyone have any insight on potential ways forward here that will let me keep HAProxy? I’ve been banging on this for a couple of days on and off and I just can’t seem to reach a solution that works.

Haproxy, as well as Hitch, does not support HTTP/2. You therefore CANNOT set HTTP headers if the protocol is HTTP/2.
You have to set all the HSTS magic in Varnish, which actually understands HTTP/2.
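For instance, a minimal sketch of moving those response headers into Varnish’s vcl_deliver — header names and values taken from the haproxy config posted above; adapt to whatever you were actually setting:

```vcl
sub vcl_deliver {
    # Headers previously set by haproxy's http-response rules.
    # Varnish parses the request itself, so this works for HTTP/2 too.
    set resp.http.Strict-Transport-Security = "max-age=31536000; includeSubDomains; preload";
    set resp.http.Referrer-Policy = "strict-origin-when-cross-origin";
    set resp.http.X-Content-Type-Options = "nosniff";
}
```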

There is no reason for you to set any HTTP headers in haproxy anyway.

I suggest:

  • dedicate a frontend to port 80 (mode http, with “http-request redirect scheme https”)
  • put the 443 frontend in mode tcp, and set all headers in the backend
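A minimal sketch of that split, reusing the cert, ACL, and backend names from the config posted above (untested, adapt as needed):

```haproxy
frontend http_front
        bind :::80 v4v6
        mode http
        acl letsencryptrequest path_beg -i /.well-known/acme-challenge/
        # ACME challenges stay on plain HTTP; everything else bounces to HTTPS.
        # Note: http-request rules run before use_backend selection, so the
        # redirect needs the "unless" guard.
        http-request redirect scheme https unless letsencryptrequest
        use_backend letsencrypt if letsencryptrequest

frontend https_front
        mode tcp
        bind :::443 v4v6 ssl crt mycert.pem ecdhe secp384r1 alpn h2,http/1.1
        default_backend tovarnish
```

In this layout, backend tovarnish also needs mode tcp, which means dropping its http-request set-header lines; the scheme/port information has to be recovered in Varnish instead (e.g., from the PROXY protocol).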

Extremely insightful, @lukastribus—I didn’t think far enough through to how the headers would have to be set!

Followup—I’ve got http/2 deployed and working, doing exactly what @lukastribus recommended.

hello @lee_ars,

It seems our setups are very similar. I was curious how you solved the headers that you were setting in haproxy. Based on “put the 443 frontend in mode tcp, and set all headers in the backend”, I cannot tell if that means putting them in Varnish or in the backend definition of haproxy (which doesn’t seem to work in TCP mode).

backend tovarnish
        http-request set-header X-Forwarded-Port %[dst_port]
        http-request add-header X-Forwarded-Proto https if { ssl_fc }
        server local 127.0.0.1:6081 send-proxy-v2

is what I’m looking to solve/set somewhere in the stack when 443 is hit.

Hm, after re-reading everything, I see you are forcing everything to HTTPS. I still have a mix of clients, both http and https, so I can’t assume everything making it to nginx should be served as HTTPS-built pages.

Right—I’m redirecting everything to https with haproxy, and using varnish to set all headers. (Actually I think I might be setting one site’s headers in nginx below varnish, but that one site is a weird special snowflake.)

Well then, remove the redirect and set everything up for HTTP as well. Whether you set the headers in haproxy or your backend is your choice, but it may make more sense to do it in the backend, since that is what happens for HTTPS as well.

@lukastribus

I guess I’m missing how you “set it on the backend”. Once HAProxy terminates the HTTPS session, my varnish/nginx no longer know whether the request came in as HTTP or HTTPS, and therefore whether the user should be redirected to HTTPS or whether the site is set for service over plain HTTP (a per-site setting stored in the database). The “backend tovarnish” above is how I was determining that, but it requires HTTP mode.

Did you go with 2 ports for Varnish, keyed on which one HAProxy sends the request to? With Haproxy in TCP mode, this seems to be the only marker the services behind it can use to tell whether the request is HTTPS or not.
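For what it’s worth, since haproxy speaks the PROXY protocol to Varnish, the client’s original destination port survives termination, so another way to recover the scheme — a sketch using vmod_std, assuming Varnish 4.1 or later, where server.ip reflects the address from the PROXY header — is:

```vcl
import std;

sub vcl_recv {
    # With the PROXY protocol in front, server.ip/port are the address the
    # client originally hit on haproxy, not Varnish's local listen socket.
    if (std.port(server.ip) == 443) {
        set req.http.X-Forwarded-Proto = "https";
    } else {
        set req.http.X-Forwarded-Proto = "http";
    }
}
```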

I also have letsencrypt, whose challenge URLs are matched by HAProxy in http mode. Did you move this to Varnish for URL parsing and proxying on?

Thanks for the answers!

I don’t know your configuration at all. Please open a new thread and provide your full configuration, including information about your backend servers; otherwise this is gonna be guesswork all the way.

Yes, you would use 2 or more ports on the backend; you probably can’t use HTTP/2 and HTTP/1.1 on one port anyway, so you would actually need 3 ports.

I suggest you leave letsencrypt on haproxy.

So I got this working; I’m putting the important snippets here to hopefully help out the next person.

In my case, this allows port 80 (haproxy) to continue to work, and it all seems to work without needing to add anything extra to nginx. I might not be getting http2 from varnish to nginx, but that is for a future day. I did not need to bind nginx over multiple ports.

Haproxy (1.7.9)

defaults
  default_backend backend-default


frontend frontend-default
  bind *:80
  # for free ssls, https://certbot.eff.org
  acl path_certbot path_beg /.well-known/acme-challenge
  use_backend backend-certbot if path_certbot

frontend frontend-https-default
  mode tcp
  bind *:443 ssl crt /etc/certs.d alpn h2,http/1.1
  # if client can do http2, use different backend
  use_backend backend-http2-default if { ssl_fc_alpn -i h2 }


backend backend-http2-default
  # cannot set headers for http2 at haproxy and must move to varnish, and has to be mode TCP
  #http-request set-header X-Forwarded-Port %[dst_port]
  #http-request add-header X-Forwarded-Proto https if { ssl_fc }
  mode tcp
  server varnish varnish:80 check port 80 send-proxy-v2

backend backend-default
  option http-server-close
  http-request set-header X-Forwarded-Port %[dst_port]
  http-request add-header X-Forwarded-Proto https if { ssl_fc }
  server varnish varnish:80 check port 80 send-proxy-v2

backend backend-certbot
  option http-server-close
  server certbot certbot:80 maxconn 50

Varnish (5.1.3)

  sub vcl_recv {

  # If we are hit on http2, it has to be on https.
  # Haproxy cannot do http2 so it has to be in tcp mode and
  # cannot add this header on its own, so let varnish do it.
  if (req.proto ~ "HTTP/2") {
    set req.http.X-Forwarded-Proto = "https";
    set req.http.X-Forwarded-Port = "443";
  }

Nginx

server {
  listen      80 default_server;