Question on haproxy 1.8.3


#1

Hi All,

I have a question about HAProxy 1.8.3.
Can I use an ACL on the frontend or backend to check whether a request is HTTP/2 and, if it is, send it to a gRPC backend, otherwise forward it to nginx?

Configuration

frontend https-in

# HTTP/2 - see https://www.haproxy.com/blog/whats-new-haproxy-1-8/
# h2 is HTTP/2 with TLS - see https://http2.github.io/faq/
# Order matters, so h2 before http/1.1

mode http
bind *:443 ssl ca-file /f0/base/haproxy/ca.pem  crt /f0/base/haproxy/server.pem alpn h2,http/1.1
#option httpclose
#option forwardfor
use_backend nodes-http2 if { ssl_fc_alpn -i h2 }
default_backend nodes-http

backend nodes-http
mode http
http-response set-header Strict-Transport-Security "max-age=16000000; includeSubDomains; preload"
http-request add-header X-Forwarded-Proto https
server node1 0.0.0.0:4443 check send-proxy

#This way browsers which do support HTTP/2 are still able to connect to our website
backend nodes-http2
mode http
# HSTS (15768000 seconds = 6 months)
http-request add-header X-Forwarded-Proto https
http-response set-header Strict-Transport-Security max-age=15768000

# path: /helloworld.Greeter/SayHello
# path: /package.servicename/methodname
#use-server grpcServer if { req.hdr(Content-Type) -i application/grpc }
#use-server grpcServer if { path_beg -i /gnmi }
use-server grpcServer if { url_sub  -i <grpc> }
#use-server grpcServer if { path_sub grpcserver }
server grpcServer localhost:6666 weight 0
# all the rest is forwarded to this server
server default localhost:4443 check

haproxy -f /f0/base/haproxy/haproxy.grpc.cfg -d
Available polling systems :
poll : pref=200, test result OK
select : pref=150, test result FAILED
Total: 2 (1 usable), will use poll.

Available filters :
[TRACE] trace
[COMP] compression
[SPOE] spoe
Using poll() as the polling mechanism.
00000000:https-in.accept(0003)=0004 from [192.168.1.1:48291] ALPN=h2
00000000:https-in.clireq[0004:ffffffff]: POST /gnmi.gNMI/Capabilities HTTP/1.1
00000000:https-in.clihdr[0004:ffffffff]: password: PA$$WORD
00000000:https-in.clihdr[0004:ffffffff]: routerid:
00000000:https-in.clihdr[0004:ffffffff]: username: TestUser
00000000:https-in.clihdr[0004:ffffffff]: grpc-encoding: identity
00000000:https-in.clihdr[0004:ffffffff]: grpc-accept-encoding: identity,deflate,gzip
00000000:https-in.clihdr[0004:ffffffff]: te: trailers
00000000:https-in.clihdr[0004:ffffffff]: content-type: application/grpc
00000000:https-in.clihdr[0004:ffffffff]: user-agent: grpc-c++/1.1.0-dev grpc-c/2.0.0-dev (linux; chttp2; good)
00000000:https-in.clihdr[0004:ffffffff]: host: localhost
00000000:nodes-http2.srvcls[0004:adfd]
00000000:nodes-http2.clicls[0004:adfd]
00000000:nodes-http2.closed[0004:adfd]
00000001:https-in.accept(0003)=0004 from [192.168.1.1:48293] ALPN=h2
00000001:https-in.clireq[0004:ffffffff]: POST /gnmi.gNMI/Subscribe HTTP/1.1
00000001:https-in.clihdr[0004:ffffffff]: filter: FAN
00000001:https-in.clihdr[0004:ffffffff]: password: PA$$WORD
00000001:https-in.clihdr[0004:ffffffff]: routerid:
00000001:https-in.clihdr[0004:ffffffff]: username: TestUser
00000001:https-in.clihdr[0004:ffffffff]: grpc-encoding: identity
00000001:https-in.clihdr[0004:ffffffff]: grpc-accept-encoding: identity,deflate,gzip
00000001:https-in.clihdr[0004:ffffffff]: te: trailers
00000001:https-in.clihdr[0004:ffffffff]: content-type: application/grpc
00000001:https-in.clihdr[0004:ffffffff]: user-agent: grpc-c++/1.1.0-dev grpc-c/2.0.0-dev (linux; chttp2; good)
00000001:https-in.clihdr[0004:ffffffff]: grpc-timeout: 24H
00000001:https-in.clihdr[0004:ffffffff]: host: localhost
00000001:nodes-http2.srvcls[0004:adfd]
00000001:nodes-http2.clicls[0004:adfd]
00000001:nodes-http2.closed[0004:adfd]

netstat -an | grep -i 6666
tcp 0 0 ::ffff:127.0.0.1:6666 :::* LISTEN
tcp 0 0 ::ffff:127.0.0.1:6666 ::ffff:127.0.0.1:33635 TIME_WAIT

Can you please give me pointers on how to fix this?
Thanks,


#2

Your configuration is far too complicated and confusing. Please avoid the use-server keyword; it will only add confusion, and you will end up with a mess of a configuration.

You seem to want to route traffic to the 6666 backend only when the protocol is H2 AND the URL contains the gRPC substring (both conditions need to be true).

So instead of the confusion in your backend with use-server and a server with weight 0:

  • declare a backend for a single purpose
  • do all the routing decisions in your frontend

So:

frontend https-in
 mode http
 bind *:443 ssl ca-file /f0/base/haproxy/ca.pem  crt /f0/base/haproxy/server.pem alpn h2,http/1.1
 #option httpclose
 #option forwardfor
 use_backend nodes-http2 if { ssl_fc_alpn -i h2 } { url_sub  -i <grpc> }
 default_backend nodes-http
backend nodes-http
 mode http
 http-response set-header Strict-Transport-Security "max-age=16000000; includeSubDomains; preload"
 http-request add-header X-Forwarded-Proto https
 server node1 0.0.0.0:4443 check send-proxy

#This way browsers which do support HTTP/2 are still able to connect to our website
backend nodes-http2
 mode http
 # HSTS (15768000 seconds = 6 months)
 http-request add-header X-Forwarded-Proto https
 http-response set-header Strict-Transport-Security max-age=15768000
 server grpcServer localhost:6666

This will also make your logs and debugs readable. In your current configuration you will have a hard time troubleshooting this.
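An equivalent way to keep the routing intent readable (a sketch only; the ACL names are illustrative, and <grpc> is the placeholder from your own config, not a real value) is to name the two conditions in the frontend:

frontend https-in
 mode http
 bind *:443 ssl ca-file /f0/base/haproxy/ca.pem crt /f0/base/haproxy/server.pem alpn h2,http/1.1
 # both named ACLs must match for the gRPC backend to be chosen
 acl is_h2   ssl_fc_alpn -i h2
 acl is_grpc url_sub -i <grpc>
 use_backend nodes-http2 if is_h2 is_grpc
 default_backend nodes-http

Named ACLs also show up by name in haproxy -c output and make it obvious at a glance which conditions drive each use_backend rule.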


#3

Thank you, Luka.
But I had to remove “mode http” from nodes-http2.
The configuration below made it work:
frontend https-in

# HTTP/2 - see https://www.haproxy.com/blog/whats-new-haproxy-1-8/
# h2 is HTTP/2 with TLS - see https://http2.github.io/faq/
# Order matters, so h2 before http/1.1

log /tmp/haproxy.log local0 debug
bind *:443 ssl ca-file /f0/base/haproxy/ca.pem crt /f0/base/haproxy/server.pem alpn h2,http/1.1
use_backend nodes-http2 if { ssl_fc_alpn -i h2 } { url_sub -i <grpc> }
default_backend nodes-http

backend nodes-http
mode http
#http-response set-header Strict-Transport-Security "max-age=16000000; includeSubDomains; preload;"
http-request add-header X-Forwarded-Proto https
server node1 0.0.0.0:4443

#This way browsers which do support HTTP/2 are still able to connect to our website
backend nodes-http2
server grpcServer localhost:6666
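
Presumably this works because HAProxy 1.8 only decodes HTTP/2 on the client side: in http mode it speaks HTTP/1.1 to the servers, which breaks gRPC (gRPC requires HTTP/2 end to end). With no mode line the backend falls back to tcp mode, so the decrypted HTTP/2 byte stream is relayed to the gRPC server untouched. A sketch making that explicit (note that in the working config above the frontend has no mode http line either, which keeps the two sides compatible):

backend nodes-http2
 # tcp mode: relay the decrypted HTTP/2 frames as-is instead of
 # downgrading to HTTP/1.1, which gRPC cannot tolerate
 mode tcp
 server grpcServer localhost:6666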

Thanks again.