till
June 1, 2020, 1:49pm
1
Hi,
I am trying to do something that I hope is relatively simple:
I have one HAProxy (2.1) running on 127.0.0.1:8181
I have a service which speaks http2 (with SSL), running on 127.0.0.1:9001
My goal is to route traffic through HAProxy to my service/backend. If this were HTTP/1.1, I would call it SSL passthrough. The service sets up its own certificates; it's a third-party agent written in Go. There is no Let's Encrypt or anything involved — the certificates are self-signed, hence the -k in my curl examples below.
Here is my HAProxy configuration:
global
    daemon
    maxconn 256
    log-send-hostname

defaults
    mode tcp
    option http-use-htx
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend h2-in
    bind *:8181
    mode tcp
    default_backend servers

backend servers
    server agent 127.0.0.1:9001 check
(The stats page reports the backend as available.)
When I access the service via HAProxy, I get the following error:
❯ curl -k -v --tlsv1.2 https://127.0.0.1:8181/ping
* Trying 127.0.0.1:8181...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 8181 (#0)
* ALPN, offering http/1.1
* WARNING: disabling hostname validation also disables SNI.
* Server aborted the SSL handshake
* Closing connection 0
curl: (35) Server aborted the SSL handshake
When I access the service directly via curl, it responds (204):
❯ curl -k -v --tlsv1.2 https://127.0.0.1:9001/ping
* Trying 127.0.0.1:9001...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 9001 (#0)
* ALPN, offering http/1.1
* WARNING: disabling hostname validation also disables SNI.
* TLS 1.2 connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* Server certificate: X509 Certificate
> GET /ping HTTP/1.1
> Host: 127.0.0.1:9001
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 204 No Content
< Date: Mon, 01 Jun 2020 13:41:52 GMT
<
* Connection #0 to host 127.0.0.1 left intact
Can anyone take a look at my configuration and tell me what I am doing wrong?
till
June 1, 2020, 1:54pm
2
Figured I would add a curl run without forcing TLS v1.2:
❯ curl -k -v https://127.0.0.1:9001/ping
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 9001 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/cert.pem
CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server accepted to use h2
* Server certificate:
* subject: [NONE]
* start date: Jun 1 13:23:44 2020 GMT
* expire date: Jun 1 13:23:44 2021 GMT
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x7fd65d009c00)
> GET /ping HTTP/2
> Host: 127.0.0.1:9001
> User-Agent: curl/7.64.1
> Accept: */*
>
* Connection state changed (MAX_CONCURRENT_STREAMS == 250)!
< HTTP/2 204
< date: Mon, 01 Jun 2020 13:51:56 GMT
<
* Connection #0 to host 127.0.0.1 left intact
* Closing connection 0
till
June 4, 2020, 2:07pm
3
Bumping this up. Is this not possible? Can anyone help or provide pointers?
void_in:
The backend line should be:
server agent 127.0.0.1:9001 check ssl verify none
lukastribus:
@void_in no, the OP wants SSL passthrough and the client already comes in via HTTPS.
I don't see anything wrong with the configuration. I suggest you check the backend logs and/or capture both a healthy and a broken SSL handshake and compare them.
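For example, one generic way to compare the two handshakes (adjust hosts and ports to your setup) is to run openssl s_client against the proxy and against the backend directly; in true TCP passthrough the two handshakes should look identical:

```shell
# Handshake through HAProxy (port 8181) -- with TCP passthrough this
# should print the same certificate and cipher as the direct connection.
openssl s_client -connect 127.0.0.1:8181 -tls1_2 < /dev/null

# Handshake against the backend itself (port 9001), for comparison.
openssl s_client -connect 127.0.0.1:9001 -tls1_2 < /dev/null

# Or capture both on the wire for closer inspection (requires root):
tcpdump -i lo -w handshakes.pcap 'port 8181 or port 9001'
```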
till
June 4, 2020, 7:17pm
6
@void_in Thanks for helping!
@lukastribus Any suggestions how to do that?
I’ve changed the configuration a little:
global
    daemon
    maxconn 256
    log-send-hostname
    log stdout format raw local0 debug

defaults
    mode tcp
    option tcplog
    log global
    option log-health-checks
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend stats
    bind *:8282
    mode http
    option httplog
    maxconn 10
    timeout client 100s
    stats enable
    stats refresh 30s
    stats show-node
    stats uri /stats
    stats admin if LOCALHOST

frontend tcp_in
    bind *:8181
    mode tcp
    option tcplog
    log global
    default_backend servers

backend servers
    mode tcp
    #log global
    option tcp-check
    tcp-check connect ssl
    server agent host.docker.internal:9001 check verify none
I am using the haproxy:2.1 image off Docker Hub. I added the option tcp-check and the stats frontend to confirm the backend is alive. I also noticed that I can force HTTP/1.1 on the service, so this seems to be less about h2.
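(Forcing HTTP/1.1 from curl is done with the standard --http1.1 flag; this is generic curl behavior, not anything specific to this service:)

```shell
# Negotiate HTTP/1.1 only, so curl does not offer h2 via ALPN;
# -k skips certificate verification for the self-signed cert.
curl -k -v --http1.1 https://127.0.0.1:9001/ping
```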
This is what I am using:
HAProxy version 2.1.5-36e14bd, released 2020/05/29
When I make a request via the load balancer, I can't get it to log anything. It's really strange; this is the entire output of the log, nothing else:
[NOTICE] 155/191032 (1) : New worker #1 (7) forked
Proxy stats started.
Proxy tcp_in started.
Proxy servers started.
[WARNING] 155/191032 (7) : Health check for server servers/agent succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 9ms, status: 3/3 UP.
Health check for server servers/agent succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 9ms, status: 3/3 UP.
172.17.0.1:56118 [04/Jun/2020:19:10:40.557] stats stats/<STATS> 0/0/0/0/0 200 14783 - - LR-- 1/1/0/0/0 0/0 "GET /stats HTTP/1.1"
172.17.0.1:56118 [04/Jun/2020:19:10:40.618] stats stats/<NOSRV> -1/-1/-1/-1/0 503 237 - - SC-- 1/1/0/0/0 0/0 "GET /favicon.ico HTTP/1.1"
172.17.0.1:56230 [04/Jun/2020:19:11:10.632] stats stats/<STATS> 0/0/0/0/1 200 14871 - - LR-- 1/1/0/0/0 0/0 "GET /stats HTTP/1.1"
172.17.0.1:56230 [04/Jun/2020:19:11:10.698] stats stats/<NOSRV> -1/-1/-1/-1/1 503 237 - - SC-- 1/1/0/0/0 0/0 "GET /favicon.ico HTTP/1.1"
till
June 4, 2020, 7:25pm
7
OK, I feel incredibly dumb. And thanks for responding, it got me to look at it again. =(
Everything is working! I had mapped the Docker port incorrectly.
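For anyone else hitting this: a container's published port mappings can be checked with docker port (a generic sketch — "haproxy" here is a placeholder for whatever your container is named):

```shell
# List the container's published ports; the host-side port must match
# what curl is pointed at (8181 in this thread).
docker port haproxy

# Or show the mappings of all running containers at once:
docker ps --format '{{.Names}}\t{{.Ports}}'
```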