Hi everyone,
I’m trying to understand the precedence of the various timeouts. In particular, I’m trying to configure long-lived client connections.
It appears that when the connection is idle between requests, the smaller of the ‘timeout client’ and ‘timeout http-keep-alive’ values takes precedence.
Scenario 1:
timeout client 30s
timeout http-keep-alive 60s
- client opens a TCP connection and performs the handshake
- (<1ms) client sends a request, HAProxy sends the response (simple HTTP backend)
- connection idle
- (30s) HAProxy closes the connection
Scenario 2:
timeout client 90s
timeout http-keep-alive 60s
- client opens a TCP connection and performs the handshake
- (<1ms) client sends a request, HAProxy sends the response (simple HTTP backend)
- connection idle
- (60s) HAProxy closes the connection
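To summarize what I’m observing (as opposed to what I expected), the effective idle timeout between a response and the next request seems to be the minimum of the two settings. A tiny Python sketch of that model, with the two scenarios above as examples:

```python
def observed_idle_timeout(timeout_client: int, timeout_keep_alive: int) -> int:
    """Model of the observed behavior: HAProxy appears to close an idle
    keep-alive connection after the *smaller* of the two timeouts (seconds)."""
    return min(timeout_client, timeout_keep_alive)

# Scenario 1: timeout client 30s, timeout http-keep-alive 60s
print(observed_idle_timeout(30, 60))  # 30 -> connection closed after 30s
# Scenario 2: timeout client 90s, timeout http-keep-alive 60s
print(observed_idle_timeout(90, 60))  # 60 -> connection closed after 60s
```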
Since this HAProxy instance is the ingress router in OpenShift (Minishift), the configuration passes requests through a public frontend to a backend whose server is another frontend on localhost, which in turn points at the actual backend:
fe->be->fe->be->srv
Attached is a simplified configuration that I use for testing outside OpenShift:
global
    maxconn 20000
    log 127.0.0.1 local0 debug
    tune.maxrewrite 8192
    tune.bufsize 131072
    ssl-default-bind-options no-sslv3
    tune.ssl.default-dh-param 2048
    ssl-default-bind-ciphers ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS
    ca-base /usr/local/etc/haproxy/
    crt-base /usr/local/etc/haproxy/

defaults
    maxconn 20000
    log global
    retries 3
    timeout connect 5s
    timeout client 30s
    timeout client-fin 1s
    timeout server 30s
    timeout server-fin 1s
    timeout http-request 10s
    timeout http-keep-alive 60s
    timeout tunnel 1h

listen stats
    bind :9000
    mode http
    stats enable
    stats realm Strictly\ Private
    stats uri /stats
    stats auth foo:bar

frontend public_ssl
    bind :443
    option tcplog
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    acl sni req.ssl_sni -m found
    use_backend be_sni if sni
    default_backend be_no_sni

backend be_sni
    server fe_sni 127.0.0.1:10444 weight 1 send-proxy

frontend fe_sni
    bind 127.0.0.1:10444 ssl no-sslv3 crt /usr/local/etc/haproxy/default_pub_keys.pem crt-list /usr/local/etc/haproxy/cert_config.map accept-proxy
    mode http
    option httplog
    http-request del-header Proxy
    http-request set-header Host %[req.hdr(Host),lower]
    use_backend sample_service

backend be_no_sni
    server fe_no_sni 127.0.0.1:10443 weight 1 send-proxy

frontend fe_no_sni
    bind 127.0.0.1:10443 ssl no-sslv3 crt /usr/local/etc/haproxy/default_pub_keys.pem accept-proxy
    mode http
    http-request del-header Proxy
    http-request set-header Host %[req.hdr(Host),lower]
    use_backend sample_service

backend sample_service
    mode http
    option redispatch
    option forwardfor
    option httplog
    balance leastconn
    timeout check 5000ms
    http-request set-header X-Forwarded-Host %[req.hdr(host)]
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
    http-request set-header X-Forwarded-Proto https if { ssl_fc }
    http-request set-header X-Forwarded-Proto-Version h2 if { ssl_fc_alpn -i h2 }
    http-request add-header Forwarded for=%[src];host=%[req.hdr(host)];proto=%[req.hdr(X-Forwarded-Proto)];proto-version=%[req.hdr(X-Forwarded-Proto-Version)]
    cookie ha-sticky insert indirect nocache httponly secure
    server sample_server canary-inner:8080 cookie 1 weight 1
I would expect ‘timeout client’ to govern the idle time between request and response, and ‘timeout http-keep-alive’ the idle time between a response and the next request.
This also happens when there is only one frontend and one backend.
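For reference, a minimal sketch of that single-frontend/backend setup that shows the same behavior for me (port and server address are placeholders):

```haproxy
defaults
    mode http
    timeout connect 5s
    timeout client 90s
    timeout server 30s
    timeout http-keep-alive 60s

frontend fe_test
    bind :8080
    default_backend be_test

backend be_test
    server srv1 127.0.0.1:9090
```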
Can anyone confirm this, or offer another explanation?
Thanks a lot!