Timeout precedence in 1.8

Hi everyone,
I’m trying to understand the precedence of the various timeouts, in particular so that I can configure long-lived client connections.

It appears that when a connection sits idle between requests, the smaller of ‘timeout client’ and ‘timeout http-keep-alive’ takes precedence (see the reproduction sketch after the two scenarios below).

Scenario 1:

timeout client 30s
timeout http-keep-alive 60s

  • client opens tcp connection and performs handshake
  • (<1ms) client sends request, haproxy sends response (simple http backend)
  • connection idle
  • (30s) haproxy closes connection

Scenario 2:

timeout client 90s
timeout http-keep-alive 60s

  • client opens tcp connection and performs handshake
  • (<1ms) client sends request, haproxy sends response (simple http backend)
  • connection idle
  • (60s) haproxy closes connection
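
For reference, this is roughly how I observe the idle phase in both scenarios (just a sketch against a plain-HTTP frontend; the address and Host header are placeholders for my test setup):

# open one keep-alive connection, send a single request, then stay idle;
# nc exits once haproxy closes the connection, which marks the effective idle timeout
$ (printf 'GET / HTTP/1.1\r\nHost: test.local\r\n\r\n'; sleep 120) | nc -v 127.0.0.1 80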

Since this haproxy instance is the ingress router in OpenShift (minishift), the configuration passes requests through a public frontend to a backend whose server is another frontend on localhost, which in turn forwards to the actual backend:

fe->be->fe->be->srv

Attached is a simplified configuration that I use for testing outside OpenShift:

global
  maxconn 20000
  log 127.0.0.1 local0 debug
  tune.maxrewrite 8192
  tune.bufsize 131072

  ssl-default-bind-options no-sslv3
  tune.ssl.default-dh-param 2048
  ssl-default-bind-ciphers ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS

  ca-base /usr/local/etc/haproxy/
  crt-base /usr/local/etc/haproxy/

defaults
  maxconn 20000
  log global
  retries 3
  
  timeout connect 5s
  timeout client 30s
  timeout client-fin 1s
  timeout server 30s
  timeout server-fin 1s
  timeout http-request 10s
  timeout http-keep-alive 60s
  timeout tunnel 1h

listen stats
  bind :9000
  mode http
  stats enable
  stats realm Strictly\ Private
  stats uri /stats
  stats auth foo:bar

frontend public_ssl
  bind :443
  option tcplog

  tcp-request  inspect-delay 5s
  tcp-request content accept if { req_ssl_hello_type 1 }

  acl sni req.ssl_sni -m found
  use_backend be_sni if sni
  default_backend be_no_sni

backend be_sni
  server fe_sni 127.0.0.1:10444 weight 1 send-proxy

frontend fe_sni
  bind 127.0.0.1:10444 ssl no-sslv3 crt /usr/local/etc/haproxy/default_pub_keys.pem crt-list /usr/local/etc/haproxy/cert_config.map accept-proxy
  mode http
  option httplog

  http-request del-header Proxy
  http-request set-header Host %[req.hdr(Host),lower]

  use_backend samlpe_service

backend be_no_sni
  server fe_no_sni 127.0.0.1:10443 weight 1 send-proxy

frontend fe_no_sni
  bind 127.0.0.1:10443 ssl no-sslv3 crt /usr/local/etc/haproxy/default_pub_keys.pem accept-proxy
  mode http

  http-request del-header Proxy
  http-request set-header Host %[req.hdr(Host),lower]

  use_backend samlpe_service

backend samlpe_service
  mode http
  option redispatch
  option forwardfor

  option httplog
  balance leastconn

  timeout check 5000ms
  http-request set-header X-Forwarded-Host %[req.hdr(host)]
  http-request set-header X-Forwarded-Port %[dst_port]
  http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
  http-request set-header X-Forwarded-Proto https if { ssl_fc }
  http-request set-header X-Forwarded-Proto-Version h2 if { ssl_fc_alpn -i h2 }
  http-request add-header Forwarded for=%[src];host=%[req.hdr(host)];proto=%[req.hdr(X-Forwarded-Proto)];proto-version=%[req.hdr(X-Forwarded-Proto-Version)]

  cookie ha-sticky insert indirect nocache httponly secure

  server sample_server canary-inner:8080 cookie 1 weight 1

I would expect ‘timeout client’ to cover the idle time within a request/response exchange, and ‘timeout http-keep-alive’ to cover the idle time between a response and the next request.
This also happens when there is only one frontend and one backend.
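
In the meantime, assuming the observed behaviour is real, i.e. the effective idle window is the minimum of ‘timeout client’ and ‘timeout http-keep-alive’, the only workaround I see for long-lived connections is to raise both, e.g. (made-up values):

defaults
  # both raised so that neither one cuts the keep-alive idle period short
  timeout client 5m
  timeout http-keep-alive 5m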

Can anyone confirm this, or does anyone have another explanation?

Thanks a lot!

Exactly what 1.8 release is this? Can you provide the output of haproxy -vv?

Are you testing this with HTTP/1.1 or are you also testing with H2?

This was originally discovered with 1.8.1 (as in minishift 3.11) and also reproduced standalone with the official haproxy Docker image for 1.8.1:

$ docker run -it --rm  haproxy:1.8.1 haproxy -vv

HA-Proxy version 1.8.1 2017/12/03
Copyright 2000-2017 Willy Tarreau <willy@haproxy.org>

Build options :
  TARGET  = linux2628
  CPU     = generic
  CC      = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv -Wno-null-dereference -Wno-unused-label
  OPTIONS = USE_ZLIB=1 USE_OPENSSL=1 USE_LUA=1 USE_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with OpenSSL version : OpenSSL 1.1.0f  25 May 2017
Running on OpenSSL version : OpenSSL 1.1.0f  25 May 2017
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2
Built with Lua version : Lua 5.3.3
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Encrypted password support via crypt(3): yes
Built with multi-threading support.
Built with PCRE version : 8.39 2016-06-14
Running on PCRE version : 8.39 2016-06-14
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with zlib version : 1.2.8
Running on zlib version : 1.2.8
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with network namespace support.

Available polling systems :
      epoll : pref=300,  test result OK
       poll : pref=200,  test result OK
     select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available filters :
	[SPOE] spoe
	[COMP] compression
	[TRACE] trace

But it’s the same with the latest 1.8 (1.8.25), 1.9 (1.9.15) and 2.x (2.1.4) images, in both alpine and non-alpine variants.

This is all HTTP/1.1.
It happens on both IPv4 and IPv6.

Simplified the config:

defaults
  timeout connect 5s
  timeout client 10s
  timeout client-fin 1s
  timeout server 15s
  timeout server-fin 1s
  timeout http-request 3s
  timeout http-keep-alive 20s

frontend public
  bind :80
  mode http

  use_backend samlpe_service

backend samlpe_service
  mode http

  server sample_server echo:8080

The difference between the client and server timeouts is deliberate, to make sure it isn’t actually the server timeout that kills the connection.

The client is sending TCP keepalive packets at 5s intervals.
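
For completeness, this is roughly how the standalone reproduction is wired up (a sketch; the image names and network are just what I happen to use, and any simple HTTP server listening on 8080 works as the echo backend):

# backend container named "echo" so the config's "server sample_server echo:8080" resolves
$ docker network create hatest
$ docker run -d --name echo --network hatest hashicorp/http-echo -listen=:8080 -text=ok
# haproxy with the simplified config mounted at the official image's config path
$ docker run -d --name lb --network hatest -p 80:80 \
    -v $PWD/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro haproxy:1.8.1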

Added my setup configuration here.