Can my haproxy.cfg be improved?

I’ve been using this config for a while and it’s been working fine, but I recently began wondering whether it could be improved - could someone take a look and let me know whether it’s optimal, please?

global
  #
  #
  # to have these messages end up in /var/log/haproxy.log you will
  # need to:
  #
  # 1) configure syslog to accept network log events.  This is done
  #    by adding the '-r' option to the SYSLOGD_OPTIONS in
  #    /etc/sysconfig/syslog
  #
  # 2) configure local2 events to go to the /var/log/haproxy.log
  #   file. A line like the following can be added to
  #   /etc/sysconfig/syslog
  #
  #    local2.*                       /var/log/haproxy.log
  #
  # log         127.0.0.1 local2


  tune.ssl.default-dh-param 2048

  ssl-default-bind-options no-sslv3 no-tls-tickets
  ssl-default-bind-ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA

  ssl-default-server-options no-sslv3 no-tls-tickets
  ssl-default-server-ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA



  # chroot      /var/lib/haproxy
  pidfile     /var/run/haproxy.pid
  user        haproxy
  group       haproxy
  daemon

  # turn on stats unix socket
  stats socket /var/lib/haproxy/stats

  tune.ssl.default-dh-param 2048

defaults
  mode                    http
  log                     global
  option                  httplog
  option                  dontlognull
  option http-server-close
  # option forwardfor       except 127.0.0.0/8
  option forwardfor
  option                  redispatch
  retries                 3
  timeout http-request    5s
  option http-buffer-request
  timeout queue           1m
  timeout connect         10s
  timeout client          1m
  timeout server          1m
  timeout http-keep-alive 10s
  timeout check           10s
  maxconn                 30000

    
frontend http-in
  bind *:80
  bind :::80
  bind *:443 ssl crt /etc/haproxy/certs/ no-sslv3 no-tlsv10
  bind :::443 ssl crt /etc/haproxy/certs/ no-sslv3 no-tlsv10
  acl letsencrypt-acl path_beg /.well-known/acme-challenge/
  use_backend letsencrypt-backend if letsencrypt-acl
  default_backend main_apache_sites
  http-request add-header X-Forwarded-Proto https if { ssl_fc }

  # Define hosts
  redirect prefix http://site-one.com code 301 if { hdr(host) -i www.site-one.com }
  acl host_site-one hdr(host) -i site-one.com
  redirect prefix http://site-two.com code 301 if { hdr(host) -i www.site-two.com }
  acl host_site-two hdr(host) -i site-two.com

  #Redirect sites to HTTPS
  acl ssl_redirect_hosts hdr(Host) -i site-one.com
  acl ssl_redirect_hosts hdr(Host) -i site-two.com
  redirect scheme https if ssl_redirect_hosts !{ ssl_fc }
  redirect scheme https code 301 if !{ ssl_fc }


  # figure out which one to use
  use_backend site-one_docker if host_site-one
  use_backend site-two_docker if host_site-two


backend main_apache_sites
  server server1 127.0.0.1:8080 cookie A check
  cookie JSESSIONID prefix nocache

backend site-one_docker
  server server2 127.0.0.1:8889 cookie A check maxconn 5000
  cookie JSESSIONID prefix nocache

backend site-two_docker
  server server3 127.0.0.1:8894 cookie A check
  cookie JSESSIONID prefix nocache

backend letsencrypt-backend
  server letsencrypt 127.0.0.1:55555

Thanks in advance for any help.

Hope it’s OK to bump this, but does anyone have any thoughts on it? This config is a good few years old now and I’m curious whether there’s a better, more up-to-date approach.

I don’t see anything wrong with this configuration.

A few minor suggestions:

  • set maxconn in the global section. This requires some thinking and planning, which is the point: you then know how many connections haproxy will actually handle, how much RAM will be required, and haproxy will initialize the correct amount of resources. The alternative is undefined behavior (when you hit either implicit limits or run out of memory).
  • you may not want to health-check backend servers if all you have is a single server anyway, without any backup servers
  • what’s the reason for option http-server-close? You may want to allow full keep-alive. (A sketch of all three tweaks follows this list.)
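
A rough sketch of what those three tweaks could look like in your config (the global maxconn value is only a placeholder to show where the setting lives, not a sizing recommendation):

global
  # process-wide limit: haproxy allocates resources for this many connections at startup
  maxconn 50000

defaults
  # option http-server-close   # dropped, so haproxy uses its default full keep-alive mode

backend site-one_docker
  # 'check' dropped: a single server with no backup has nothing to fail over to
  server server2 127.0.0.1:8889 cookie A maxconn 5000
  cookie JSESSIONID prefix nocache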
1 Like

Thanks for the reply Lukas :heart:

Is there a formula for setting this if you know how much RAM the server has? (Mine has 64 GB.)

Or are you suggesting it is better to remove it from defaults and instead add it per backend? (As I have for backend site-one_docker in my example.)

Do you mean check in the backends, e.g. server server1 127.0.0.1:8080 cookie A check? Since these are running on a single server, can I safely remove it? Or do you mean timeout check 10s from the defaults?

I’m not sure tbh Lukas; looking at my notes it looks like that’s what I’ve had since I started using HAProxy, so I’m guessing it was recommended by the person who helped me set it up. I will comment it out and see if it makes any difference.

So maxconn has a different meaning in different places, so it’s important to understand that I was talking about maxconn in the global section, which affects the entire process (meaning memory initialization at startup as well as maximum memory usage under load).

However, this is orthogonal to the maxconn configuration in the defaults section: maxconn there propagates to every single frontend section.

It’s also orthogonal to maxconn on a server line: this limits the connections to that specific server and queues additional connections (up to timeout queue).

Those three maxconn settings are unrelated, or only slightly related.
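
For illustration, this is where each of those three maxconn settings sits in a config shaped like yours (the numbers are only examples):

global
  maxconn 50000    # per-process limit: memory is initialized for this many connections at startup

defaults
  maxconn 30000    # propagates to every frontend section

backend site-one_docker
  server server2 127.0.0.1:8889 cookie A check maxconn 5000    # per-server limit: excess connections queue up to 'timeout queue'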

You have just one frontend section, so that makes things easier.

Now, memory usage is about 16 kB per connection, and since we need two connections to pass an HTTP transaction through (one on the frontend, one to the backend), we should think in terms of roughly 33 kB overall (as per the global maxconn docs).

This is without additional features like SSL. I don’t recall off the top of my head the memory consumption per SSL connection, but it’s probably safe to just double it once again, so you’d end up with 64 kB per connection (accounting for both an SSL frontend connection and a backend connection).

Considering the 30k maxconn in your frontend, I’d suggest a global maxconn of around 50000, which would put haproxy at roughly 3.2 GB of memory use under heavy load.
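
Spelled out (using the rough per-connection estimates above, so treat it as a ballpark figure only):

  50000 connections x 64 kB per connection = 3,200,000 kB, i.e. roughly 3.2 GB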

However, if you have lots of free memory you can of course go larger than this. But knowing what limits you set and how much memory usage to expect helps you scale.

Also bear in mind that memory usage may double or triple again for a short period when you reload haproxy and old connections are still hanging on to the old process.

hard-stop-after can be used to limit this amount of time.
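
For example, something like this in the global section (the 30s value is only an illustration; pick whatever grace period suits your traffic):

global
  # force old processes to exit at most 30 seconds after a reload
  hard-stop-after 30s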

Yes, exactly. Since you are not load balancing or failing over the same application between multiple servers, it is not needed.

Some historic context may help here:

Back in the day, when full HTTP keep-alive was not supported and the default mode was to close the connection after every single HTTP transaction, this option enabled client-side keep-alive and therefore improved performance.

However, haproxy has supported full keep-alive (even on the backend side) for a long time now. The option was probably added to improve performance back then, but today the default (which is http-keep-alive) will perform better.
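
In your config that simply means dropping the option from the defaults section, or, if you prefer to make the intent explicit, naming the current default mode yourself, roughly like this:

defaults
  # option http-server-close   # closes the server-side connection after each response
  option http-keep-alive       # the modern default anyway, listed only for clarity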

1 Like

Thank you for the in-depth explanation - that makes so much sense! There was actually a maxconn in the global section as well as the defaults section, and I just assumed it was a duplicate, so I removed the one in the global section :sweat_smile: I’ve put it back into the global section and set it to 50000 as you suggested. I also copied parts of your post in as comments so I can easily refer back to them in future :grinning:

Thank you Lukas, I have removed them.

Should I remove or comment out timeout check 10s from the defaults section too?

Ah I see. Thanks again for the explanation - I have removed it.

I’ve just implemented all your changes and I think the server definitely feels a bit snappier! I wish I had posted this thread a lot earlier now - thank you Lukas! :heart:

1 Like