HAProxy community

High memory usage with many SSL certificates and slow (re)load

On 2.0.12 servers with around 18000 RSA SSL certificates (mainly LetsEncrypt certs) loaded with crt-list, each HAProxy worker thread uses around 10 GB of RAM (only 200 MB if the crt-list file is empty), and reloading HAProxy takes about 4 to 5 minutes on a server with a Xeon E3-1241 v3, 32 GB of RAM, and the certificates on a tmpfs partition.

Is there any way to optimize the memory usage and/or reload time?

The relevant configuration parts are (there are about 35 identical “bind” entries with different IPs):

global
tune.ssl.default-dh-param 2048
ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets
ssl-default-bind-ciphers AES128+EECDH:AES128+EDH
tune.ssl.ssl-ctx-cache-size 4000
nbproc 1
nbthread 7
cpu-map auto:1/all 0-

frontend frontend-ssl
bind $IP:443 ssl crt /path/wildcard.defaultdomain.com.pem crt /path/wildcard.otherdomain.com.pem crt-list /path/ssl-tmpfs/crt.list alpn http/1.1

I assume you mean worker process, otherwise you’d be using 70 GB of RAM with the 7 threads you are using.

Identical as in the same certificates/certificate-list are used for all of those?

Then I’d suggest unifying them into a single bind statement instead; that should improve the situation by a factor of 35.
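If I read the configuration manual correctly, a bind line accepts a comma-separated list of address:port pairs, so the 35 entries could be collapsed into something like the following sketch (with $IP1, $IP2, … standing in for your actual addresses), which should make HAProxy load the certificate store only once:

frontend frontend-ssl
bind $IP1:443,$IP2:443,$IP3:443 ssl crt /path/wildcard.defaultdomain.com.pem crt /path/wildcard.otherdomain.com.pem crt-list /path/ssl-tmpfs/crt.list alpn http/1.1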

I believe 2.2-dev can do some deduplication here, but I’m not sure if it covers all cases and if it also improves reload time (or just memory consumption).

Yes, I was talking about worker processes, not threads.

And indeed, putting the multiple IPs on a single “bind” statement did the trick: it now uses around 850 MB of RAM and takes only ~15 seconds to reload. Thanks for the tip.

Deduplication would still be nice, as the same file is loaded for multiple crt-list statements, and the crt-list is used twice since it’s also referenced by another frontend (but it’s now less of an issue in my case).

Thanks for the solution :slight_smile:
