Hi, we are using HAProxy 2.0.2 to TCP-proxy Redis.
config v1: maxconnrate 20000 (in the global section)
config v2: rate-limit sessions 1000 (in the defaults section)
Both configs also have listen sections like the following (a minimal sketch of the two variants is shown after the listen block):
listen 25001
    bind :25001
    balance roundrobin
    maxconn 16384
    server 1 10.19.10.10:1061
    server 2 0.0.0.0:0 disabled
    server 3 0.0.0.0:0 disabled
    server 4 0.0.0.0:0 disabled
    [repeated for many more]
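For clarity, here is a minimal sketch of where the two rate settings sit in each variant. Only the maxconnrate and rate-limit sessions lines are from our real configs; the rest of the global/defaults contents is trimmed, and the values are written as plain integers rather than the 20k/1k shorthand above.

# config v1
global
    maxconnrate 20000        # caps the process-wide incoming connection rate (connections/sec)

# config v2
defaults
    rate-limit sessions 1000 # caps the session rate of every frontend/listen inheriting these defaults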
About one hour after upgrading from config v1 to config v2, HAProxy CPU usage climbs to as much as 2000%; we run 1 process with 20 threads, in master-worker mode, inside Docker.
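For context, a sketch of the process model we believe is relevant here. The directives are standard HAProxy 2.0 keywords, but apart from the 20 threads, exactly how master-worker mode is enabled in our Docker entrypoint is an assumption (it may equally be the -W command-line flag):

global
    master-worker    # master process supervising a single worker
    nbproc 1         # one worker process
    nbthread 20      # 20 threads in that worker, so 2000% CPU means every thread is saturated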
For the worst-affected frontend (there are many), haproxy_exporter shows the frontend session rate rising to about 1k, which is as expected, but the backend session rate is much higher, around 15k. We also see backend connection errors and retries, and eventually Prometheus cannot scrape the exporter's metrics at all.
The Redis clients do occasionally exceed a 1k session rate, which is why we added rate-limit sessions. We expect those high-session-rate clients to stop working normally (made worse by their aggressive retry behavior), while HAProxy itself stays calm.
Why is the backend session rate so much higher than the frontend session rate?
We have tried to reproduce the 2000% CPU by using iptables to drop/reject traffic between HAProxy and the backend Redis, but in that test the backend session rate stays the same as the frontend's.