I have an haproxy.cfg with content like this:
....
peers ha-peers
    peer peer1 <IP1>:1024
    peer peer2 <IP2>:1024
    table http_407 type ip size 100k expire 1m store gpc0
    table http_466 type ipv6 size 100k expire 1m store gpc1

frontend http-proxy
    mode http
    bind *:9000-9010 defer-accept
    default_backend http-proxy
    acl many_466 sc1_get_gpc1 gt 100
    acl many_407 sc0_get_gpc0 gt 60
    http-request track-sc1 src table ha-peers/http_466
    http-request track-sc0 src table ha-peers/http_407
    http-request deny deny_status 429 if many_407
    http-request deny deny_status 429 if many_466
    http-response sc-inc-gpc0(0) if { status 407 }
    http-response sc-inc-gpc1(1) if { status 466 }
...
I have two load balancer servers running HAProxy with this configuration. I defined a peers section so the stick tables are shared between them.
So the stick tables track each client's source address and, based on the response status code, increment a gpc counter; once a counter exceeds the threshold in my ACL rules, further requests from that client are denied with a 429.
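To check whether the counters are really incrementing and being synced between the two nodes, you can dump a table over the runtime API. A minimal sketch, assuming the stats socket lives at /var/run/haproxy.sock as in the command below (the function falls back to printing the command when no socket is present, e.g. on a machine without HAProxy):

```shell
# show_ha_table TABLE [SOCKET]: dump one of the peer-shared stick tables so
# you can inspect the per-IP gpc0/gpc1 counters on this node.
show_ha_table() {
  table="$1"
  sock="${2:-/var/run/haproxy.sock}"   # assumed socket path from the question
  if [ -S "$sock" ]; then
    echo "show table ha-peers/$table" | socat stdio UNIX-CONNECT:"$sock"
  else
    echo "no socket at $sock; would send: show table ha-peers/$table"
  fi
}
```

Running `show_ha_table http_407` on both load balancers and comparing the entries is a quick way to confirm peer synchronization is actually working.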
The point is: I have an API that triggers a clear-table command in this format:
echo 'clear table ha-peers/http_466' | sudo socat stdio UNIX-CONNECT:/var/run/haproxy.sock
When the stick table is not being updated heavily, the command above works like a charm. But after releasing to production, where there are many, many requests, the command no longer works.
I suspect the mutual update of the stick tables between the peers is the problem in this case.
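One thing worth ruling out: `clear table` only empties the table on the node whose socket you talk to, and entries still held by the other peer can be pushed back by peer synchronization shortly after the clear. A hedged sketch of clearing both tables, to be run on each load balancer (socket path assumed from the question; the function prints the command instead when no socket exists):

```shell
# clear_ha_tables [SOCKET]: issue "clear table" for both counter tables on
# the local node. Run this on EVERY peer, because peer sync can re-push
# entries from any node that was not cleared.
clear_ha_tables() {
  sock="${1:-/var/run/haproxy.sock}"   # assumed socket path from the question
  for t in http_407 http_466; do
    cmd="clear table ha-peers/$t"
    if [ -S "$sock" ]; then
      echo "$cmd" | socat stdio UNIX-CONNECT:"$sock"
    else
      echo "no socket at $sock; would send: $cmd"
    fi
  done
}
```

If the entries reappear even after clearing on both nodes at roughly the same time, the in-flight production traffic is simply re-creating them, which is indistinguishable from the clear "not working".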