Nginx uses the leaky bucket method to limit request rate (the ngx_http_limit_req_module).
This means that if I set a limit of 100 req/sec and then get flooded with 120 req/sec, 100 requests are served normally and the 20 surplus requests are answered with a 503 error.
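For reference, the Nginx behavior I'm describing looks roughly like this (the zone name, sizes, and upstream address are illustrative, not my real config):

```nginx
# Illustrative Nginx config: limit requests per client IP to 100 req/sec
# and answer the surplus with a 503.
http {
    limit_req_zone $binary_remote_addr zone=mylimit:10m rate=100r/s;
    limit_req_status 503;

    server {
        listen 80;
        location / {
            limit_req zone=mylimit;
            proxy_pass http://192.168.0.10;
        }
    }
}
```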
How can I setup this with Haproxy?
I have read a lot of the documentation about sc_http_req_rate, but since the measured rate stays at 120 req/sec, every request ends up getting the 503 error.
    frontend main
        bind *:80
        acl foo_limited_req sc_http_req_rate(0) ge 100
        http-request track-sc0 path table Abuse  # use the request path as the key of the table
        use_backend bk1 if foo_limited_req
        default_backend web

    backend web
        server web1 192.168.0.10

    backend Abuse
        stick-table type string len 128 size 100K expire 30m store http_req_rate(1s)

    backend bk1
        server listenerror 127.0.0.1:81

    listen errorlistener
        bind 127.0.0.1:81
        mode http
        errorfile 503 /etc/haproxy/errors/200-tuned.http
I want the web backend to serve the flow of 100 req/sec, and the bk1 backend to absorb the 20 req/sec surplus. I am using HAProxy version 1.9.2.
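To illustrate what I think I need (this is an untested sketch, which is why I'm asking): with a 1-second period on the counter, the rate sample increments with each request, so a `gt 100` match would only catch requests beyond the 100th in a given second, while `ge 100` also catches the 100th and keeps matching as long as the flood continues.

```
# Sketch only — assumes http_req_rate(1s) counts requests in the
# current one-second window, so the 101st..120th requests match
# "gt 100" while the first 100 do not.
frontend main
    bind *:80
    mode http
    http-request track-sc0 path table Abuse
    use_backend bk1 if { sc_http_req_rate(0) gt 100 }
    default_backend web

backend Abuse
    stick-table type string len 128 size 100K expire 10s store http_req_rate(1s)
```

Is something like this the right direction, or does HAProxy need a different mechanism to split traffic this way?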