How to rate limit with exceptions

I’m looking to rate limit calls to certain endpoints, each at a different rate.

For example:

api.testserver.com/v1/accounts can be requested 1 time per second
api.testserver.com/v1/images can be requested only 1 time per 30 seconds
etc…

In addition, I would like to whitelist some IP addresses so that they are not rate limited at all.

Is this possible at all?
I would appreciate any kind of help.

Thank you

I’ve made some progress:

frontend ft_http

bind :80
mode http
stats enable
stats auth admin:password
stats refresh 30s
stats show-node
stats uri  /haproxy_adm_panel
stats admin if TRUE

# Use General Purpose Counter (gpc0) in SC1 as a global abuse counter
# Monitors the number of requests sent by an IP over a period of 10 seconds
stick-table type ip size 1m expire 10s store gpc0,http_req_rate(10s)
acl white_list src 127.0.0.1 109.109.134.237
tcp-request connection track-sc1 src

# refuses a new connection from an abuser
tcp-request content reject if { src_get_gpc0 gt 0 } !white_list

# returns a 403 for requests in an established connection
http-request deny if { src_get_gpc0 gt 0 } !white_list

default_backend bk_http
 
backend bk_http  # hosts are added to this backend by default

# If the source IP sent 10 or more HTTP requests over the defined period,
# flag the IP as an abuser on the frontend
acl abuse src_http_req_rate(ft_http) ge 10
acl flag_abuser src_inc_gpc0(ft_http) ge 0

# Returns a 403 to the abuser
http-request deny if abuse flag_abuser

server webserver1 172.18.2.86:80 check

This successfully rate limits the HTTP requests, but:

1- the whitelist doesn’t work.
2- the rate limit is global and not per endpoint.

any ideas?

What if you put tcp-request content accept if white_list before the reject statement?
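
For example (untested, reusing the white_list ACL and the counters already defined in your frontend):

# tcp-request content rules are evaluated in order, so whitelisted sources
# are accepted before the reject rule can fire
tcp-request content accept if white_list
tcp-request content reject if { src_get_gpc0 gt 0 }

# http-request rules are a separate ruleset, so keep the whitelist check here
http-request deny if { src_get_gpc0 gt 0 } !white_list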

If you use multiple backends with their own stick tables, then it could be per endpoint.

Do you have an example of that, nictrix?
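
Not a config I have tested, but roughly what nictrix describes could look like this, using the two endpoints from the first post (the backend names, the SC2 slot and the thresholds are only illustrative):

frontend ft_http
bind :80
mode http
# route each endpoint to its own backend so each one gets its own stick table
acl is_accounts path_beg /v1/accounts
acl is_images path_beg /v1/images
use_backend bk_accounts if is_accounts
use_backend bk_images if is_images
default_backend bk_http

backend bk_accounts
mode http
# allow 1 request per second per source IP
acl white_list src 127.0.0.1 109.109.134.237
stick-table type ip size 1m expire 10s store http_req_rate(1s)
http-request track-sc2 src
http-request deny if { sc2_http_req_rate gt 1 } !white_list
server webserver1 172.18.2.86:80 check

backend bk_images
mode http
# allow 1 request per 30 seconds per source IP
acl white_list src 127.0.0.1 109.109.134.237
stick-table type ip size 1m expire 60s store http_req_rate(30s)
http-request track-sc2 src
http-request deny if { sc2_http_req_rate gt 1 } !white_list
server webserver1 172.18.2.86:80 check

The white_list ACL is repeated in each backend because ACLs declared in the frontend are not visible from a backend; each backend also tracks the source IP in its own stick table, which is what makes the rate limit per endpoint instead of global.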