Can I 'replace' iptables + ipset with acl src -f?

I was wondering what the closest options are to getting iptables + ipset-like functionality in haproxy, and what the performance difference is. I have been looking at this manual about DoS protection[1].

Is blacklisting with ACLs in files still the way to go with haproxy 2.2?

http-request deny if { src -f /etc/hapee-1.8/blacklist.acl }

I read an article saying ipset has hardly any influence on latency even with ~50k entries. However, you have to create separate sets depending on whether you want /24, /16 or individual IP entries. Currently I have sets for RDP, web, SMTP, POP/IMAP and FTP access; some have 60k entries.

From the example I see that I can mix /23 and /24 entries in one file.

If I specify both 1.1.4.0/24 and 1.1.4.1 in the file, will haproxy still process two entries, or will 1.1.4.1 be disregarded?

Should this ACL file be sorted? Or does the order of IP addresses in the file not matter, because haproxy optimizes it when loading into memory?

How much memory would a list of ~50k entries use?

Is it better to use just one ACL list per service/frontend, or is there next to no penalty in using multiple, like this?

http-request deny if { src -f amazon-google.acl } || { src -f custom-blacklist.acl }

If I add an IP address to the ACL via the admin socket[2], will this automatically be written to the ACL file?

And a bit out of curiosity: how does blacklisting via ACLs compare to ipset in terms of added latency?

[1]

[2]

If you don’t care about returning an HTTP response, then it’s cheaper to reject the TCP connection outright:

tcp-request connection reject if ...
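For example, a sketch of a frontend rejecting blacklisted sources at connection level, reusing the file path from your post (the frontend name is made up):

```
frontend web
    mode http
    bind 0.0.0.0:80
    # Reject at TCP accept time, before any HTTP processing happens
    tcp-request connection reject if { src -f /etc/hapee-1.8/blacklist.acl }
```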

Neither does haproxy: its ACL matching has hardly any influence on latency either.

There is no need to worry about overlapping entries; matching is efficient enough either way. If you want the file aggregated (a 100 MB file can shrink to 500 KB when you do), then you need to take care of that yourself.
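If you do want to aggregate such a file yourself, here is a minimal sketch using only Python's standard library (the function name and input handling are my own, not anything haproxy ships):

```python
import ipaddress

def aggregate_acl(entries):
    """Collapse overlapping and adjacent networks into a minimal set.
    Bare IPs like "1.1.4.1" are treated as /32 networks."""
    nets = [ipaddress.ip_network(e.strip(), strict=False)
            for e in entries if e.strip()]
    return [str(n) for n in ipaddress.collapse_addresses(nets)]

# 1.1.4.1 is absorbed by 1.1.4.0/24, and the two adjacent /24s merge into a /23
print(aggregate_acl(["1.1.4.0/24", "1.1.4.1", "1.1.5.0/24"]))
# → ['1.1.4.0/23']
```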

No.

It doesn’t matter; haproxy can easily handle far more than 50k entries and is only limited by your RAM.

I suggest you test it; I’d guess we are talking about a few MB. If you use multi-process mode (nbproc > 1), which you shouldn’t, then the RAM usage of course multiplies with the number of processes.
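To test it yourself, a quick sketch that generates a dummy 50k-entry blacklist file you can load and then compare haproxy’s memory usage with and without (file name, entry count and the choice of /24s are arbitrary; duplicates may occur and are harmless):

```python
import ipaddress
import random

def write_test_acl(path, n=50_000, seed=1):
    """Write n pseudo-random /24 networks, one per line."""
    random.seed(seed)
    with open(path, "w") as f:
        for _ in range(n):
            base = random.randrange(2**32) & ~0xFF  # zero the host byte
            f.write(f"{ipaddress.IPv4Address(base)}/24\n")

write_test_acl("test-blacklist.acl")
```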

It’s fine.

People run haproxy with millions of ACL entries in production. It’s extremely efficient and you have nothing to worry about.

Is it possible to load these files once in the defaults or global section and then reference them in the frontend section? Something like:

defaults/global
xxx blacklistweb src -f clouds.acl
xxx blacklistweb src -f web.acl
xxx blacklistweb src -f test.acl

frontend http
mode http
bind 0.0.0.0:80
tcp-request connection reject if blacklistweb

It isn’t, as far as I know.
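What does work, as a sketch: the acl keyword can declare named ACLs inside each frontend (not in defaults or global), and multiple acl lines with the same name are OR’ed together, so the files still have to be referenced per frontend (names and file paths here are taken from your example):

```
frontend http
    mode http
    bind 0.0.0.0:80
    acl blacklistweb src -f clouds.acl
    acl blacklistweb src -f web.acl
    acl blacklistweb src -f test.acl
    tcp-request connection reject if blacklistweb
```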