Any issues with large map files?

We have been using HAProxy map files to manage lists of redirects and rewrites on our website, usually matching on path but sometimes also path_beg.

A sample config snippet from a test instance:

http-request redirect location %[path,lower,map(/etc/haproxy/big-redir-map.txt)] code 301 if { path,lower,map(/etc/haproxy/big-redir-map.txt) -m found }
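
For reference, the map file itself is just a plain text file of whitespace-separated key/value pairs, one entry per line, with optional # comments. The entries below are made-up examples, not our real data:

    # /etc/haproxy/big-redir-map.txt
    # key (lowercased request path)    value (redirect target)
    /old-page             https://www.example.com/new-page
    /legacy/pricing       https://www.example.com/pricing
    /2019/summer-sale     https://www.example.com/offers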

Based on my tests, HAProxy handles very large map files quickly and efficiently, with minimal impact on memory consumption.

I tested with map files containing 0, 10k and 100k entries, and the difference in performance seemed to be negligible: my HAProxy instance on a small test system was able to serve about 7k requests/second even with 100k entries in the redirect map.
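
In case anyone wants to reproduce this, a quick way to generate a synthetic map file of arbitrary size is a shell one-liner like the one below (the path pattern and output location are just placeholders, not our real layout):

    # Generate 100k synthetic entries: /test-path-N -> https://www.example.com/target-N
    seq 1 100000 | awk '{ printf "/test-path-%d https://www.example.com/target-%d\n", $1, $1 }' \
        > /etc/haproxy/big-redir-map.txt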

Before we commit to building a system that manages redirects in this way, are there any potential issues, limits or other things we should be aware of when using large map files?

Our server typically handles about 1000-2000 requests/second and has plenty of spare memory, but the CPUs are quite busy.

We would likely start out with smaller map files (maybe 20k-30k entries), but it would be good to know whether we can plan to scale this up to 100k+ entries without worrying about map file size.

No issue; that is exactly the use case for map files: matching (very) large amounts of data.
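
One thing worth knowing as you scale up: map entries can be inspected and updated at runtime through the Runtime API, so you do not need a reload for every change. A minimal sketch, assuming the stats socket is exposed at /var/run/haproxy.sock (adjust paths to your setup):

    # Inspect the currently loaded map
    echo "show map /etc/haproxy/big-redir-map.txt" | socat stdio /var/run/haproxy.sock

    # Add, change and remove individual entries without a reload
    # (runtime changes are not written back to the file on disk, so also
    #  update the file if the change must survive a reload/restart)
    echo "add map /etc/haproxy/big-redir-map.txt /old-page https://www.example.com/new-page" | socat stdio /var/run/haproxy.sock
    echo "set map /etc/haproxy/big-redir-map.txt /old-page https://www.example.com/other-page" | socat stdio /var/run/haproxy.sock
    echo "del map /etc/haproxy/big-redir-map.txt /old-page" | socat stdio /var/run/haproxy.sock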
