I have the same issue. I’m using haproxy 2.0.13-2 and MariaDB 10.4.13 on Ubuntu 20.04 VMs.
When connecting directly to my first database server, I get responses about 2 times faster than when I go through haproxy (I have 3 DB servers of the same size working in a Galera cluster). Increasing the resources on the haproxy VM (CPU and memory) makes no difference at all, which suggests the limit is in the configuration rather than the hardware. The VM is never overloaded either, nor does it use much RAM.
If I simply point each web server (running a PHP app) at a different database server, it also works (so the Galera cluster itself is fine), but I would like something more robust.
I’m not using the clustercheck script yet (I’m on Ubuntu, and most mentions of clustercheck I’ve found are for CentOS), but plenty of tutorials skip it anyway and stop at “it’s working” without ever getting into the details of speed.
I also get many “MySQL server has gone away” errors once I generate a lot of traffic on my web servers (and therefore a lot of requests to the DB).
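One thing worth ruling out for the “gone away” errors: if haproxy’s client/server timeouts are shorter than MariaDB’s wait_timeout, haproxy will cut idle TCP connections while PHP still holds the handle, and the next query fails exactly like this. A minimal sketch of what I mean (the listener name and timeout values are illustrative, not taken from my real config — 28800s just mirrors MariaDB’s default wait_timeout):

```
# Hypothetical listener for the Galera cluster.
listen mysql-cluster
    bind *:3306
    mode tcp
    # If these are shorter than MariaDB's wait_timeout, haproxy drops
    # idle connections and clients see "MySQL server has gone away".
    timeout client 28800s
    timeout server 28800s
```

Whether this applies depends on how long the PHP app keeps connections idle between queries.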
When checking the (web) stats report, I see that the “Session rate max” is always at 60 or 61 for each DB instance, even while the web servers are still waiting on database responses.
I’m now experimenting with the maxconn setting. (If you use clustercheck from https://github.com/asiellb/mariadb-clustercheck, the health check is done over HTTP instead of directly in MySQL, but I’m not sure that changes anything for what follows.)
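For reference, with an HTTP clustercheck the health-check side would look roughly like this in haproxy (the addresses are placeholders, and port 9200 is only the conventional clustercheck port — adjust to wherever your clustercheck service actually listens):

```
# Hypothetical listener: MySQL traffic on 3306, health checks over HTTP.
listen mysql-cluster
    bind *:3306
    mode tcp
    option httpchk GET /
    # clustercheck conventionally answers on port 9200
    server db1 10.0.0.1:3306 check port 9200
    server db2 10.0.0.2:3306 check port 9200
    server db3 10.0.0.3:3306 check port 9200
```

The check then reflects Galera node state (synced or not) rather than just whether mysqld accepts a TCP connection.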
It appears you can set a maxconn on each “server” line (as in the example at the end of https://www.haproxy.com/blog/play_with_maxconn_avoid_server_slowness_or_crash/), and that does let you tune the max sessions per server, but it doesn’t really raise the number of “current” sessions much.
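To make the experiment concrete, here is roughly what I’m trying, following that blog post (server names, addresses, and the maxconn value are placeholders I’m testing with, not recommendations):

```
# Hypothetical listener with per-server connection caps.
listen mysql-cluster
    bind *:3306
    mode tcp
    balance leastconn
    # Per-server maxconn: excess connections are queued by haproxy
    # instead of piling onto the backend.
    server db1 10.0.0.1:3306 check maxconn 200
    server db2 10.0.0.2:3306 check maxconn 200
    server db3 10.0.0.3:3306 check maxconn 200
```

Note the frontend/global maxconn still caps the total, so raising only the per-server values may explain why the “current” sessions barely move.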