I am not very familiar with HAProxy internals, so please forgive me if this question has been asked before. I have gone through the HAProxy docs and this forum’s questions related to nbproc setups but couldn’t find anything about our issue.
We have a multi-processor machine where I am using nbproc 32 to distribute the load and get the maximum performance. However, the downside of nbproc 32 is that every process performs its own health checks on the backend server, which creates a lot of load. If we switch to nbproc 1 and nbthread 32, the issue seems to be resolved, but I am not sure whether HAProxy will still utilize all the CPUs available to it.
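For context, here is a rough sketch of our current multi-process setup (backend name and server address are made up, not our real config):

```
# Multi-process setup: each of the 32 processes runs its own
# health checks against the backend server, so the server
# receives 32 parallel streams of checks.
global
    nbproc 32

backend be_app
    server app1 192.0.2.10:8080 check
```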
Any help will be really appreciated. We are using HAProxy 1.8.
I’m not sure what you are asking.
Can you elaborate on why you think multi-threading would not use all CPUs?
Thanks a lot for your response. As I mentioned, I am not familiar with the internals of HAProxy, so I thought maybe the port check is done only by the parent process and not by the child threads. What I am really asking is how to get maximum performance out of all the available CPU power while not overloading the intermediate devices and the end server with too many health checks.
Each process runs its own health checks.
Threads do not; health checks run once per process, no matter how many threads that process has.
By using multi-threading instead of multi-process mode, you don’t hammer your backend server with health checks.
Multi-threading is the way to go, as you already suggested.
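A minimal global section for the threaded setup could look like this (assuming you want 32 threads to match your 32 CPUs; exact tuning is up to you):

```
# Threaded setup: one process, 32 threads (nbthread is available
# since HAProxy 1.8). Health checks run once per process, so the
# backend sees a single stream of checks instead of 32, while the
# threads still spread the traffic across the CPUs.
global
    nbproc 1      # default, shown here for clarity
    nbthread 32
```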
I still don’t see a question in there that you haven’t already answered yourself, so I’m gonna ask: do you have performance issues with multi-threading that you didn’t have with multi-process mode?
Thanks a lot, Lukastribus. You are right. I just wanted to confirm my solution with an expert eye like yours. Now that you have confirmed that the checks are indeed performed at the process level and not per thread, we will update our configuration accordingly. Once again, thanks a lot for your time and answers.