HAProxy community

HAProxy not using full server resources for SSL offload

I have an HAProxy setup on a virtual machine (KVM) with 8 cores and 4 GB of memory.
I'm using it as a load balancer with SSL offloading (with verify required).
The server has a load average of 1.7 and is using only about a gigabyte of memory.
The SSL offload takes about 450 ms.
My question is: why is the server not using its full resources to decrease response time?
What is causing the event loop to stall?
I have enabled multi-process mode in my config.
Here is the configuration:

  nbproc 8
  cpu-map 1 0
  cpu-map 2 1
  cpu-map 3 2
  cpu-map 4 3
  cpu-map 5 4
  cpu-map 6 5
  cpu-map 7 6
  cpu-map 8 7

  log local0
  maxconn 20000
  uid 99
  gid 99
  tune.ssl.default-dh-param 2048
  tune.ssl.cachesize 1000000
  tune.bufsize 32768
  stats socket /var/run/haproxy1.sock mode 600 level admin process 1
  stats socket /var/run/haproxy2.sock mode 600 level admin process 2
  stats socket /var/run/haproxy3.sock mode 600 level admin process 3
  stats socket /var/run/haproxy4.sock mode 600 level admin process 4
  stats socket /var/run/haproxy5.sock mode 600 level admin process 5
  stats socket /var/run/haproxy6.sock mode 600 level admin process 6
  stats socket /var/run/haproxy7.sock mode 600 level admin process 7
  stats socket /var/run/haproxy8.sock mode 600 level admin process 8
  stats timeout 2m #Wait up to 2 minutes for input

#listen stats
#  bind :9001
#  mode http
#  stats enable
#  stats hide-version
#  stats realm Haproxy\ Stats
#  stats uri /haproxy_stats
#  stats auth  admin:sfPalang
#  stats admin if TRUE

  log     global
  mode    http
  maxconn 10000
  # option  httplog
  option  redispatch
  option  dontlognull
  retries                 3
  timeout http-request    10s
  timeout queue           1m
  timeout connect         10s
  timeout client          1m
  timeout server          1m
  timeout http-keep-alive 10s
  timeout check           10s

frontend https_frontend
  mode http
  option httpclose
  option forwardfor
  reqadd X-Forwarded-Proto:\ https
  SOME BACKEND CONFIGS (http backends with roundrobin config)
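
One thing not visible in the snippet above is the frontend's `bind` line. With `nbproc` on HAProxy 1.5, a common pattern for spreading SSL handshakes across all the pinned workers is one `bind` per process; a sketch of that pattern (the port, certificate, and CA paths are placeholders, not from the original post):

```
frontend https_frontend
  # one listener per worker so every pinned core gets its own accept queue
  bind :443 ssl crt /etc/haproxy/cert.pem ca-file /etc/haproxy/ca.pem verify required process 1
  bind :443 ssl crt /etc/haproxy/cert.pem ca-file /etc/haproxy/ca.pem verify required process 2
  # ... repeat through "process 8"
```

If there is only a single plain `bind` shared by all processes, the kernel decides which process accepts each connection, and the load can land unevenly across the workers.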

Unless those cores are dedicated to this VM, what most likely stalls your VM is the hypervisor giving the CPU away to other VMs.

You cannot use shared CPU cores for a performance critical role like this.

The cores are dedicated, since I don't have overcommitment.
I also checked HAProxy's CPU behavior, and each spawned process spends most of its time idle, which means the HAProxy event loop is waiting for some other process or …
Is there any way to look at the events happening to figure out what HAProxy is waiting for?
Should I run in debug mode?
I will check running on a physical server to make sure KVM is not the problem.

How are you benchmarking exactly? Have you checked per-core utilization? I guess if all requests hit one or two cores instead of all of them, the overall global CPU utilization would look like that.
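
If sysstat (`mpstat -P ALL`, `pidstat`) isn't installed, a quick sketch that reads `/proc/stat` directly also shows the per-core breakdown (the one-second interval and the output format are my own choices, not from any HAProxy tooling):

```shell
#!/usr/bin/env bash
# Rough per-core busy% over a short window, computed from two /proc/stat
# samples, so it works without the sysstat package.
per_core_busy() {
  local t1 t2
  t1=$(grep '^cpu[0-9]' /proc/stat)
  sleep "${1:-1}"
  t2=$(grep '^cpu[0-9]' /proc/stat)
  paste <(echo "$t1") <(echo "$t2") | awk '{
    # fields 1-11: first sample; fields 12-22: second sample
    tot1=$2+$3+$4+$5+$6+$7+$8;        idle1=$5+$6
    tot2=$13+$14+$15+$16+$17+$18+$19; idle2=$16+$17
    dt=tot2-tot1; di=idle2-idle1
    if (dt>0) printf "%s busy %.0f%%\n", $1, 100*(dt-di)/dt
  }'
}

per_core_busy 1   # one line per core, e.g. "cpu0 busy 12%"
```

If only one or two cores show high busy% while the rest sit near zero, the connections are not being spread across the eight processes.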

HAProxy is already under production load.
I have the HAProxy exporter with Grafana telling me the number of requests plus the time it takes to respond.
I also have the value of the backend response time.
I run the following command on each socket file to determine idle time:
echo "show info" | socat - /var/run/haproxy(NUMBER 1-8).sock

Here is a sample output:
Name: HAProxy
Version: 1.5.18
Release_date: 2016/05/10
Nbproc: 8
Process_num: 1
Pid: 16478
Uptime: 0d 4h43m16s
Uptime_sec: 16996
Memmax_MB: 0
Ulimit-n: 40114
Maxsock: 40114
Maxconn: 20000
Hard_maxconn: 20000
CurrConns: 144
CumConns: 3036889
CumReq: 3036889
MaxSslConns: 0
CurrSslConns: 144
CumSslConns: 3035581
Maxpipes: 0
PipesUsed: 0
PipesFree: 0
ConnRate: 208
ConnRateLimit: 0
MaxConnRate: 329
SessRate: 208
SessRateLimit: 0
MaxSessRate: 329
SslRate: 208
SslRateLimit: 0
MaxSslRate: 329
SslFrontendKeyRate: 69
SslFrontendMaxKeyRate: 175
SslFrontendSessionReuse_pct: 67
SslBackendKeyRate: 0
SslBackendMaxKeyRate: 0
SslCacheLookups: 431269
SslCacheMisses: 189418
CompressBpsIn: 0
CompressBpsOut: 0
CompressBpsRateLim: 0
ZlibMemUsage: 0
MaxZlibMemUsage: 0
Tasks: 167
Run_queue: 1
Idle_pct: 51
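
Two numbers in that output are worth extracting per process: Idle_pct (event-loop idle time) and the SSL session cache hit rate derived from SslCacheLookups/SslCacheMisses. A small sketch that parses them out of the "show info" text (the awk parsing and field names are taken from the output above; the loop over sockets is just illustrative):

```shell
#!/usr/bin/env bash
# Pull event-loop idle time and the SSL session cache hit rate
# out of HAProxy "show info" output.
parse_info() {
  awk -F': ' '
    $1=="Idle_pct"        { idle=$2 }
    $1=="SslCacheLookups" { lookups=$2 }
    $1=="SslCacheMisses"  { misses=$2 }
    END {
      printf "idle=%d%%\n", idle
      if (lookups > 0)
        printf "ssl_cache_hit=%.0f%%\n", 100*(lookups-misses)/lookups
    }'
}

# In production you would feed it from each stats socket, e.g.:
#   for n in 1 2 3 4 5 6 7 8; do
#     echo "show info" | socat - /var/run/haproxy$n.sock | parse_info
#   done
# Here, the relevant lines from the sample output above:
parse_info <<'EOF'
Idle_pct: 51
SslCacheLookups: 431269
SslCacheMisses: 189418
EOF
# prints: idle=51% and ssl_cache_hit=56%
```

A ~56% cache hit rate means nearly half of the SSL connections pay for a full handshake, which is likely where much of the 450 ms goes; it might be worth testing whether `option httpclose` in the frontend is forcing a fresh connection (and handshake) per request.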