Using multiple CPU cores: any specific drawbacks?

Our HAProxy is receiving traffic where 80% of requests are HTTP and 20% HTTPS, and we want to move all of the traffic to HTTPS. We did some benchmarking, first with a single core and then with multiple CPU cores, and observed a good performance improvement when using multiple cores. We scanned through the documentation but could not find any specific text about the challenges/issues that could occur, apart from the statement that multi-core use is strongly discouraged. We are a bit concerned after reading that statement and are looking for some pointers to help us make a more informed decision.

This is the config we are using, with multiple cores enabled, on an AWS instance with a 16-core CPU and 32 GB of RAM:

global
        log-send-hostname msg-haproxy-log.example.com

        log 127.0.0.1 local0
        log 127.0.0.1 local1 notice

        maxconn 500000
        user haproxy
        group haproxy
        daemon
        stats socket /var/run/haproxy.socket level admin
        tune.ssl.default-dh-param 2048
        nbproc 9
        cpu-map 1 0
        cpu-map 2 1
        cpu-map 3 2
        cpu-map 4 3
        cpu-map 5 4
        cpu-map 6 5
        cpu-map 7 6
        cpu-map 8 7
        cpu-map 9 8
        stats bind-process 9

defaults
        log     global
        option  dontlognull
        retries 3
        option redispatch
        maxconn 500000
        timeout connect   300000
        timeout client       660000
        timeout server      660000

frontend tcp-in
        mode tcp
        bind *:1883 
        bind *:8883 ssl crt /etc/ssl-certs/primary.pem
        option tcplog
        bind-process 1
        default_backend tcp-backend

frontend tcp-in2
        mode tcp
        bind *:2883
        bind *:9883 ssl crt /etc/ssl-certs/primary.pem
        option tcplog
        bind-process 2 3 4 5
        default_backend tcp-backend2


frontend api
        bind *:80
        bind *:443 ssl crt /etc/ssl-certs/primary.pem
        rate-limit sessions 6000
        monitor-uri /health-check
        mode http
        bind-process 6 7 8 9
        default_backend api-backend

backend api-backend
    balance roundrobin
    mode http
    option httplog
    server  http1 192.168.0.141:8282        check
    server  http2 192.168.0.141:8283        check
    server  http3 192.168.0.141:8284        check
    server  http4 192.168.0.141:8285        check
    server  http5 192.168.0.141:8286        check
    

backend tcp-backend
        option forwardfor except 127.0.0.1
        balance roundrobin
        mode tcp
        option  tcplog
        server  mqtt1 192.168.0.141:3883    check
        server  mqtt2 192.168.0.141:3884    check
        server  mqtt3 192.168.0.141:3885    check
        server  mqtt4 192.168.0.141:3886    check

backend tcp-backend2
        option forwardfor except 127.0.0.1
        balance roundrobin
        mode tcp
        option  tcplog
        server  mqtt1 192.168.0.126:4883        check
        server  mqtt2 192.168.0.126:5883       check
        server  mqtt3 192.168.0.126:6883       check
        server  mqtt4 192.168.0.126:7883       check

+1. Why is the use of an available HAProxy feature discouraged?
We are also concerned that when we soon need to push more traffic through our 1.7.3 HAProxy, a single process will not be able to handle it (the idle pct is converging towards zero; is this a sign that a single core cannot keep up at the current CPU clock frequency? See the snippet below for how we read this value.)
If so, what alternative is recommended, if not multiple processes on a multi-core server?
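
For context, this is how we currently read the idle percentage: a plain "show info" on the admin socket declared in the global section (socat is simply what we happen to use to talk to the socket, and with "stats bind-process 9" above we assume it only reflects process 9):

        echo "show info" | socat stdio /var/run/haproxy.socket | grep Idle_pct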

How do stats behave when nbproc > 1? Are they reported per process (a configuration example for both HTTP and socket stats would be appreciated)?
Would we then need to collect stats from one socket per process and aggregate that data ourselves if desired?
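
To make the question concrete, something like the following is what we imagine would be needed. This is only a sketch based on the process/bind-process keywords we already use above; the socket paths and the 9001-9009 ports are placeholders:

global
        nbproc 9
        # one admin socket per process; we assume each socket only exposes
        # the counters of the process it is bound to
        stats socket /var/run/haproxy-1.socket level admin process 1
        stats socket /var/run/haproxy-2.socket level admin process 2
        # ... one socket per process, up to ...
        stats socket /var/run/haproxy-9.socket level admin process 9

listen stats
        mode http
        stats enable
        stats uri /stats
        # one HTTP bind per process, so every process' stats page is reachable
        bind *:9001 process 1
        bind *:9002 process 2
        # ... one port per process, up to ...
        bind *:9009 process 9

We would then presumably query each socket in turn (e.g. echo "show stat" | socat stdio /var/run/haproxy-1.socket) and sum the counters ourselves, if that is indeed the expected approach.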