Concurrent Connections over sessionid

Hello all

I have an HTTP frontend that routes requests to a particular backend based on the URL.

Now I want the backend to accept only a certain number of clients, and otherwise either send the overflow to the next backend or display an error status page.

So far I have only found how to forward new requests to the next server once a certain number of connections is reached.

But several clients can arrive from the same source IP, so the requests would have to be split based on a header or a cookie.

Can I get some help here?

Best regards

What does the configuration currently look like?

Hey sorry :slight_smile:

Here is the config

global
        log /dev/log    local0
        log /dev/log    local1 debug
        log 127.0.0.1 len 8096 local2
        log-send-hostname
        chroot /var/lib/haproxy
        stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
        stats timeout 30s
        user haproxy
        group haproxy
        daemon
        maxconn 4096
        tune.ssl.default-dh-param 2048

        # Default SSL material locations
        ca-base /etc/ssl/certs
        crt-base /etc/ssl/private
		
        ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
        ssl-default-bind-options no-sslv3 no-tlsv10 no-tlsv11

defaults
        log     global
        mode    http
        option  httplog
        log-format {\"haproxy_clientIP\":\"%ci\",\"haproxy_clientPort\":\"%cp\",\"haproxy_dateTime\":\"%t\",\"haproxy_frontendNameTransport\":\"%ft\",\"haproxy_backend\":\"%b\",\"haproxy_serverName\":\"%s\",\"haproxy_Tw\":\"%Tw\",\"haproxy_Tc\":\"%Tc\",\"haproxy_Tt\":\"%Tt\",\"haproxy_bytesRead\":\"%B\",\"haproxy_terminationState\":\"%ts\",\"haproxy_actconn\":%ac,\"haproxy_FrontendCurrentConn\":%fc,\"haproxy_backendCurrentConn\":%bc,\"haproxy_serverConcurrentConn\":%sc,\"haproxy_retries\":%rc,\"haproxy_srvQueue\":%sq,\"haproxy_backendQueue\":%bq,\"haproxy_backendSourceIP\":\"%bi\",\"haproxy_backendSourcePort\":\"%bp\",\"haproxy_statusCode\":\"%ST\",\"haproxy_serverIP\":\"%si\",\"haproxy_serverPort\":\"%sp\",\"haproxy_frontendIP\":\"%fi\",\"haproxy_frontendPort\":\"%fp\",\"haproxy_capturedRequestHeaders\":\"%hr\",\"haproxy_httpRequest\":\"%r\"} #test
        option  dontlognull
        timeout connect 10000
        timeout client  50000
        timeout server  50000
        errorfile 400 /etc/haproxy/errors/400.http
        errorfile 403 /etc/haproxy/errors/403.http
        errorfile 408 /etc/haproxy/errors/408.http
        errorfile 500 /etc/haproxy/errors/500.http
        errorfile 502 /etc/haproxy/errors/502.http
        errorfile 503 /etc/haproxy/errors/503.http
        errorfile 504 /etc/haproxy/errors/504.http
        errorfile 429 /etc/haproxy/errors/429.http
        option forwardfor


frontend http_frontend
    bind *:80
    mode http

    capture request header Host len 30
    capture request header User-Agent len 200
    capture request header Referer len 800
    capture request header X-Forwarded-For len 20
    http-request add-header X-Forwarded-Host %[req.hdr(host)]
    http-request add-header X-Forwarded-Server %[req.hdr(host)]
    http-request add-header X-Forwarded-Port %[dst_port]


    default_backend www-backend
	
	
	
backend www-backend
    mode http
    balance roundrobin
    option forwardfor
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }
    option httpchk HEAD / HTTP/1.1\r\nHost:localhost
    redirect scheme https code 301 if !{ ssl_fc }
	

backend letsencrypt-backend
    mode http
    server letsencrypt 127.0.0.1:8888
	

frontend https_frontend
    bind *:443 ssl crt /etc/haproxy/certs
    mode http
	
    tcp-request connection accept if { src -f /etc/haproxy/whitelist.lst }
    tcp-request inspect-delay 5s
    tcp-request content accept if { req.ssl_hello_type 1 }

    reqadd X-Forwarded-Proto:\ https

    acl letsencrypt-acl path_beg /.well-known/acme-challenge/
    use_backend letsencrypt-backend if letsencrypt-acl
	
    use_backend 0000-webservice if { req.hdr(Host),regsub(:[0-9]+$,) -i webservice.example.com }
    

backend 0000-webservice
    mode http
    balance source
    server 0000-webservice01 10.0.0.1:80 check
    server 0000-webservice02 10.0.0.2:80 check	

Thank you

You configured balance source. Can you elaborate on why you need source IP persistence? This goes against what you are trying to achieve.

Understanding your exact application and backend server requirements is necessary before I can make suggestions regarding the load balancing.

My problem is that I do not know how to configure this.

I have only found scenarios where I can limit connections per second, not the number of clients.
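For example, what I keep finding are per-IP rate-limit snippets along these lines (just a sketch of the kind of thing I mean, the numbers are made up):

frontend http_frontend
    # track the request rate per source IP and reject clients that send too fast
    stick-table type ip size 100k expire 30s store http_req_rate(10s)
    http-request track-sc0 src
    http-request deny deny_status 429 if { sc_http_req_rate(0) gt 20 }

But that limits the request rate per source IP, which is not what I need.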

I would have to filter by session, but I don’t know what that could look like.

I want a maximum of 10 browsers to be able to connect to Server1 (e.g. 10.0.0.1) at the same time, and the next 10 to go to Server2 (10.0.0.2).

With the current configuration I only have a limit per source IP, which doesn’t help me when multiple clients share the same public IP.

That’s wrong: with the current configuration you are not limiting by source IP. I think you have mixed up some of the concepts here.

First of all, I don’t think you need source IP persistence, so the first thing you should do is remove the balance source configuration. It also seems to be a source of confusion in this case.

Since you are trying to keep the number of connections to each server as low as possible, I’d suggest you go with balance leastconn.

More about the balance keyword in the docs:
https://cbonte.github.io/haproxy-dconv/2.0/configuration.html#4.2-balance

On each server line you would then add a maxconn 10 parameter, which limits the number of connections to a single server to 10. This also means there won’t be more than 10 transactions in flight towards that server, which is what you would like to achieve in the end, right?
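Roughly like this, reusing your 0000-webservice backend (just a sketch, adjust names and limits to your environment):

backend 0000-webservice
    mode http
    balance leastconn
    # at most 10 concurrent connections per server; excess requests wait in the
    # server queue (subject to timeout queue) instead of hitting a busy server
    server 0000-webservice01 10.0.0.1:80 check maxconn 10
    server 0000-webservice02 10.0.0.2:80 check maxconn 10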

Your intention is to avoid having the server work on more than 10 requests simultaneously, correct?

I use leastconn with maxconn on another service; that allows me to prevent clients from switching between the servers.

In the scenario I am asking about now, I don’t really care about requests; I care about sessions.

I have 10 workstations behind one public IP, which they all use to connect to my service.

I want the first 5 workstations (or browsers) to connect to the first server and each additional workstation to go to the second server.

Therefore I think this can only be achieved with a session ID, a cookie or something similar, but I haven’t found anything about it.
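I was imagining something in the direction of cookie-based persistence, maybe like this (a rough guess on my part, the cookie values ws01/ws02 are made up), but I don’t see how that alone would enforce the per-server limit:

backend 0000-webservice
    mode http
    # hand each new client a cookie so it keeps going to the server it landed on
    cookie SERVERID insert indirect nocache
    server 0000-webservice01 10.0.0.1:80 check cookie ws01
    server 0000-webservice02 10.0.0.2:80 check cookie ws02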

Then use balance first instead, and specify maxconn 10 for each server.
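Something like this, based on your backend (a sketch; use maxconn 5 if you only want the first 5 browsers on the first server):

backend 0000-webservice
    mode http
    # "first" fills the first usable server until its maxconn is reached,
    # then spills over to the next one
    balance first
    server 0000-webservice01 10.0.0.1:80 check maxconn 10
    server 0000-webservice02 10.0.0.2:80 check maxconn 10

Note that once all servers are at their maxconn, additional requests are queued until timeout queue expires (and then get a 503); they are not redirected anywhere.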

I think you need to explain what actual problem you are trying to solve, otherwise I’m pretty certain this is not gonna go anywhere.