Respond with errorfile if maxconn reached in backend



Is it possible to send an errorfile to the remote clients if the maxconn of the backend is reached?



The error is 503 and you can specify it with:
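The directive would look roughly like this (the path is illustrative, matching the stock error pages haproxy ships with):

```
# serve a custom error page whenever haproxy generates a 503
errorfile 503 /etc/haproxy/errors/503.http
```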

This fires after timeout queue:
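As a sketch (backend name, address and numbers are assumptions): with a short `timeout queue`, requests that cannot get a server slot are failed with the 503 errorfile instead of waiting indefinitely:

```
backend nodes
    timeout queue 5s                      # give up on queued requests after 5s -> 503
    server S1 192.0.2.10:80 maxconn 100   # per-server connection cap
```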


You can trigger this in your frontend by using some sample fetches:
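One possible sketch, assuming a backend called `nodes` and a limit of 1000: the `be_conn` sample fetch counts the backend's current connections, and routing the overflow to a server-less backend makes haproxy answer with the 503 errorfile:

```
frontend https
    # if the backend already holds 1000+ connections, divert the request
    acl nodes_full be_conn(nodes) ge 1000
    use_backend be_busy if nodes_full
    default_backend nodes

backend be_busy
    # no servers defined: haproxy replies with the configured 503 errorfile
```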




Thank you for your answers. I thought that when maxconn in the backend is reached, haproxy throws a 503 by default. But mine doesn’t. Am I wrong?


What happened is just a timeout on the remote client.


Then it is not related to maxconn.

Configure logging and provide the log line when the client sees the timeout.


Do you have a suggestion for the log config? Thank you!


Basic logging:

 log syslog debug
 log global
 mode http
 option httplog


Hmm, there is no request in the logs when I set the frontend to maxconn 0.

This is my config:

global
    log /dev/log local0
    log /dev/log local1 notice
    log /dev/log local1 debug
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    maxconn 8192
    tune.ssl.default-dh-param 2048

defaults
    log global
    mode http
    retries 3
    option forwardfor
    option httplog
    option redispatch
    timeout http-request 30s
    timeout connect 30s
    timeout client 30s
    timeout server 60s
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/500.http
    errorfile 503 /etc/haproxy/errors/500.http
    errorfile 504 /etc/haproxy/errors/500.http
    stats enable
    stats uri /haproxystats
    stats realm Haproxy\ Statistics
    stats auth u:p

frontend http
    redirect scheme https if !{ ssl_fc }

frontend https
    ssl-default-bind-options no-sslv3 no-tls-tickets
    mode http
    option forwardfor
    option http-keep-alive
    reqadd X-Forwarded-Proto:\ https
    maxconn 1000
    use_backend store_static if { path_beg /media }
    default_backend nodes

backend web_dyn
    mode http
    option forwardfor
    option http-keep-alive
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }
    balance leastconn
    # If the source IP sent 10 or more http requests over the defined period,
    # flag the IP as an abuser on the frontend
    option httpchk HEAD / HTTP/1.1\r\nHost:\
    server S1 check inter 30s maxconn 100

backend web_static
    option forwardfor
    option http-server-close
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }
    http-response set-header Strict-Transport-Security "max-age=31536000"
    balance roundrobin
    server S4 check inter 30s


The haproxy is: HAProxy version 1.7.5-2~bpo8+1, released 2017/05/27, on Debian Jessie.

Strange: even when maxconn is not 0, haproxy doesn’t give me a 503 or anything… I just get a connection timeout and no entry in the logs…


When you set maxconn to 0 on the frontend, haproxy does not accept a single request and everything is queued in the kernel, so nothing can be done in haproxy.

Configure realistic values for the global section and the frontends, and use a low maxconn setting on the backend servers.
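As a sketch of that layering (all numbers and addresses are illustrative): generous limits at the edge, and a low cap only where the bottleneck actually is:

```
global
    maxconn 8192              # process-wide ceiling

frontend https
    maxconn 4000              # well below the global limit

backend web_dyn
    timeout queue 5s          # overflow gets a 503 after 5s instead of hanging
    server S1 192.0.2.10:80 maxconn 50   # the real per-server bottleneck
```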


Yes, I know maxconn 0 isn’t realistic; it was a test scenario.

All I try to achieve is:

I have a HAProxy doing TLS offloading in front of five Apache/PHP web servers. Because of limitations of the application, I want to limit the maximum number of concurrent sessions. Once a client has established a connection and gotten a session, he gets access to the app; all others get a “come back later” message as a 503 errorfile from HAProxy.
It is mission critical to ensure that a user, once he has a session/connection, can reach the backend app servers until the session TTL is over.
Thank you!


Exactly, that’s why you need to configure a low maxconn value on the backend and on the backend servers, BUT NOT on the frontend.

Set maxconn to 0 on backend servers if you want to simulate the maxconn reached case.


Hi Lukas,

Thank you for your time & answer.
In the live scenario we actually limit the frontend to maxconn 1000 to avoid app failure because of overload in the backend. (We have some nasty ionCube PHP code there; we can’t optimize it.)
For better performance we have option http-keep-alive on the backend and use roundrobin. How do we limit an exact number of user sessions in this case?


Hi Lukas,

I tried it and it didn’t work.

in backend:

maxconn 0
server S1 maxconn 0

But I can still load the site and get no 503.
What’s my mistake?


Probably 0 is not a valid number for maxconn.

Just adding the “disabled” keyword to all servers will also lead to a 503; try that.
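For example (the address is an assumption), putting the server into maintenance mode so that no server is available and haproxy falls back to the 503 errorfile:

```
backend web_dyn
    # "disabled" starts the server in maintenance mode; with no server
    # available, haproxy returns the configured 503 errorfile
    server S1 192.0.2.10:80 check disabled
```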


OK, disabled works.

What I figured out is: whatever I set as maxconn on the frontend, the backend automatically shows a tenth of that value, no matter what I put there. How can I change or override this?


You can see this in the stats.


Actually the “maxconn” directive is not supported in the backend section; only per-server maxconn is supported.
I was not aware of this either; I will have to dig into this.

Not sure why the value shown is maxconn/10 (it should be n/a), but it is NOT enforced. So it looks like whatever session limit the stats show for the backend doesn’t matter.
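Given that, a hedged sketch of the per-server route toward the original goal (addresses and numbers are assumptions): cap sessions per server and fail the overflow quickly with the custom errorfile:

```
backend web_dyn
    balance leastconn
    timeout queue 1s                            # overflow is rejected almost immediately
    errorfile 503 /etc/haproxy/errors/503.http  # the "come back later" page
    server S1 192.0.2.10:80 maxconn 100         # per-server session cap (the enforced one)
    server S2 192.0.2.11:80 maxconn 100
```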


So: to make my case work, I have to disable keepalives, right?
How can I make sure that a user who once successfully established a session on the haproxy keeps reaching the web application until his session expires, while the others, above the connection limit of the backend, get the 500/503 errorfile message?