Regarding maxconn parameter in backend for connection queueing


#1

Team,
I would like to achieve a queueing solution by specifying “maxconn” in the backend, but it seems maxconn only applies to concurrent requests.

My requirement is to allow only 100 users/requests; further connections/requests should be queued and released only when one of the established connections is released by the backend server. How do I achieve this?

Should I set maxconn in the frontend as well to restrict and queue the connections?

HAPROXY 1.6.12
Red Hat Linux 7.2 (3.10.0)
backend server is “Weblogic 12c (12.1.3)”

Below are my current settings:

global
    daemon
    maxconn 4096

defaults
    log global
    mode http
    option http-keep-alive
    timeout connect 6000000ms
    timeout client 6000000ms
    timeout server 6000000ms
    timeout queue 60000ms

listen stats
    bind *:9090
    mode http
    stats enable
    stats refresh 5s
    stats realm Haproxy\ statistics
    stats uri /
    stats auth weblogic:weblogic1

frontend http-in
    timeout client 5000ms
    maxconn 8
    bind *:8050
    default_backend servers

backend servers
    balance roundrobin
    cookie prefix nocache
    option prefer-last-server
    server srv1 : check cookie srv1 maxconn 4

Regards,
Vel


#2

[quote=“dvelan, post:1, topic:1320”]
my requirement is to allow only 100 users/requests and further connections/requests to be queued up and released when any of the connected connections are released from backend server. How do I achieve this?[/quote]

That’s exactly what maxconn on the server line is for. I’m not sure what you are trying to achieve that isn’t covered by maxconn?

No, maxconn at the frontend or process level has different behavior. Read the documentation for more details.
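A minimal sketch of per-server queueing, with the limit of 100 from your first post (the backend address 10.0.0.1:7001 is a placeholder, not from your config):

```haproxy
backend servers
    balance roundrobin
    timeout queue 60s   # how long a request may wait in the queue before haproxy gives up
    # At most 100 concurrent connections are sent to this server;
    # excess requests wait in haproxy's per-server queue.
    server srv1 10.0.0.1:7001 check maxconn 100
```

Requests beyond the 101st that outlive “timeout queue” are answered with a 503.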


#3

Thanks for your response.

For testing purposes, the behaviour I observe when setting “maxconn 4” is that queueing does happen once there are more than 4 concurrent connections, but the queued connections are released immediately after the first 4 connections are established; it does not check whether the back-end WebLogic 12c has any capacity left to handle the load.

My requirement is to allow a maximum of 4 connections, after which all further connections should be queued and released only when one of the HTTP sessions among the first 4 is logged out/freed.

This is for a PeopleSoft application, which uses a cookie to maintain the session with the client; I am using haproxy as a proxy in front of WebLogic to do the queueing.

It seems to me that haproxy does not check the back-end’s established connections before releasing the queue.

Can you please let me know on what basis queued connections are released?

Is there any other parameter required to achieve my requirement?

Are there any limitations when using HAProxy on Red Hat Linux 7.2 with a back-end WebLogic server?

Kindly assist me; your help is much appreciated.


#4

The basis is in-flight HTTP transactions (so haproxy is waiting for your backend to complete the responses, but it isn’t counting idle keep-alive sessions).

If you don’t want this, you will have to disable HTTP keep-alive on the backend, for example by using “option http-server-close”. But that would only increase the load on haproxy and your backend; I don’t see how that would make sense.
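For illustration, a sketch of that setting (the server address is a placeholder): with “option http-server-close”, haproxy closes the server-side connection after each response, so only requests actually being processed count against maxconn:

```haproxy
backend servers
    option http-server-close   # close the server-side connection after each response
    server srv1 10.0.0.1:7001 check maxconn 4
```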


#5

Thanks.
By default in PeopleSoft, WebLogic maintains a persistent session in the Java heap from the moment the user authenticates until the user logs out.
I know how many users my single back-end WebLogic instance can support (e.g. 100). Beyond 100 user connections, WebLogic starts throwing out-of-memory and thread errors unless some of the first 100 users have logged out completely; once the memory issue starts, WebLogic will not accept new connections, and already-connected users are also affected. To avoid this, we want to implement a queueing solution using HAProxy. Basically, is there any way for haproxy to count even idle HTTP keep-alive connections for queueing?

In PeopleSoft there are 2 scenarios where the session is terminated: when the user logs out explicitly, and when the 20-minute idle timeout occurs.

Is there any other way I can achieve my requirement using haproxy?

Your response is much appreciated.

Regards,
Vel


#6

Are you saying your application allocates memory and only releases it when the idle HTTP session is closed? Then your application is horribly broken, and you ought to fix that.

Like I said, to disable HTTP keep-alive from haproxy to your server, use “option http-server-close”.


#7

To maintain user session info, WebLogic requires memory from the Java heap; that heap memory is released when the session is logged out.

Connection queueing now occurs when I set maxconn in the listen block (frontend).

When connections are queued in the OS kernel, is there any way to display a custom message to the user? E.g.: you are in the queue, please wait.


#8

If your WebLogic logs the users out when the TCP connection closes, then you can use the configuration I proposed (http-server-close + server maxconn).

If it doesn’t, I don’t see how haproxy would be able to know how many sessions are currently allocated in the backend, so there is no way for haproxy to queue.

No, connection queueing is done in the backend on the server line. When you specify maxconn in frontends or on bind lines, the queueing happens in the kernel (which you only want when haproxy itself is overloaded, NOT when your backend is overloaded).

[quote=“dvelan, post:7, topic:1320, full:true”]When connections are queued in the OS kernel, is there any way to display a custom message to the user? E.g.: you are in the queue, please wait.
[/quote]

If the connection is stuck in the kernel queue, by definition the application (haproxy) doesn’t know about it, so it is not possible to emit any errors.

With queueing in haproxy, when timeout queue strikes:
https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#4-timeout%20queue

haproxy will send a 503 error, which you can customize:
https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#4.2-errorfile
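Putting the pieces together, a sketch of the suggested setup (server address, error-file path, and the 100-connection limit are placeholders, not taken from your config); a request that waits longer than “timeout queue” gets the custom 503 page:

```haproxy
defaults
    mode http
    timeout queue 60s   # max time a request may wait in the per-server queue

backend servers
    option http-server-close   # count only in-flight requests against maxconn
    # errorfile expects a file containing a complete raw HTTP response
    errorfile 503 /etc/haproxy/errors/queue-full.http
    server srv1 10.0.0.1:7001 check maxconn 100
```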


#9

Thanks very much for the clarification.

Regards,
Vel