Is it possible to choose a traffic path based on whatever is available at the moment, with priority?

Hello guys

I have a frontend and a backend that look like this:

frontend httpPort
    mode http
    bind 0.0.0.0:80

    default_backend httpPortTcpForwarder

backend httpPortTcpForwarder
    mode http
    server first_to_try  127.0.0.1:5588 
    server second_to_try 10.0.0.2:5588 
    server last_to_try 127.0.0.1:8080

For my application, first_to_try and second_to_try are closed most of the time (layer 4, connection refused), but open on certain occasions. I’d like haproxy to attempt to connect to first_to_try; if it gets “connection refused”, move on to second_to_try and try that; and if that is refused as well, fall back to the last server (last_to_try). Is that possible to achieve?

And if you’re wondering why… let’s just call it a blessing of letsencrypt.

Just for fun… I implemented this algorithm in rust, and it seems to be working.
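Roughly, the idea is this (a simplified sketch of the approach, not my actual code; the addresses are just the ones from my haproxy config above):

use std::io;
use std::net::{Shutdown, TcpListener, TcpStream};
use std::thread;

// Destinations to try, in priority order.
const UPSTREAMS: &[&str] = &["127.0.0.1:5588", "10.0.0.2:5588", "127.0.0.1:8080"];

fn main() -> io::Result<()> {
    let listener = TcpListener::bind("0.0.0.0:80")?;
    for client in listener.incoming() {
        let client = client?;
        thread::spawn(move || {
            // Decide the destination per connection: the first upstream
            // that accepts the TCP connection right now wins.
            if let Some(upstream) = UPSTREAMS
                .iter()
                .find_map(|addr| TcpStream::connect(addr).ok())
            {
                let _ = pipe(client, upstream);
            }
            // If none of them accepted, the client connection is simply dropped.
        });
    }
    Ok(())
}

// Copy bytes in both directions until either side closes.
fn pipe(client: TcpStream, upstream: TcpStream) -> io::Result<()> {
    let (mut cr, mut cw) = (client.try_clone()?, client);
    let (mut ur, mut uw) = (upstream.try_clone()?, upstream);
    let t = thread::spawn(move || {
        let _ = io::copy(&mut cr, &mut uw); // client -> upstream
        let _ = uw.shutdown(Shutdown::Write);
    });
    let _ = io::copy(&mut ur, &mut cw); // upstream -> client
    let _ = cw.shutdown(Shutdown::Write);
    let _ = t.join();
    Ok(())
}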

Please don’t make me use it :slight_smile:

Is there a way to do this in haproxy?

Something like this might work for you…

backend letsencrypt-blessing
    # This is what I think you are looking for.
    # This will always try the first one that is available.
    balance first
    mode http
    # Add "check" so that HAProxy knows whether a backend server
    # can actually accept the next connection or not.
    server first_to_try  127.0.0.1:5588 check
    server second_to_try 10.0.0.2:5588 check
    server last_to_try 127.0.0.1:8080 check

The downside to this setup is that, once first_to_try is up, every request will be delivered to it until it either goes down again (layer 4 connection refused) or reaches its maxconn value (which I highly recommend setting if you use this setup).
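For example (the maxconn numbers here are just placeholders, tune them for your own workload):

backend letsencrypt-blessing
    balance first
    mode http
    # With "balance first", a server keeps receiving traffic until it is
    # down or has reached its maxconn; only then is the next one used.
    server first_to_try  127.0.0.1:5588 check maxconn 50
    server second_to_try 10.0.0.2:5588 check maxconn 50
    server last_to_try   127.0.0.1:8080 check maxconn 200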

For other ideas, maybe have a look at the “balance” section of the HAProxy Configuration Manual (version 2.6.7-5).

Thank you for the answer. I really appreciate it.

Actually, I had already tried something like this before, and it doesn’t work. My assessment is that haproxy keeps a stateful view (programmatically speaking) of each possible server/destination instead of attempting a connection per request. I had already consulted the documentation on check before posting the question, and the worst part was that even passive checks require active checks to be working, which I found very weird… maybe it’s for performance reasons, I don’t know.
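For reference, the kind of server line I was reading about looks roughly like this (syntax from my reading of the docs, so double-check it); the passive "observe" part only reacts to errors, while the active check is still what brings the server back up:

backend httpPortTcpForwarder
    mode http
    # passive observation on top of an active check
    server first_to_try 127.0.0.1:5588 check observe layer4 error-limit 1 on-error mark-down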

When a connection comes in, I can see in the logs that haproxy has already cached the “backend is available / not available” state:

[ALERT] 003/071848 (2103942) : backend 'httpPortTcpForwarder' has no server available!

even though the backend is available at that moment.

And when starting haproxy (with only two destinations configured, btw), I can see:

[WARNING] 003/072349 (2105506) : Server httpPortTcpForwarder/httpPortTcpForwarderServer1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[WARNING] 003/072350 (2105506) : Server httpPortTcpForwarder/httpPortTcpForwarderServer2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 155ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.

Notice how it decides up front that there are no active or backup servers left, which seems to be the basis of its decision.

If you check my (dumb) rust example: for every incoming connection, we pick the destination by trying the candidates in order and using whichever is available at that moment. The backend you provided looks as if it does exactly that, but for some reason it uses the stored state to make that decision instead.

Is there a way to make the decision based on the state at the moment instead of using the stored state?

Spitting out random thoughts (it’s late here)…

If HAProxy says no server is available, that means none of your servers are accepting requests. I assume you took last_to_try offline for testing purposes? If not, perhaps there is another problem?

HAProxy won’t cache the servers’ state if it isn’t instructed to check them, but then it won’t check their state at all and will just attempt to deliver all traffic to the first server, regardless of whether it’s accepting connections or not. If the attempt times out, HAProxy returns an error to the client (I think a 504).

Your rust example appears to only look at TCP (I think? I barely know Python and only dabble in scripting, so other languages look foreign to me). Just a thought: if you only care about the L4 response (TCP) and not the L7 response (HTTP), perhaps set mode tcp and see if that achieves what you are looking for.
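Untested, but roughly something like this (your frontend on :80 would need mode tcp as well):

backend letsencrypt-blessing
    mode tcp
    balance first
    server first_to_try  127.0.0.1:5588 check
    server second_to_try 10.0.0.2:5588 check
    server last_to_try   127.0.0.1:8080 check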

Side note: The documentation link was specifically to the “balance” explanation. My thought was that maybe one of the other balancing algorithms would make more sense.

Thank you again. I really appreciate you trying to help.

In my haproxy config I actually do use tcp… and I prefer tcp over http. I’m trying everything I can: switching through all the possibilities and checking each time whether letsencrypt is able to do the verification in a dry run.

Yes, you’re right. I’m testing with just two servers now, and I can see that the second server never receives traffic.

So it seems that I need something haproxy just cannot do… it will either try only the first server and ignore the second, or cache the state when check is enabled. Damn!

I’m by no means an expert, but this is my understanding of it. My only sources are the docs and about 2 years of experience. Perhaps others will chime in with better news. :slightly_smiling_face: