Active-Passive stickiness not working as blogged

As described here:
http://www.haproxy.com/blog/emulating-activepassing-application-clustering-with-haproxy/

Tested on all versions from Debian Backports over the last ~6 months… currently this is:

 # haproxy -v
 HA-Proxy version 1.7.5-2~bpo8+1 2017/05/27

My setup:

HAProxy runs on each of the same servers as gearmand, on a different port, acting as frontend for it:

peers LB
        peer    gearman-jobserver-euc1-01               172.31.18.242:8999
        peer    gearman-jobserver-euc1-02               172.31.7.104:8999

backend BE_gearman-jobserver_staging
        mode                    tcp
        fullconn                10000
        email-alert             mailers sendmail
        email-alert             level   alert
        email-alert             from    haproxy@gearman-jobserver-euc1-01.xxx
        email-alert             to      sysops@xxx
        timeout                 client 60s
        timeout                 client-fin 60s
        timeout                 server 60s
        timeout                 tunnel 1h
        option                  tcp-check
        stick-table             type integer size 1 nopurge peers LB
        tcp-check               send STATUS\r\n
        tcp-check               expect string . comment Minimum\ empty\ response.
        stick on                dst_port
        server                  gearman-jobserver-euc1-01               172.31.18.242:40025 check inter 2s fastinter 1s downinter 20s fall 3 rise 2
        server                  gearman-jobserver-euc1-02               172.31.7.104:40025 check inter 2s fastinter 1s downinter 20s fall 3 rise 2 backup

and the table shown on the admin socket looks fine:

root@gearman-jobserver-euc1-01:~# echo "show table BE_gearman-jobserver_production" | socat unix:/run/haproxy/admin.sock -
# table: BE_gearman-jobserver_production, type: integer, size:1, used:1
0x5565daacc984: key=50005 use=0 exp=0 server_id=1

But the problem is that if server01 goes down, the entry stays on “server_id=1” and does not switch to server_id=2 as expected.

Did I miss some requirement that is not mentioned in the blog post and that I cannot find in the documentation, or is this a bug?

Thanks and best regards

Reiner

Did you ever figure this out? I am trying to set up an active/passive backend in a similar way. If the active server goes down, traffic correctly fails over to the backup, but I want it to stay on the backup even after the failed server comes back up; instead it goes back to the newly restored server.

There was no answer here or by mail, so I still have this problem; luckily the service is very stable, so we only hit the switching problem during maintenance, where I can work around it manually.

As far as I remember, I figured out that the server lines must be listed in the same order on all nodes so that they get the same server_id in the peers synchronization, which is (was?) not documented.

In your case it sounds more like the problem is that failover works, but traffic does not stay on the backup server.
=> Do you have stickiness activated? Then at least the active sessions should stay on it (and if that is not wanted, there is a non-stick option for it).
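Roughly like this (just a sketch based on my own config above, untested for your setup; the server names and addresses are placeholders, and the non-stick keyword on a server line is the part I mean):

        stick-table             type integer size 1 nopurge peers LB
        stick on                dst_port
        server                  app-primary 192.0.2.10:40025 check
        server                  app-backup  192.0.2.11:40025 check backup
        # a server whose sessions should not be recorded in the stick table:
        # server                app-other   192.0.2.12:40025 check backup non-stick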

As far as I can see, either the table size needs to be > 1 (or, more specifically, the number of different keys in the table + 1), or you must not set nopurge. If both are used, I don’t see how the table can be updated.
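A quick sketch of what I mean, based on the backend above (untested; pick one of the two changes):

        # either drop nopurge so the single entry can be overwritten:
        stick-table             type integer size 1 peers LB
        # or keep nopurge but make the table bigger than the number of keys:
        stick-table             type integer size 2 nopurge peers LB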

It appears the first draft of the blog post used a table size > 1 and was later updated without testing. Also see the comments below the blog post.

@Baptiste can you confirm this and maybe update the blog post?

@Reiner030 notice that you stick on the dst_port, not the dst. Try increasing the table size by setting it to the total number of possible dst_port values in your configuration plus one.

Yes, the stick table is a key → value store, and the value is the server ID. Therefore either the servers must be in the same sequence on all haproxy instances, or the ID must be specified manually so that it matches.
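For example, pinning the IDs explicitly (a sketch based on the config above; the id keyword is the relevant part):

        server                  gearman-jobserver-euc1-01               172.31.18.242:40025 id 1 check inter 2s fastinter 1s downinter 20s fall 3 rise 2
        server                  gearman-jobserver-euc1-02               172.31.7.104:40025 id 2 check inter 2s fastinter 1s downinter 20s fall 3 rise 2 backup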

If you think the documentation could be improved, please make a concrete suggestion (or, even better, send a doc patch based on CONTRIBUTING). Developer-provided documentation often lacks details that may seem obvious to the developer but aren’t at all clear to users. That’s why we rely on users making concrete suggestions.