Automatic failover without disrupting visitors


I’m very new to HAProxy and having to learn on the fly, because there was an urgent need to switch from nginx to HAProxy (nginx wouldn’t allow the Host header to be changed based on the upstream server).

With nginx, I had:

nginx (public) > nginx (internal) >[private network]> remote server1
                                                    > remote server2

And the basic config was:

server max_fails=5 fail_timeout=1s;
server max_fails=5 fail_timeout=1s backup;

When a visitor made a request and server1 went offline, the request would be served from server2 within a few seconds, without any disruption or any action required by the visitor.

However, with HAProxy, the request seems to hang for 20 seconds and then the visitor has to reload the page. I’d like HAProxy to behave similarly and seamlessly switch over to the backup. Could you please advise?
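From reading the docs, I suspect the 20-second hang is the retry behaviour: with timeout connect 5s and HAProxy’s default of retries 3, a dead server can consume 4 attempts × 5 s = 20 s before the request fails, and retries stay on the same server unless redispatching is enabled. A sketch of what I think the defaults section may need (directive names are from the HAProxy configuration manual; the values are guesses):

        defaults
                retries 3
                option redispatch      # allow a retry to be sent to another server
                timeout connect 2s     # give up on a dead server sooner

I’m not certain this is the right combination, which is partly why I’m asking.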

With HAProxy, the setup is now:

nginx (public) > haproxy (internal) >[private network]> remote server1
                                                      > remote server2

Here is the config I’m testing:

global
        log /dev/log    local0
        log /dev/log    local1 notice
        chroot /var/lib/haproxy
        stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
        stats timeout 30s
        user haproxy
        group haproxy

        ca-base /etc/ssl/certs
        crt-base /etc/ssl/private

        ssl-default-bind-options no-sslv3

defaults
        log     global
        mode    http
        option  httplog
        option  dontlognull
        timeout connect 5s
        timeout client  50s
        timeout server  50s
        timeout http-request 10s
        timeout http-keep-alive 60s
        timeout tunnel 50s

frontend group1
        bind *:8080
        stats enable
        stats uri /stats
        stats refresh 10s
        mode http
        http-response set-header X-Server %s
        default_backend group1

backend group1
        mode http
        balance first
        option forwardfor
        http-send-name-header Host
        # server names/addresses redacted
        server server1 <server1-addr>:443 id 1 weight 1 ssl check verify none
        server server2 <server2-addr>:443 id 2 backup weight 2 ssl check verify none

The remote servers are in different geographic regions but I haven’t seen connect times longer than 200ms.
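Given that, I assume timeout connect could be set much lower than 5 s, and that faster health checks would mark a dead server down sooner so new requests skip it entirely. A hedged sketch of what I’m considering for the backend (the check intervals are guesses):

        backend group1
                timeout connect 1s                       # connects normally finish in <200 ms
                default-server inter 1s fall 2 rise 2    # mark a server down after ~2 failed checks

Does that sound like the right direction?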