1.8.x Resolvers not working when multiple backend servers share the same hostname / IP?


I commented on a slightly different issue about this but haven’t received a response for a while now, so I thought it worth creating a new issue with my specific problem to help anyone else.

Basically, it seems that the Resolvers functionality doesn’t accommodate multiple servers in the same backend sharing the same hostname / FQDN / IP but using different ports, e.g.:

resolvers serverdns
    nameserver dnsmasq

backend servers
    mode http
    balance roundrobin
    cookie EXTERNALSERVERID insert indirect nocache
    option forwardfor
    option httpchk /health

    server server1 server1.lab.local:10280 cookie server1 check resolvers serverdns ssl ca-file /etc/haproxy/chain.crt
    server server2 server1.lab.local:10281 cookie server2 check resolvers serverdns ssl ca-file /etc/haproxy/chain.crt
    server server3 server1.lab.local:10282 cookie server3 check resolvers serverdns ssl ca-file /etc/haproxy/chain.crt

With this config only one server ever comes UP; the other two are in MAINT. The logs just say there’s no IP:

haproxy[32393]: Server servers/server1 is going DOWN for maintenance (No IP for server ). 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.

This config works fine on 1.7.x, so I was wondering whether a config change is required when moving from 1.7 to 1.8? If not, is there a way I can get this working? This is blocking us from upgrading to 1.8.


I’m pretty sure @Baptiste did read the other replies in the thread:

Hi @lukastribus. @Baptiste did indeed reply, but that thread hasn’t been active for around 3 months, so I don’t know if it was a separate issue that was resolved elsewhere.

To be honest, I just wasn’t sure if I was doing anything wrong, as I’d have thought that having multiple backend servers on the same hostname but different ports must be pretty common?

Here’s some sanitized config of what I’m doing. It would be good to know if it’s incorrect in any way.

global
    log local2 info
    maxconn 2000
    tune.ssl.default-dh-param 2048
    tune.maxrewrite 16384
    tune.bufsize 32768
    user admin
    group admin
    stats socket /etc/haproxy/haproxysock level admin

defaults
    log global
    mode http
    compression algo gzip
    compression type text/html text/plain text/javascript application/javascript application/xml text/css image/png font/woff font/woff2
    default-server inter 10s init-addr last,libc,none resolve-prefer ipv4
    option httplog
    option dontlognull
    option redispatch
    option forceclose
    option forwardfor
    retries 3
    timeout connect  5000
    timeout client  160000
    timeout server  160000

resolvers serverdns
    nameserver dnsmasq

frontend em
    bind *:80
    bind *:443 ssl crt /etc/haproxy/server.lab.local.pem

    acl https ssl_fc
    reqadd X-Forwarded-Proto:\ https if https
    redirect scheme https if !https

    default_backend server_backend

backend server_backend
    mode http
    cookie EXTERNALSERVERID insert indirect nocache
    option httpchk /health
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request set-header X-Forwarded-Proto https if { ssl_fc }

    server server1 server.lab.local:8280 cookie server1 check resolvers serverdns ssl ca-file /etc/haproxy/chain.crt
    server server2 server.lab.local:8280 cookie server2 check resolvers serverdns ssl ca-file /etc/haproxy/chain.crt

listen stats
    bind *:1936
    mode http
    timeout client 160000
    timeout connect 5000
    timeout server 160000
    stats uri /
    stats realm HAProxy\ Statistics
    stats admin if TRUE

Hi guys,

This is the expected behavior: HAProxy now avoids reusing the same resolved IP address across servers in a backend.

What is your use case?

Hey @Baptiste

My use case is having several backends on the same IP but a different port. Is that atypical?


Do you mean several servers in the same backend but on different ports?
Are you delivering the same application on each port?

That’s my understanding of this use-case, yes.

Please see the previous thread:

I believe we should consider “IP:port” unique, not only “IP”.

Yes to both :slight_smile: Several servers of the same application running on one box on different ports, for an HA setup.

We are using the same method.

Example Application-Port configuration

With HAProxy 1.8, the SRV record resolution gives 3 servers, but it should give 4.
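
The SRV-based setup being described is not shown in full here. A minimal sketch of that kind of configuration, with names borrowed from the config posted later in the thread and a typical Consul DNS address (both are assumptions on my part), might look like:

    resolvers consul
        nameserver consul 127.0.0.1:8600

    backend app_backend
        mode http
        # the SRV record is assumed to return 4 targets that share one
        # IP and differ only by port; stock 1.8 then leaves only 3 UP
        server-template app 4 _test._tcp.service.consul resolvers consul check init-addr none

With SRV records the port comes from the record itself, which is why no port is given on the server-template line.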

Hi guys,

I think I understand the problem.

We introduced duplicate-IP detection in HAProxy 1.8. The main purpose was to prevent a situation where someone provisions 20 servers in HAProxy that all share the same hostname and all of them come up enabled by default.

The main issue I see currently is that each server runs its own resolution atomically, independently of the others.

I see a few ways to work around this problem, but I need to study the different options and their impact.
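
For reference, a minimal sketch of the pattern that trips this duplicate-IP detection, using the names and ports from the original report (the nameserver address is hypothetical, since the posted config omits it):

    resolvers serverdns
        nameserver dnsmasq 127.0.0.1:53

    backend servers
        mode http
        server server1 server1.lab.local:10280 check resolvers serverdns
        server server2 server1.lab.local:10281 check resolvers serverdns
        server server3 server1.lab.local:10282 check resolvers serverdns

All three names resolve to one IP, so under stock 1.8 only the first server keeps the address and the other two are put in MAINT with “No IP for server”.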


Awesome. Many Thanks @Baptiste. What’s the best way to follow this and get updates?


Please find 4 patches attached to this email.

Could you please apply them and tell me if the new behavior is as expected?

(you must enable the new flag resolve-accept-dup-ip on the server line).
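
Applied to the original example, the server lines would then look something like this (the flag name is the one used in this patch set; it is not final, and the nameserver detail is omitted as in the original config):

    server server1 server1.lab.local:10280 check resolvers serverdns resolve-accept-dup-ip
    server server2 server1.lab.local:10281 check resolvers serverdns resolve-accept-dup-ip
    server server3 server1.lab.local:10282 check resolvers serverdns resolve-accept-dup-ip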


Well, download the latest code from git, put the patches in the same directory, run “git am *.patch”, then compile as usual.

I just typically install from the IUS repo. What would be the best way to get the patches in?

Could you send me these patches?

Hi Mustafa,

If you are on the ML, you can find them there: https://www.mail-archive.com/haproxy@formilux.org/msg30461.html

If not, please send me your mail address.

(just in case, I tried to attach them here, hopefully this will work…)

Hey @Baptiste

Managed to get this working today with the latest checkout from github and can confirm that it works like a charm!


Great, thanks for the feedback!

I’ll check with maintainers to integrate the patches in mainline.

Hi @Baptiste,
I’m using server-template; this is my configuration:

    default-server init-addr none resolvers consul resolve-prefer ipv4
    server-template server 4 _test._tcp.service.consul check resolve-accept-dup-ip

This did not work. I think you patched only the server keyword. Right?

Hi Mustafa,

Nope, I supposedly patched everything, but I might have messed up.

Please note this is not the definitive patch and the option name will change.

I’ll keep you updated ASAP with a new set of patches.

