Trouble with DNS Resolvers sticking to single IP address


If anyone wants to try out the 1.8-dev2 DNS resolvers, I have set up a Docker demo:

You would probably need to reconfigure the hard-coded IP addresses. I will get around to making them environment-variable based soon.


Asking @willy to release 1.8-dev3, so that all those changes can be tested more easily in the field.


Asking myself the same question.

From the Docker Docs:

To bypass the routing mesh, you can start a service using DNS Round Robin (DNSRR) mode, by setting the --endpoint-mode flag to dnsrr. You must run your own load balancer in front of the service. A DNS query for the service name on the Docker host returns a list of IP addresses for the nodes running the service. Configure your load balancer to consume this list and balance the traffic across the nodes.
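The quoted behaviour is easy to check yourself. A minimal Python sketch (the service name `api` is a placeholder for whatever your dnsrr-mode service is called; the snippet below demonstrates it against `localhost` only):

```python
import socket

def resolve_ips(name, port=80):
    """Return the sorted, de-duplicated IPv4 addresses a name resolves to."""
    infos = socket.getaddrinfo(name, port,
                               family=socket.AF_INET,
                               type=socket.SOCK_STREAM)
    return sorted({info[4][0] for info in infos})

# On a Docker host, with a dnsrr-mode service (hypothetically named "api"),
# resolve_ips("api") would return one address per running task.
print(resolve_ips("localhost"))
```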

How do I actually do that with HAProxy?


Well, either you set multiple server lines pointing at the same hostname (address and port shown here as placeholders):

backend myapp

server s1 myapp:80 check resolvers mydns

server s2 myapp:80 check resolvers mydns

server s3 myapp:80 check resolvers mydns

Or you use the server-template directive:

backend myapp

server-template s 3 myapp:80 check resolvers mydns

Adjust the number of servers to your needs.

In each case, the resolvers should turn on one server per IP found in the response.
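For completeness, the `mydns` resolvers section referenced above has to be declared as well. A minimal sketch, assuming Docker's embedded DNS server at 127.0.0.11 (the timeout and hold values are illustrative, not recommendations):

```
resolvers mydns
  nameserver docker 127.0.0.11:53
  resolve_retries 3
  timeout retry   1s
  hold valid     10s
```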


Baptiste, I tried using multiple server lines, but the requests are not balanced evenly between the servers.

I built an example using Docker and docker-compose. The backend servers count each request they receive and print the number of requests received after 60 seconds.


The output is

api_4      | f8e1414ea551 0
api_1      | 860eb040651c 0
api_2      | 6af96d901ea8 179
api_5      | 1f15abd0d461 60
api_3      | 271ae04ff5cc 60

One API server receives 179 requests, two receive 60 each, and the other two receive none.

Is it possible to round robin to servers which were resolved with DNS?


Did you start 5 instances of ‘api’ service?


Yes, five instances of the API service, using:

docker-compose up --scale api=5



You’re missing the “resolvers docker” statement on your server-template line.

With this enabled, I have the following result:

docker-compose up --scale api=5

Starting debug_api_1 …

Starting debug_api_1 … done

Starting debug_api_2 … done

Starting debug_api_3 … done

Starting debug_api_4 … done

Starting debug_api_5 … done

Attaching to debug_haproxy_1, debug_api_1, debug_api_2, debug_api_3, debug_api_4, debug_api_5

api_3 | cd51492adab9 63

api_4 | f5532b40ea80 61

debug_api_3 exited with code 0

debug_api_4 exited with code 0

api_2 | a81602129221 61

api_1 | f8202d903d1b 61

debug_api_2 exited with code 0

debug_api_1 exited with code 0

api_5 | 02f56990bbd4 62

debug_api_5 exited with code 0


You are correct @Baptiste. Apologies!


Thank you so much @z0mb1ek and @baptiste for explaining this. I never would have guessed I need multiple server lines!

I’ve spent all weekend trying to figure out why HAProxy wouldn’t load balance my Docker service, even though simple curls show it cycling among the different IPs that my service name resolves to.

In my case I don’t know how large my service will be scaled. How do I know how many duplicate server lines to use? Is there any downside to using whatever maximum I expect?

What is the upcoming better way to do this? Is there a GitHub issue I can follow or a blog post explaining it?