Trouble with DNS resolvers sticking to a single IP address

A bit of context to start with. I have HAProxy set up as a public-facing endpoint for our AWS services. HAProxy forwards requests to an internal AWS ELB (Elastic Load Balancer). As for why we have HAProxy in front of an ELB: long story and off topic (ELBs don’t support percentage-based canaries). :slightly_smiling:

The way AWS ELBs work, at a high level, is that they supply multiple IP addresses through DNS for a given host name, and clients normally round-robin through these. Clients DO need to re-resolve the hostname fairly regularly, as the ELB will scale and move to different IP addresses as part of normal operation. We previously had issues with this on HAProxy 1.5, as the IP address would be locked in after HAProxy started. We switched to 1.6.2 and leveraged the new DNS resolvers section, and that problem went away. This is how I know my configuration is at least talking to DNS correctly and updating when the ELB nodes move.

Now, my problem. Through monitoring in AWS it’s evident that my HAProxy instance is still being somewhat sticky about the IP address. It seems to route all traffic to 1 of the 2 IP addresses that come back from DNS. If that IP address becomes unavailable, it will gladly update and use a new valid IP address, but in the meantime it sticks to just 1 of the available addresses. The result is that it’s not spreading load across the ELB nodes like we need it to.

I tried adding the “hold valid 10s” entry to my resolvers section, but this didn’t seem to fix the problem. Is there something wrong with my configuration or am I misunderstanding how resolution occurs? (current configuration below along with dig results for the hostname)

Thanks for any help or pointers!
-Todd Feak


These are just the parts of the config that seem relevant. Let me know if you need more.

resolvers vpcdns
  nameserver dns1 172.16.0.2:53
  hold valid 10s

frontend https-in
  bind *:443
  mode tcp
  option tcplog
  default_backend https-out

backend https-out
  balance roundrobin
  mode tcp
  option httpchk GET /healthcheck
  default-server fall 3 rise 5 inter 5s fastinter 1s

  server main-lb elb.hostname:443 check resolvers vpcdns port 8080 weight 100
  server canary-lb canary-elb.hostname:443 check resolvers vpcdns port 8080 weight 0

Dig results for the hostname

;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 22646
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;elb.hostname. IN A

;; ANSWER SECTION:
elb.hostname. 60 IN A 172.16.1.21
elb.hostname. 60 IN A 172.16.2.70

;; Query time: 9 msec
;; SERVER: 172.16.0.2#53(172.16.0.2)
;; WHEN: Fri Feb 26 19:27:05 2016
;; MSG SIZE rcvd: 103

Hi rfeak -

You’re running into a limitation that I’ve experienced myself: HAProxy’s DNS resolution takes the first address in a DNS response, and sticks to it until it’s invalid. I’ve raised questions privately with the HAProxy devs about changing that behavior, but for now it’s not on the roadmap.

I don’t have a great answer for you. At my company, we have a Nagios check that alerts us when a new set of IP addresses has become available, and we then fix the HAProxy configs and reload. It’s not super great.

Best of luck -

Andrew

Hey guys,

This behavior is there for a simple reason: the current DNS resolution in HAProxy is purposely designed for high availability, i.e. to follow a server’s IP address when it changes.

Once we launched this feature, we saw a new need from the community: having “volatile” backends, where servers can be added based on the number of IPs returned by a DNS resolution.
This is not as trivial as you may think, because of HAProxy’s internal design. We’re still thinking about ways to do it, but we will certainly have this type of feature at some point.


Hi All,

Great product - thank you. I’m using v1.6.2.

I’m posting to provide some further information and context around the impact of this behaviour in some scenarios. Note, I’m not concerned with populating backend servers based on the number of IPs in a response, just the current handling for a single server.

In my case I’m using Docker containers with a custom HAProxy Alpine-based image, which provides the ‘front door’ to a 40-odd-container stack (across three environments). We’re using Rancher to orchestrate the whole thing. If a backend server fails in some way, Rancher spins up a replacement, updates DNS, and HAProxy catches the change. Each backend consists of a single ‘server’, which is actually a Rancher service composed of many containers. All good, except for two things:

  1. Rancher round-robins its DNS responses to spread load over a service’s containers. HAProxy ignores this and won’t switch IPs as long as the originally resolved IP is still somewhere in the DNS response. I appreciate this may be for reasons of stability, and that’s OK in my case (even though I don’t need persistence), but it seems to lead to the next issue.

  2. If a resolved server fails L4 checks with connection refused and is marked down, the other IP address(es) in the response are still ignored. Thus I have a down server and a valid alternative, and yet my backend (and thus my service) is down. DNS resolution at that point seems to stop completely (I see no requests). That means that even if I intervene manually and restart the backend service, and its containers get new IPs, HAProxy is still trying to talk to the old address.

If you feel there’s anything here that might be resolvable now, do let me know. Many thanks.

This behavior is by design; we did not design it for this particular use case because we were not aware of it.
That said, we have seen many requests like yours recently and we’re thinking about a way to make your life easier. More information soon, through the mailing list.


@sjiveson - I basically have the same setup, with Rancher. I’ve achieved what I think amounts to an answer for point #2 on your list, though not yet #1. I’d be interested in your thoughts on https://gist.github.com/bradjones1/ed6d01e311e89b6383e4b1625e72c64a which relies on rapid recognition of DNS changes via some adjustments to rise/fall and other config options. I don’t think it’s perfect yet, but it’s close in my testing. I haven’t pushed it to production yet. It’s used in conjunction with a local, non-caching dnsmasq daemon.
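
Roughly speaking, the approach looks something like the snippet below. To be clear, the names, addresses and timer values here are illustrative placeholders rather than a copy of the gist; the idea is simply a local non-caching resolver plus short hold and check intervals, so that address changes and failed checks are acted on quickly.

resolvers localdns
  # local, non-caching dnsmasq instance (placeholder address)
  nameserver dnsmasq 127.0.0.1:53
  resolve_retries 3
  # keep "valid" short so IP changes are picked up quickly
  hold valid 1s

backend app
  balance roundrobin
  # short check intervals and low rise/fall so a replaced container is
  # marked up or down, and re-resolved, quickly
  default-server inter 2s fastinter 500ms rise 2 fall 2
  server app1 app.example.internal:80 check resolvers localdns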

Thanks Baptiste, much appreciated.

Hey @bradj - thank you. I’m not sure how your tweaks would help, to be frank, but clearly you think you’ve (almost) found the answer, and I’ve not considered timers in much detail thus far. Could you perhaps explain in more detail why you think your changes will help (and which of them)? My configuration isn’t too far off what you have in that gist. Cheers

@Baptiste How are we looking with this feature? We’ve got a Docker infrastructure where we want to use HAProxy and have containers dynamically scaled. I have the following config:

resolvers dns
  nameserver public-0  127.0.0.11:53
  hold valid 1ms

frontend http
  bind *:80
  default_backend site-backend

backend site-backend
  balance roundrobin
  server site api:80 resolvers dns check inter 1000

But the instances are not round-robined. Is there an issue I can track/help out with, or some sort of roadmap for getting this working?

Hi @herecydev,

What you want to achieve is under development in 1.8.
It’s a mix of different features, including improvements in the DNS resolver combined with the new server-template feature.
Stay tuned :slight_smile:

Baptiste

Hi @Baptiste, any news on this? Can we test it in 1.8-dev2?

Hi @z0mb1ek

Yes, please give 1.8-dev2 a try. The responses are now stored in a local cache, and each time a record is consumed it’s moved to the back of the list:
http://git.haproxy.org/?p=haproxy.git;a=commitdiff;h=8ea0bcc911809e77560bdd937c02a0b832526ef7

Note that it’s a bit limited in some use cases:

  • it does not look for records in the other IP family (under development)
  • when a new IP is added to the response, only 1 server will use it (we don’t change many servers at the same time unless there is a good reason to)

Baptiste

when a new IP is added to the response, only 1 server will use it (we don’t change many servers at the same time unless there is a good reason to)

What does that mean?

Let’s take a backend like the one below:

backend my_app
  server s1 myapp.domain.com:80 resolvers mydns
  server s2 myapp.domain.com:80 resolvers mydns
  server s3 myapp.domain.com:80 resolvers mydns
  server s4 myapp.domain.com:80 resolvers mydns
  server s5 myapp.domain.com:80 resolvers mydns
  server s6 myapp.domain.com:80 resolvers mydns

If myapp.domain.com returns 10.0.0.1 and 10.0.0.2, then 3 servers will be assigned to each IP.
Now, if you add one more record to myapp.domain.com (10.0.0.1, 10.0.0.2 and 10.0.0.3), then only 1 server will pick up this third IP address.

We can’t do better for now, but we’ll work on improving this situation.

If myapp.domain.com returns 10.0.0.1 and 10.0.0.2, then 3 servers will be assigned to each IP.

s1, s2, s3 for 10.0.0.1 and s4, s5, s6 for 10.0.0.2? And then, once the third record appears: s1, s2, s3 on 10.0.0.1; s4, s5 on 10.0.0.2; and s6 on 10.0.0.3?

So if I have two IPs for one DNS record, do I need two identical server records like this?

backend my_app
  server s1 myapp.domain.com:80 resolvers mydns
  server s2 myapp.domain.com:80 resolvers mydns

Or can I have just one record?

@Baptiste can you answer please?)

You need 2 lines. Actually, you need as many lines as the number of servers you expect to have.
Again, we’re working on improving this situation. Stay tuned.
(I mean that soon, the number of UP servers will match the number of records in the DNS response, without any duplication unless you allow it, for backward compatibility.)

Baptiste

@Baptiste big thx.

Can I use the dev version in production?

Hello list,

This is quite exciting news and I am trying to give it a shot with 1.8-dev2.

So far I have created a resolvers entry in my haproxy.cfg and some basic frontend/backend sections:

global
    log /dev/log daemon
    log /dev/log daemon notice
    maxconn 100

defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

listen stats
    bind *:1936
    stats enable
    stats hide-version
    stats refresh 5s
    stats show-node
    stats realm HAProxy\ Statistics
    stats uri /
    http-request set-log-level silent

resolvers dns-consul
  nameserver dns1 127.0.0.1:8600
  resolve_retries 3
  hold valid 100ms

frontend http
  bind *:80
  default_backend site-backend

backend site-backend
  balance roundrobin
  server site frontend-api-frontend.service.dc1.consul check resolvers dns-consul resolve-prefer ipv

I can see DNS request messages in Consul (my DNS/SRV source):

    2017/10/04 12:44:04 [DEBUG] dns: request for {frontend-api-frontend.service.dc1.consul. 28 1} (139.981µs) from client 127.0.0.1:63649 (udp)
    2017/10/04 12:44:04 [DEBUG] dns: request for {frontend-api-frontend.service.dc1.consul. 1 1} (107.348µs) from client 127.0.0.1:63649 (udp)
    2017/10/04 12:44:05 [DEBUG] dns: request for {frontend-api-frontend.service.dc1.consul. 28 1} (121.594µs) from client 127.0.0.1:63649 (udp)
    2017/10/04 12:44:05 [DEBUG] dns: request for {frontend-api-frontend.service.dc1.consul. 1 1} (57.816µs) from client 127.0.0.1:63649 (udp)
    2017/10/04 12:44:06 [DEBUG] dns: request for {frontend-api-frontend.service.dc1.consul. 28 1} (124.103µs) from client 127.0.0.1:63649 (udp)
    2017/10/04 12:44:06 [DEBUG] dns: request for {frontend-api-frontend.service.dc1.consul. 1 1} (78.518µs) from client 127.0.0.1:63649 (udp)
    2017/10/04 12:44:07 [DEBUG] dns: request for {frontend-api-frontend.service.dc1.consul. 28 1} (128.503µs) from client 127.0.0.1:63649 (udp)
    2017/10/04 12:44:07 [DEBUG] dns: request for {frontend-api-frontend.service.dc1.consul. 1 1} (109.668µs) from client 127.0.0.1:63649 (udp)
    2017/10/04 12:44:08 [DEBUG] dns: request for {frontend-api-frontend.service.dc1.consul. 28 1} (190.55µs) from client 127.0.0.1:63649 (udp)
    2017/10/04 12:44:08 [DEBUG] dns: request for {frontend-api-frontend.service.dc1.consul. 1 1} (113.934µs) from client 127.0.0.1:63649 (udp)
    2017/10/04 12:44:09 [DEBUG] dns: request for {frontend-api-frontend.service.dc1.consul. 28 1} (123.446µs) from client 127.0.0.1:63649 (udp)
    2017/10/04 12:44:09 [DEBUG] dns: request for {frontend-api-frontend.service.dc1.consul. 1 1} (91.523µs) from client 127.0.0.1:63649 (udp)
    2017/10/04 12:44:09 [DEBUG] http: Request GET /v1/agent/self (568.817µs) from=127.0.0.1:53393
    2017/10/04 12:44:10 [DEBUG] dns: request for {frontend-api-frontend.service.dc1.consul. 28 1} (127.082µs) from client 127.0.0.1:63649 (udp)
    2017/10/04 12:44:10 [DEBUG] dns: request for {frontend-api-frontend.service.dc1.consul. 1 1} (108.02µs) from client 127.0.0.1:63649 (udp)
    2017/10/04 12:44:11 [DEBUG] dns: request for {frontend-api-frontend.service.dc1.consul. 28 1} (133.022µs) from client 127.0.0.1:63649 (udp)
    2017/10/04 12:44:11 [DEBUG] dns: request for {frontend-api-frontend.service.dc1.consul. 1 1} (97.165µs) from client 127.0.0.1:63649 (udp)

I’ve also verified that I can see the list of servers in the DNS response.

dig @127.0.0.1 -p 8600 frontend-api-frontend.service.consul SRV

; <<>> DiG 9.11.2 <<>> @127.0.0.1 -p 8600 frontend-api-frontend.service.consul SRV
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 750
;; flags: qr aa rd; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 3
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;frontend-api-frontend.service.consul. IN SRV

;; ANSWER SECTION:
frontend-api-frontend.service.consul. 0	IN SRV	1 1 21587 7f000001.addr.dc1.consul.
frontend-api-frontend.service.consul. 0	IN SRV	1 1 22242 7f000001.addr.dc1.consul.

;; ADDITIONAL SECTION:
7f000001.addr.dc1.consul. 0	IN	A	127.0.0.1
7f000001.addr.dc1.consul. 0	IN	A	127.0.0.1

;; Query time: 0 msec
;; SERVER: 127.0.0.1#8600(127.0.0.1)
;; WHEN: Wed Oct 04 12:44:55 CEST 2017
;; MSG SIZE  rcvd: 155

But I cannot get the HAProxy backend to be dynamic. I’m not sure what I am doing wrong or how to debug HAProxy itself.

So far I’m starting it like this:

haproxy -f haproxy.cfg -d -V

But no error messages or hints have been printed to stdout.
Any help or hints would be appreciated.

Thanks in advance

Hi,

You must use the latest -dev from git (code has been committed after dev2 was released).
Second, your request is not sent as SRV since it does not follow the RFC: the FQDN must be of the form _<service>._<protocol>.<name>, as in the example below. Consul accepts the plain name alone (as Kubernetes does), but it shouldn’t.
Last, you should use the server-template directive to provision N servers with the same configuration.
It would be something like this:

server-template red 20 _http._tcp.red.default.svc.cluster.local:8080 inter 1s resolvers kube resolve-prefer ipv4 check
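
For the Consul setup posted above, it might translate into something like the sketch below. The service name, resolvers name, slot count and timers are assumptions carried over from the earlier config, not a verified setup; with SRV records the address and port of each slot come from the DNS response, so no port is written on the line itself.

backend site-backend
  balance roundrobin
  # provision up to 10 server slots, filled from the SRV records returned
  # for the RFC 2782-style name; slots without a matching record stay unused
  server-template api 10 _frontend-api-frontend._tcp.service.dc1.consul resolvers dns-consul resolve-prefer ipv4 check inter 1s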

Baptiste