Do-resolve not working

Hello everyone,

I have an issue where I configured resolvers and have this line in my configuration:

tcp-request content do-resolve(sess.dstIP,mydns,ipv4) ssl_fc_sni

and it doesn’t work: in the logs the variable sess.dstIP shows as -.
I also added ssl_fc_sni to the log format to check its value, and it is the expected domain.


resolvers mydns
    accepted_payload_size 8192
    nameserver dns1
    resolve_retries      3
    timeout resolve      10s
    timeout retry        10s
    hold other           30s
    hold refused         30s
    hold nx              30s
    hold timeout         30s
    hold valid           10s
    hold obsolete        30s

frontend ssh_frontend
   bind *:2222 ssl crt /etc/haproxy/certs/ssl.pem
   mode tcp
   log-format "%ci:%cp [%t] SNI:%[ssl_fc_sni] dstName:%[var(sess.dstName)] dstIP:%[var(sess.dstIP)] "
   tcp-request content do-resolve(sess.dstIP,mydns,ipv4) ssl_fc_sni
   tcp-request content set-var(sess.dstName) ssl_fc_sni
   default_backend ssh-all

backend ssh-all
   mode tcp
   tcp-request content set-dst var(sess.dstIP)
   server ssh

The log outputs lines like:

SNI:master-1 dstName:master-1 dstIP:-

I built this setup by following the official documentation on how to set up SSH proxying using DNS:

Using dig on the HAProxy machine returns the expected A record:

dig A @ -p 53 master-1 +short

tcpdump doesn’t show any packets going to or from the host, so no DNS query attempt was even made.

haproxy -vv

# haproxy -vv
HA-Proxy version 2.2.13-5f3eb59 2021/04/02 -
Status: long-term supported branch - will stop receiving fixes around Q2 2025.
Known bugs:
Running on: Linux 3.10.0-1160.24.1.el7.x86_64 #1 SMP Thu Apr 8 19:51:47 UTC 2021 x86_64
Build options :
  TARGET  = linux-glibc
  CPU     = generic
  CC      = gcc
  CFLAGS  = -O2 -g -Wall -Wextra -Wdeclaration-after-statement -fwrapv -Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-clobbered -Wno-missing-field-initializers -Wtype-limits
  DEBUG   = 


Default settings :
  bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_THREADS=64, default=1).
Built with OpenSSL version : OpenSSL 1.0.2k-fips  26 Jan 2017
Running on OpenSSL version : OpenSSL 1.0.2k-fips  26 Jan 2017
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : SSLv3 TLSv1.0 TLSv1.1 TLSv1.2
Built with network namespace support.
Built with zlib version : 1.2.7
Running on zlib version : 1.2.7
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Built without PCRE or PCRE2 support (using libc's regex instead)
Encrypted password support via crypt(3): yes
Built with gcc compiler version 4.8.5 20150623 (Red Hat 4.8.5-44)

Available polling systems :
      epoll : pref=300,  test result OK
       poll : pref=200,  test result OK
     select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
            fcgi : mode=HTTP       side=BE        mux=FCGI
       <default> : mode=HTTP       side=FE|BE     mux=H1
              h2 : mode=HTTP       side=FE|BE     mux=H2
       <default> : mode=TCP        side=FE|BE     mux=PASS

Available services : none

Available filters :
	[SPOE] spoe
	[COMP] compression
	[TRACE] trace
	[CACHE] cache
	[FCGI] fcgi-app

uname -a

# uname -a
Linux linux 3.10.0-1160.24.1.el7.x86_64 #1 SMP Thu Apr 8 19:51:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

I ran into something kind of similar. On examining some tcpdumps, I discovered that HAProxy was sending malformed DNS queries. It looked like a possible buffer overflow, because there was additional data after the domain (in my case, part of another HTTP header). I was able to work around it by storing the domain I wanted in a temporary variable and then using that variable instead of the fetch directly.
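Roughly sketched, the workaround was to reverse the order of your two rules, so do-resolve reads from a variable rather than from the fetch (adapted to your variable names, so treat this as illustrative, not my exact config):

    tcp-request content set-var(sess.dstName) ssl_fc_sni
    tcp-request content do-resolve(sess.dstIP,mydns,ipv4) var(sess.dstName)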

However, in my case I was using a converter on the fetch, and I’m not sure if that made a difference. It was also happening on Ubuntu 16.04, but I wasn’t able to reproduce it on 20.04; I’m not sure why.

I’m also not sure if that is the same issue that you are seeing.

Try adding a tcp-request inspect-delay before the do-resolve.

I think you will need tcp-request inspect-delay 5s in the frontend config, and you should also define some ACLs in the backend.
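For illustration, the frontend part might look like this (the 5s value and the ssl_fc_sni guard are assumptions on my part, not taken from your setup):

    frontend ssh_frontend
       bind *:2222 ssl crt /etc/haproxy/certs/ssl.pem
       mode tcp
       tcp-request inspect-delay 5s
       tcp-request content do-resolve(sess.dstIP,mydns,ipv4) ssl_fc_sni if { ssl_fc_sni -m found }
       tcp-request content set-var(sess.dstName) ssl_fc_sni
       default_backend ssh-all

The idea is that content rules run as soon as data arrives unless the inspect-delay gives HAProxy time to buffer enough of the connection, and the guard makes the rule wait until the SNI is actually available.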

But to be honest, I would skip the DNS approach entirely and define the servers statically in the backend config, for security reasons.