HAProxy proxy protocol

Hi,

I have the following HAProxy (v2.0.14) setup -

Application A → HAProxy A → HAProxy B → Application B

Applications A & B are deployed on separate EC2 instances in AWS, with HAProxy A & B deployed as sidecar proxies for the two applications respectively. Application A is a Java Spring Boot application, and Application B is RabbitMQ v3.8.x.

The sidecar proxies provide mTLS between the two application endpoints over the network, with HAProxy B acting as the TLS termination endpoint.

Below are the HAProxy configurations deployed on the two application EC2 instances -

Application A

frontend rabbitmq_local_service
    mode tcp
    option tcplog
    bind localhost:9000
    default_backend rabbitmq_remote_service

backend rabbitmq_remote_service
    mode tcp
    option tcplog
    option tcp-check
    server-template SRV 10 send-proxy ssl crt /etc/haproxy/ssl/cert.pem ca-file /etc/haproxy/ssl/ca.pem verify required check resolvers aws fall 2 rise 2 inter 30000

Application B

frontend rabbitmq_ssl_exposed
    mode tcp
    option tcplog
    bind <host name>:9000 accept-proxy ssl crt /etc/haproxy/ssl/cert.pem ca-file /etc/haproxy/ssl/ca.pem verify required
    acl cert_from_trusted_client ssl_c_s_dn(CN) -m reg ^app1-.* ^app2-.*
    use_backend rabbitmq_local_service if cert_from_trusted_client
    default_backend rabbitmq_local_service

backend rabbitmq_local_service
    mode tcp
    option tcplog
    option tcpka
    server default localhost:5672

With the above setup, I was expecting the actual source client IP address (that of the EC2 instance hosting Application A) to be forwarded via a PROXY protocol header to HAProxy B as part of the AMQP connection initiated by Application A, and that the actual client IP would then be logged as part of the client connection information in the RabbitMQ log file. This is by virtue of the "send-proxy" and "accept-proxy" directives used on the client-side and server-side HAProxy instances respectively.
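For reference, what "send-proxy" emits is a single human-readable line (PROXY protocol v1) prepended to the TCP stream before any application bytes. A small illustrative sketch of that header's shape (the IP addresses below are made up, not taken from this setup):

```python
# Sketch of the PROXY protocol v1 header that "send-proxy" prepends to the
# TCP stream. Function names and addresses are illustrative only.

def build_proxy_v1(src_ip, dst_ip, src_port, dst_port):
    """Return the PROXY protocol v1 header line for a TCP-over-IPv4 connection."""
    return f"PROXY TCP4 {src_ip} {dst_ip} {src_port} {dst_port}\r\n"

def parse_proxy_v1(line):
    """Split a v1 header back into its fields; raise on malformed input."""
    parts = line.rstrip("\r\n").split(" ")
    if parts[0] != "PROXY" or len(parts) != 6:
        raise ValueError("not a PROXY protocol v1 header")
    _, proto, src_ip, dst_ip, src_port, dst_port = parts
    return proto, src_ip, dst_ip, int(src_port), int(dst_port)

header = build_proxy_v1("10.0.1.51", "10.0.2.14", 35410, 9000)
print(parse_proxy_v1(header))
# -> ('TCP4', '10.0.1.51', '10.0.2.14', 35410, 9000)
```

The receiving side (accept-proxy, or a proxy-protocol-aware backend) consumes this line and treats the addresses in it as the connection's source and destination.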

Although there are no errors reported in either HAProxy log or in the RabbitMQ log, the connection information logged by RabbitMQ still shows 127.0.0.1 as the client IP instead of the actual source client IP. A question here is: do I also need to enable the proxy protocol on the RabbitMQ broker? My current understanding is that this should not be required (I stand to be corrected).

I would appreciate it if members of this mailing list could review the above information and highlight any gaps that might be causing the unexpected output.

Thanks in anticipation.

Hi,

the connection information logged by RabbitMQ still shows 127.0.0.1 as the client IP instead of the actual source client IP. A question here is: do I also need to enable the proxy protocol on the RabbitMQ broker? My current understanding is that this should not be required (I stand to be corrected).

In my understanding, yes: you have to configure RabbitMQ to log the address you want, either the network source address or the proxy protocol source address (given that the setup is correct and the Application A IP address is correctly transported all along).

Link: the "Networking" guide in the RabbitMQ documentation
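For what it's worth, in the new-style rabbitmq.conf format (RabbitMQ 3.7+) that is a single flag; sketch only, please verify against the docs for your exact 3.8.x version:

```
# /etc/rabbitmq/rabbitmq.conf
# Make RabbitMQ expect and parse a PROXY protocol header on accepted
# connections, and report the client address carried in that header.
proxy_protocol = true
```

Note that once this is enabled, every connection to that listener must send the header, or RabbitMQ will reject it.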

Yes, @baptiste64 is right, you absolutely need to enable proxy protocol on the last leg.

Please refrain from double posting across multiple support channels.

@baptiste64, thanks for your feedback, and apologies for the double posting; I will refrain from it in the future.

Having enabled the proxy protocol on the RabbitMQ backend, client application A is no longer able to connect to RabbitMQ (application B), and I see the errors below in the RabbitMQ log -

2021-03-29 08:40:31.705 [error] <0.774.0> error when receiving proxy header: 'The PROXY protocol header signature was not recognized. (PP 2.1, PP 2.2)'
2021-03-29 08:40:31.714 [error] <0.777.0> error when receiving proxy header: 'The PROXY protocol header signature was not recognized. (PP 2.1, PP 2.2)'
2021-03-29 08:40:32.056 [error] <0.781.0> error when receiving proxy header: TCP socket was closed prematurely
2021-03-29 08:40:32.644 [error] <0.785.0> error when receiving proxy header: TCP socket was closed prematurely

Having run tcpdump against the HAProxy frontend port (9000) on application B, i.e. the RabbitMQ node, I see the following -

09:00:55.962827 IP "src IP".35410 > "dest IP".serverviewdbms: Flags [P.], seq 1:52, ack 1, win 211, options [nop,nop,TS val 2750902025 ecr 235343728], length 51
E..g..@.@.H.
.a3
.e..R#..z.D..2..
..o ..pPROXY TCP4 "src IP" "dest IP" 35410 9000

09:00:55.962846 IP "src IP".35410 > "dest IP".serverviewdbms: Flags [P.], seq 52:187, ack 1, win 211, options [nop,nop,TS val 2750902025 ecr 235343728], length 135
E...@.@.H]
.a3

09:01:25.971836 IP "src IP".35462 > "dest IP".serverviewdbms: Flags [P.], seq 1:52, ack 1, win 211, options [nop,nop,TS val 2750932034 ecr 235373737], length 51
E..gs.@.@..Z
.a3
.e..#.Oy..O..z..
..B..PROXY TCP4 "src IP" "dest IP" 35462 9000

09:01:25.971862 IP "src IP".35462 > "dest IP".serverviewdbms: Flags [P.], seq 52:219, ack 1, win 211, options [nop,nop,TS val 2750932034 ecr 235373737], length 167
E..s.@.@..
.a3

09:01:27.384586 IP "src IP".35468 > "dest IP".serverviewdbms: Flags [P.], seq 1:44, ack 1, win 211, options [nop,nop,TS val 2750933447 ecr 235375150], length 43
E..W9@.@..&
.a3
.e..#.
..Wo..
...PROXY TCP4 127.0.0.1 127.0.0.1 58592 9000

I stopped the HAProxy service on the RabbitMQ node (application B), ran the tcpdump command "tcpdump -n port 9000 -A | tee ascii-traffic-accept-proxy.log", and then started the HAProxy service again. This captured the AMQP connection request initiated by client application A, on the understanding that the proxy protocol header is added only to the initial connection request, since AMQP connections are long-lived.
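Since the v1 header is plain ASCII, candidate PROXY lines can be pulled out of such a capture file mechanically. A small sketch (the sample bytes below are made up, mimicking the look of "tcpdump -A" output; the helper name is illustrative):

```python
import re

# Match human-readable PROXY protocol v1 headers embedded in an ASCII
# packet capture: "PROXY TCP4|TCP6 <src> <dst> <sport> <dport>".
PROXY_RE = re.compile(r"PROXY TCP[46] \S+ \S+ \d+ \d+")

def find_proxy_headers(capture_text):
    """Return all PROXY v1 header strings found in a tcpdump -A text dump."""
    return PROXY_RE.findall(capture_text)

# Fabricated sample resembling tcpdump -A output with binary noise:
sample = """
E..g..@.@.H..a3
..o ..pPROXY TCP4 10.0.1.51 10.0.2.14 35410 9000
.e..#.
...PROXY TCP4 127.0.0.1 127.0.0.1 58592 9000
"""

for header in find_proxy_headers(sample):
    print(header)
```

Running this over the real ascii-traffic-accept-proxy.log would list every PROXY header seen on the wire, which makes it easier to spot the unexpected 127.0.0.1 entry among the expected ones.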

It looks like the proxy protocol header is being added to the connection stream with the correct source and destination IP addresses, but at the end of the captured packet rather than at the start. Will this cause issues and result in the behaviour mentioned above? Also, I see a proxy header with its source address set to 127.0.0.1; I am not sure why.

For simplicity, the above test was run after modifying the HAProxy backend configuration on client application A to communicate with a single RabbitMQ node (application B) instead of using the SRV record.

I am not sure whether I also need to add the send-proxy directive to the HAProxy backend server line on the RabbitMQ node (application B). Having tested with and without the accept-proxy directive specified on the HAProxy frontend, the results are as follows -

with accept-proxy
2021-03-29 09:04:34.414 [info] <0.766.0> Connection <0.766.0> (127.0.0.1:58736 → 127.0.0.1:5672) has a client-provided name: rabbitConnectionFactory#5b4da05e:7560
2021-03-29 09:04:34.416 [info] <0.766.0> connection <0.766.0> (127.0.0.1:58736 → 127.0.0.1:5672 - rabbitConnectionFactory#5b4da05e:7560): user 'user' authenticated and granted access to vhost '/'
2021-03-29 09:04:34.479 [error] <0.790.0> error when receiving proxy header: TCP socket was closed prematurely
2021-03-29 09:04:34.550 [error] <0.793.0> error when receiving proxy header: TCP socket was closed prematurely

Client application A was able to connect to RabbitMQ, but the source client IP address was still not relayed. There were also constant errors logged to the RabbitMQ log file, as seen in the extract above.

without accept-proxy
2021-03-29 09:06:54.025 [info] <0.2320.0> Connection <0.2320.0> ("src IP":35692 → 127.0.0.1:5672) has a client-provided name: rabbitConnectionFactory#5b4da05e:7561
2021-03-29 09:06:54.028 [info] <0.2320.0> connection <0.2320.0> ("src IP":35692 → 127.0.0.1:5672 - rabbitConnectionFactory#5b4da05e:7561): user 'user' authenticated and granted access to vhost '/'
2021-03-29 09:06:54.105 [error] <0.2347.0> error when receiving proxy header: TCP socket was closed prematurely
2021-03-29 09:06:54.245 [error] <0.2350.0> error when receiving proxy header: TCP socket was closed prematurely

Client application A was able to connect to RabbitMQ, and the source client IP address was relayed. But there were constant errors logged to the RabbitMQ log file, as seen in the extract above.

Below is the output of haproxy -vv -

HA-Proxy version 2.0.14 2020/04/02 - https://haproxy.org/
Build options :
TARGET = linux-glibc
CPU = generic
CC = gcc
CFLAGS = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv -Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-old-style-declaration -Wno-ignored-qualifiers -Wno-clobbered -Wno-missing-field-initializers -Wtype-limits
OPTIONS = USE_PCRE=1 USE_OPENSSL=1 USE_ZLIB=1 USE_SYSTEMD=1

Feature list : +EPOLL -KQUEUE -MY_EPOLL -MY_SPLICE +NETFILTER +PCRE -PCRE_JIT -PCRE2 -PCRE2_JIT +POLL -PRIVATE_CACHE +THREAD -PTHREAD_PSHARED -REGPARM -STATIC_PCRE -STATIC_PCRE2 +TPROXY +LINUX_TPROXY +LINUX_SPLICE +LIBCRYPT +CRYPT_H -VSYSCALL +GETADDRINFO +OPENSSL -LUA +FUTEX +ACCEPT4 -MY_ACCEPT4 +ZLIB -SLZ +CPU_AFFINITY +TFO +NS +DL +RT -DEVICEATLAS -51DEGREES -WURFL +SYSTEMD -OBSOLETE_LINKER +PRCTL +THREAD_DUMP -EVPORTS

Default settings :
bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_THREADS=64, default=2).
Built with OpenSSL version : OpenSSL 1.0.2k-fips 26 Jan 2017
Running on OpenSSL version : OpenSSL 1.0.2k-fips 26 Jan 2017
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : SSLv3 TLSv1.0 TLSv1.1 TLSv1.2
Built with network namespace support.
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Built with zlib version : 1.2.7
Running on zlib version : 1.2.7
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with PCRE version : 8.32 2012-11-30
Running on PCRE version : 8.32 2012-11-30
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Encrypted password support via crypt(3): yes

Available polling systems :
epoll : pref=300, test result OK
poll : pref=200, test result OK
select : pref=150, test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
h2 : mode=HTX side=FE|BE mux=H2
h2 : mode=HTTP side=FE mux=H2
<default> : mode=HTX side=FE|BE mux=H1
<default> : mode=TCP|HTTP side=FE|BE mux=PASS

Available services : none

Available filters :
[SPOE] spoe
[COMP] compression
[CACHE] cache
[TRACE] trace

I'm not sure you actually "simplified" the configuration by removing haproxy on application B. That's a major change to your configuration, which doesn't make a lot of sense if you want to encrypt the traffic between the two, and it will certainly require that you completely remove SSL encryption from the backend section of the haproxy running on the application A side.

Do you want two haproxy instances passing traffic through an encrypted SSL tunnel?

If yes, restore the original configuration, and then put "send-proxy" on all backend servers on both the A and B sides, and accept-proxy on all frontend bind statements on both the A and B sides. This will make it so that the proxy header is sent to RabbitMQ.
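Putting those pieces together, the end-to-end shape would be something along these lines. This is a sketch only: the SRV record name, server names and certificate paths are placeholders, not taken from the thread, and it assumes the proxy protocol is also enabled on the RabbitMQ listener.

```
# --- Application A side ---
frontend rabbitmq_local_service
    mode tcp
    bind localhost:9000                  # plain TCP from the local Java app
    default_backend rabbitmq_remote_service

backend rabbitmq_remote_service
    mode tcp
    option tcp-check
    # send-proxy prepends the PROXY header before the TLS-wrapped stream;
    # check-send-proxy makes the health checks send it too.
    server-template SRV 10 _amqps._tcp.example.internal resolvers aws send-proxy ssl crt /etc/haproxy/ssl/cert.pem ca-file /etc/haproxy/ssl/ca.pem verify required check check-send-proxy

# --- Application B (RabbitMQ) side ---
frontend rabbitmq_ssl_exposed
    mode tcp
    # accept-proxy parses the header sent by HAProxy A; TLS terminates here.
    bind :9000 accept-proxy ssl crt /etc/haproxy/ssl/cert.pem ca-file /etc/haproxy/ssl/ca.pem verify required
    default_backend rabbitmq_local_service

backend rabbitmq_local_service
    mode tcp
    # send-proxy again on the last leg, so RabbitMQ (with proxy_protocol
    # enabled) receives the original client address.
    server rmq localhost:5672 send-proxy
```

The key point is that the header has to be regenerated on every hop that should preserve the original client address: accepted on each frontend, re-sent on each backend.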

If you don't want to encrypt the traffic via an SSL tunnel, then drop haproxy on the B side, but also remove the SSL encryption, because there is no longer a haproxy layer decrypting the traffic on the B side. In this case too, use send-proxy on the backend server and accept-proxy on the frontend bind statement.

The PROXY header belongs at the beginning of the TCP payload; that doesn't mean it's the first thing in the tcpdump capture, since both the Ethernet and IP headers come first.

@lukastribus, thanks for your reply.

Ah… I see where the confusion is; apologies for that. It is due to my earlier comments -

The above test was run by modifying the HAProxy backend configuration on client application A to communicate with a single RabbitMQ node (application B) instead of using the SRV record, for simplicity of running the tests.

What I meant by the above was that I replaced the SRV record in the HAProxy backend on application A with a single RabbitMQ node endpoint, in the form of the HAProxy frontend (TLS endpoint) configured on that RabbitMQ node.

So the configuration that the above results are based on is -

  1. Application A → HAProxy A → HAProxy B → Application B
  2. encrypted traffic flowing through HAProxy A & HAProxy B
  3. accept-proxy specified on the HAProxy B frontend bind line and send-proxy specified on both HAProxy backends (although one test was carried out with accept-proxy removed from the HAProxy B frontend bind line, which seemed to relay the client/source IP to the RabbitMQ backend log).

@lukastribus, I just realised a discrepancy in the above comment.

I have actually not specified accept-proxy on the frontend bind line for the HAProxy running on the application A side. Why would this be required?

HAProxy A's frontend accepts requests on localhost:9000 from application A. What would adding the accept-proxy directive to that frontend achieve? Is application A expected to send a proxy header as part of the request?

My understanding of adding the send-proxy directive to the HAProxy backend on application A is that HAProxy will add a proxy protocol header (containing client details such as the source IP) to the request initiated by application A before routing it to HAProxy B. Is this understanding not correct?

I agree that accept-proxy will still be required on the HAProxy B frontend bind line.

You're right about that: accept-proxy does NOT belong on the haproxy running on application A, I got confused there.

But I'm not sure what you expect to see other than 127.0.0.1. HAProxy on application A listens on localhost, so what foreign IP address other than 127.0.0.1 do you expect?

If the Java application connects to haproxy at 127.0.0.1 from 127.0.0.1, then 127.0.0.1 is what you will see in the proxy protocol headers.
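This is easy to confirm outside of haproxy: a TCP connection made to 127.0.0.1 originates from 127.0.0.1, whatever other interfaces the host has, and that source address is what send-proxy would encode. A quick illustrative check:

```python
import socket
import threading

# A connection made *to* the loopback address also originates *from* the
# loopback address; this is the peer address haproxy A's frontend sees.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))            # let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

seen = {}

def accept_once():
    conn, peer = server.accept()
    seen["peer_ip"] = peer[0]            # source IP as the server sees it
    conn.close()

t = threading.Thread(target=accept_once)
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
print("client source address:", client.getsockname()[0])  # -> 127.0.0.1
t.join()
client.close()
server.close()
print("server saw peer address:", seen["peer_ip"])        # -> 127.0.0.1
```

So for the original private IP to appear in the header, the connection into HAProxy A would have to arrive on the instance's private address, not on loopback.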

Whatever the case may be, the proxy protocol needs to be enabled between haproxy A → haproxy B → RabbitMQ.

Are you saying the application actually works?

Perhaps you have other traffic hitting this RabbitMQ port that cannot send the proxy protocol? Health checks, perhaps, or other local RabbitMQ clients? Those will all break when you enable the proxy protocol on the existing port.
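The recurring "TCP socket was closed prematurely" errors would fit that pattern: haproxy's own tcp-check health checks open the port without sending a PROXY header unless told to. If the proxy protocol is enabled towards RabbitMQ, haproxy supports making the checks send the header too, via check-send-proxy on the server line. A sketch (server name is a placeholder):

```
backend rabbitmq_local_service
    mode tcp
    option tcp-check
    # send-proxy covers regular traffic; check-send-proxy makes the
    # health-check connections send the PROXY header as well, so a
    # proxy_protocol-enabled RabbitMQ listener accepts them.
    server rmq localhost:5672 send-proxy check check-send-proxy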

Thanks for your reply. As seen in the output of tcpdump run against the HAProxy B frontend port, I do see a proxy protocol header with the correct source client IP address, i.e. the private IP address assigned to the EC2 instance hosting application A (the relevant tcpdump extract is in my earlier reply above).

"src IP" and "dest IP" in the above tcpdump are the actual private IP addresses associated with the EC2 instances hosting applications A & B respectively.

This answers my query in relation to the location/position of proxy header in the tcpdump. Thanks.

What is not clear, though, is that I also see a proxy header (the last entry in the tcpdump output above) with the source IP address set to 127.0.0.1; I'm not quite sure why.
Is this down to HAProxy A being used for direct client access (via 127.0.0.1) as well as a reverse proxy, as mentioned in the "option forwardfor" section of the HAProxy configuration manual? In which case, would it make sense to use an equivalent of the "except" keyword for TCP connections, if one is available, to disable addition of the header for a known source address such as 127.0.0.1?

Yes, application A is able to connect to the RabbitMQ service via the HAProxy running on that RabbitMQ node. I can see a successful connection in the RabbitMQ application log (extract quoted in my earlier reply above).

But what I'm trying to understand is how the actual client IP address (private IP address) is getting relayed to the RabbitMQ backend service with no accept-proxy directive added to the HAProxy B frontend bind line on the RabbitMQ node, whereas 127.0.0.1 is relayed with accept-proxy added. Or is that expected behaviour?

I don’t think that is the case, but will check the test setup again and confirm. Thanks for the pointers.

No, all of those requests should have 127.0.0.1, as opposed to none of them. You are connecting from your Java application to 127.0.0.1, so the source IP should also be 127.0.0.1, unless the Java application somehow binds to the private IP address.

I asked about this, but you are arguing that that is what you actually want. The problem is that if you don't know why it partially works, you can't know why it partially fails, whatever your definition of failure or success may be.

I don’t understand what you are trying to say.

None of this has anything to do with option forwardfor. Haproxy listens on a TCP socket and sends the TCP payload to a backend server, encrypting it with SSL and putting the proxy header on it. On the frontend, a haproxy instance may accept the proxy header or not. That's it; there is no rocket science behind the curtain.

You need to stop changing knobs and instead break the puzzle down: capture traffic for the same connection at all 4 points, capture netstat output for those same connections everywhere, log all the connections on both haproxy instances with option tcplog (and ideally on rabbitmq and the java client too), and then carefully analyze them.

Only when you compare and analyze all of this data can you see what happens where.

@lukastribus, thanks for your reply.

I am not stating what I want or expect; I'm just trying to share the facts as I see them in the tcpdump output, and I acknowledge there may be gaps in my understanding which I'm trying to plug.

I do see the private source and destination IP addresses in the proxy header, along with, as I mentioned earlier, a proxy header entry containing 127.0.0.1 as well. I'm trying to understand the reason behind multiple proxy headers with different source/destination IP addresses.

The Java application itself listens on 0.0.0.0 on a specific port. So although the Java application communicates with HAProxy A on localhost:, is it not correct to expect that the actual source client IP address, rather than 127.0.0.1, should be added to the proxy protocol header by HAProxy A? It seems to be doing so, but it also includes 127.0.0.1.

One thing I failed to share earlier is that HAProxy A also listens on the private IP address associated with the EC2 instance hosting the Java application, but that is in the context of an SSL-exposed frontend on a different port, which accepts HTTP requests from a partner client application (via its own HAProxy) and sends them to the Java application backend.
So HAProxy A has two frontends: one SSL-exposed HTTP frontend that accepts and terminates incoming SSL requests and forwards them to the Java application backend, and a second (non-SSL) TCP frontend bound on localhost that routes requests, via the separate SSL-enabled backend listed in my original post, to the RabbitMQ service through HAProxy B. In this thread we are discussing the latter. I know I should have shared this information earlier, but I was so bogged down in the communication details between the Java application and RabbitMQ that I forgot to mention it.

What I was trying to say was -

With HTTP requests one can use "option forwardfor" to instruct HAProxy to add an "X-Forwarded-For" HTTP header containing the actual source client IP address to the request. And there is an additional "except" keyword available to disable addition of the header for a known source client IP address such as 127.0.0.1.

Obviously the above options cannot be used with TCP requests. Instead, "send-proxy" is the alternative available to add a proxy header for TCP requests, but I am not aware of an equivalent of the "except" keyword to disable adding the proxy header for a known source IP address such as 127.0.0.1.
And if a similar option were available for TCP requests, I was asking whether it could be used to exclude the proxy header entry with 127.0.0.1 added by HAProxy A, so that only the proxy header entry with the actual source client IP address (which I see in the tcpdump output) would be relayed to HAProxy B. Happy to be corrected if this is not how it is supposed to work from the HAProxy standpoint.

Correct. This, as per my understanding, will depend on the use of accept-proxy on the HAProxy B frontend. What I'm trying to say is that with accept-proxy added I still see 127.0.0.1 as the client IP address relayed to the RabbitMQ backend, which would be due to it being included in the proxy header. But without accept-proxy added, I see the actual source client IP address relayed to the RabbitMQ backend, and this is what I'm trying to understand: how does that work from the HAProxy B standpoint?

I agree, and I am already doing what you suggest. I have already captured tcpdump on both the HAProxy A backend port and the HAProxy B frontend port, and I see the proxy header info that I have already shared.

But as per my (admittedly limited) understanding, based on the factual data I have shared so far, there still seem to be some discrepancies that I'm trying to get my head around; hence I am running a few tests to share as much data as possible to aid in analysing this case.