benobi
September 4, 2022, 9:19pm
1
I’ve read a lot of posts and docs about this… I’m still unable to get the CF-Connecting-IP in my haproxy access logs.
```
# Cloudflare: restore the real client IP
acl from_cf src -f /etc/haproxy/cf-ips/CF_ips.lst
acl cf_ip_hdr req.hdr(CF-Connecting-IP) -m found
http-request set-header X-Forwarded-For %[req.hdr(CF-Connecting-IP)] if from_cf cf_ip_hdr
```
I am using the above on the frontend to get the CF-Connecting-IP, but my haproxy access logs still show Cloudflare edge IPs instead of the real client IPs. Is that expected?
Thanks in advance!
Setting the IP address in the X-Forwarded-For header does just that: it only rewrites the header. The address in the access log comes from the connection's source IP, so use http-request set-src to set the source address at that lower level.
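On 2.5 or newer, something like this should be enough on the frontend (a sketch reusing your ACL name and list path; it only trusts the header when the connection actually comes from a Cloudflare address):
```
acl from_cf src -f /etc/haproxy/cf-ips/CF_ips.lst
http-request set-src req.hdr(CF-Connecting-IP) if from_cf { req.hdr(CF-Connecting-IP) -m found }
```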
In versions older than 2.5, workarounds are required:
GitHub issue: opened 05:12PM, 02 May 19 UTC; closed 04:00PM, 05 Nov 21 UTC
Labels: type: bug, severity: medium, subsystem: http, status: fixed. Affected versions: 1.6, 1.7, 1.8, 2.0, 2.2, 2.3, 2.4
Michael Brown (@supermathie) reported this, further researched it and provided a… workaround on discourse:
https://discourse.haproxy.org/t/cloudflare-haproxy-is-using-the-wrong-ip-for-http-requests-after-the-first/3769
## Output of `haproxy -vv` and `uname -a`
```
lukas@dev:~/haproxy$ ./haproxy -vv
HA-Proxy version 2.0-dev2-a48237-261 2019/05/02 - https://haproxy.org/
Build options :
TARGET = linux2628
CPU = generic
CC = gcc
CFLAGS = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv -Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-old-style-declaration -Wno-ignored-qualifiers -Wno-clobbered -Wno-missing-field-initializers -Wtype-limits
OPTIONS = USE_GETADDRINFO=1
Feature list : +EPOLL -KQUEUE -MY_EPOLL -MY_SPLICE +NETFILTER -PCRE -PCRE_JIT -PCRE2 -PCRE2_JIT +POLL -PRIVATE_CACHE +THREAD -PTHREAD_PSHARED -REGPARM -STATIC_PCRE -STATIC_PCRE2 +TPROXY +LINUX_TPROXY +LINUX_SPLICE +LIBCRYPT +CRYPT_H -VSYSCALL +GETADDRINFO -OPENSSL -LUA +FUTEX +ACCEPT4 -MY_ACCEPT4 -ZLIB -SLZ +CPU_AFFINITY -TFO -NS +DL +RT -DEVICEATLAS -51DEGREES -WURFL -SYSTEMD -OBSOLETE_LINKER +PRCTL
Default settings :
bufsize = 16384, maxrewrite = 1024, maxpollevents = 200
Built with multi-threading support (MAX_THREADS=64, default=2).
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Built without compression support (neither USE_ZLIB nor USE_SLZ are set).
Compression algorithms supported : identity("identity")
Built without PCRE or PCRE2 support (using libc's regex instead)
Encrypted password support via crypt(3): yes
Available polling systems :
epoll : pref=300, test result OK
poll : pref=200, test result OK
select : pref=150, test result OK
Total: 3 (3 usable), will use epoll.
Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
h2 : mode=HTX side=FE|BE
h2 : mode=HTTP side=FE
<default> : mode=HTX side=FE|BE
<default> : mode=TCP|HTTP side=FE|BE
Available services : none
Available filters :
[SPOE] spoe
[COMP] compression
[CACHE] cache
[TRACE] trace
lukas@dev:~/haproxy$ uname -a
Linux dev 4.4.0-109-generic #132-Ubuntu SMP Tue Jan 9 19:52:39 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
lukas@dev:~/haproxy$
```
Not a regression though: from the initial commit 2fbcafc9ce 4 years ago until the latest HEAD, the behavior is the same.
## What's the configuration?
```
global
    maxconn 100
    log 10.0.0.4 syslog debug

defaults
    mode http
    option httplog
    log global
    timeout connect 10s
    timeout client 120s
    timeout server 120s
    option http-keep-alive

frontend myfrontend
    bind :80
    http-request set-src hdr(x-forwarded-for) #if { src 10.0.0.4 }
    default_backend be_ip

backend be_ip
    server www 10.0.0.33:80
```
## Steps to reproduce the behavior
In a keep-alive client connection, issue multiple subsequent requests with different X-Forwarded-For values (as certain CDNs do, or when another HAProxy frontend layer has http-reuse enabled).
1. Open a telnet connection to haproxy and paste something like this:
```
GET / HTTP/1.1
Host: localhost
X-Forwarded-For: 192.168.0.1

GET / HTTP/1.1
Host: localhost
X-Forwarded-For: 192.168.0.2

GET / HTTP/1.1
Host: localhost
X-Forwarded-For: 192.168.0.3

GET / HTTP/1.1
Host: localhost
```
## Actual behavior 1
When set-src is called unconditionally (no src-based ACL), the first three requests show up fine in the haproxy logs. The fourth request keeps the src from the last set-src call (`192.168.0.3`).
## Actual behavior 2
When set-src is conditioned by an ACL (only trust X-Forwarded-For from known trusted proxies or CDNs), the behavior is worse: after the first set-src call (`192.168.0.1`), the ACL stops matching, because the actual low-level connection IP has been replaced.
## Expected behavior
Expected behavior would be to set the source IP only for this specific request/transaction (`http-request` implies exactly that), without impacting subsequent transactions on the same connection.
## Do you have any idea what may have caused this?
set-src works on the connection layer.
## Do you have an idea how to solve the issue?
Either we are able to fix the underlying technical issue (I assume this is complex), or we document this behavior clearly along with some workarounds (already provided on discourse by Michael).
benobi
September 9, 2022, 11:10pm
4
Thank you for the pointer. I have added this to get Cloudflare IPs working correctly in haproxy 2.4 on Ubuntu 22:
```
# Cloudflare: restore the real client IP
acl from_cf src -f /etc/haproxy/cf-ips/CF_ips.lst
acl cf_ip_hdr req.hdr(CF-Connecting-IP) -m found
http-request set-header X-Forwarded-For %[req.hdr(CF-Connecting-IP)] if from_cf cf_ip_hdr
http-request set-src hdr(x-forwarded-for) if from_cf
http-request set-var(sess.cloudflare) always_true if { http_first_req } from_cf
```
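Optionally, the header can also be captured into the access log to verify the rewrite (this is just an extra for debugging, not part of the workaround; with the default httplog format the captured value shows up between braces in the log line):
```
http-request capture req.hdr(CF-Connecting-IP) len 40
```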
Looks like haproxy 2.5 isn’t available as LTS until Q1 2023.
No, 2.5 is EOL in Q1 2023; it will never be an LTS.
Just use 2.6, which is LTS and supported until Q2 2027.
This is an incomplete configuration and will trigger the bug I linked above.
Use the complete workaround so that you don’t hit any bugs:
(same GitHub issue as quoted above)
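Roughly, the complete workaround decides whether to trust the peer only on the first request of the connection, while src is still the Cloudflare edge IP, and remembers that decision in a session variable, so later requests on the same keep-alive connection keep being rewritten. A sketch of that idea, reusing the ACL names, variable and list path from the config above (the cf_trusted ACL name is an invention for this sketch; the authoritative lines are in the discourse post referenced by the issue):
```
acl from_cf    src -f /etc/haproxy/cf-ips/CF_ips.lst
acl cf_trusted var(sess.cloudflare) -m found
# Decide once, on the first request, while src is still the Cloudflare edge IP.
http-request set-var(sess.cloudflare) always_true if { http_first_req } from_cf
# Rewrite header and source address on every request of a trusted session.
http-request set-header X-Forwarded-For %[req.hdr(CF-Connecting-IP)] if cf_trusted { req.hdr(CF-Connecting-IP) -m found }
http-request set-src req.hdr(CF-Connecting-IP) if cf_trusted { req.hdr(CF-Connecting-IP) -m found }
```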