Reverse proxy: Very slow page load

Hello folks,

Overview:
I managed to successfully set up an HAproxy installation for use as a reverse proxy and, later, as a load balancer. Technically everything is working, but pages loaded through the proxy are extremely slow (multiple minutes for a simple Wordpress site).

My setup:

  • HAproxy on FreeBSD 11 64-bit. It’s a root server with a 4-core Xeon 3.3 GHz, 32 GB memory and 1G/1G internet connection
  • Different webservers running FreeBSD 11 64-bit. Those are usually machines with two to four cores and 8 to 16 GB of memory and 1G/1G internet connection.
  • The servers are not physically at the same location. I use OpenVPN to tie them into a private network. The ping between the HAproxy and the web servers is a stable ~20 ms.
  • OpenVPN runs in UDP mode. Everything is pretty much default config.
  • All involved servers have tons of free resources left and are not busy at all. The HAproxy server isn’t doing anything other than running HAproxy and acting as the OpenVPN server.

My problem:
I tried to reverse-proxy three different existing websites through the new HAproxy machine. When I access a website through the web server’s public IP it loads in less than a second. When I load it through the HAproxy machine it takes up to 11 minutes to finish loading.
Here’s an example of a Wordpress site being loaded through HAproxy:

  • Chrome console screenshot: paste
  • HAproxy log: paste

I have the same problem with other Wordpress installations, with the Jenkins dashboard and other existing websites.

My config:
Here’s my HAproxy config:

global
        log /var/run/log local0 info
        log /var/run/log local0 notice
        daemon
        maxconn 8000
        tune.ssl.default-dh-param 2048
        user nobody
        group nobody

defaults
        log global
        option httplog
        option dontlognull
        mode http
        timeout connect 5s
        timeout client 1min
        timeout server 1min
        option forwardfor
        errorfile 400 /usr/local/etc/haproxy/errorfiles/400.http
        errorfile 403 /usr/local/etc/haproxy/errorfiles/403.http
        errorfile 408 /usr/local/etc/haproxy/errorfiles/408.http
        errorfile 500 /usr/local/etc/haproxy/errorfiles/500.http
        errorfile 502 /usr/local/etc/haproxy/errorfiles/502.http
        errorfile 503 /usr/local/etc/haproxy/errorfiles/503.http
        errorfile 504 /usr/local/etc/haproxy/errorfiles/504.http

frontend http-in
        bind *:80
        bind *:443 ssl crt /usr/local/etc/haproxy/certs/stuff.pem
        mode http
        use_backend jenkins if { hdr(host) -i jenkins.my.org }
        use_backend blog if { hdr(host) -i blog.my.org }
        default_backend test

backend blog
        mode http
        server blog01 10.8.0.18:80 check
        rspadd Content-Security-Policy:\ upgrade-insecure-requests

backend jenkins
        server jenkins1 10.8.0.14:8180
        mode http
        http-request set-header X-Forwarded-Port %[dst_port]
        http-request add-header X-Forwarded-Proto https if { ssl_fc }
        reqrep ^([^\ :]*)\ /(.*)     \1\ /\2
        acl response-is-redirect res.hdr(Location) -m found
        rspirep ^Location:\ (http)://10.8.0.14:8180/(.*)   Location:\ https://jenkins.my.org:443/\2  if response-is-redirect

The Jenkins backend config was taken from the HAproxy example in the official Jenkins documentation.

I’d appreciate any kind of help on this!

Try requesting one of those objects (like style.css?ver=4.2) from the haproxy server via wget and provide that output, please.

I think you may be hitting an MTU issue with your OpenVPN tunnel.

As per the OpenVPN man page, try setting the following options:
--tun-mtu 1500 --fragment 1300 --mssfix
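
If it helps, the same thing in config-file form would look roughly like this (a sketch; the file path is just an assumption, and since fragment only works over UDP the server and all clients need matching values):

# /usr/local/etc/openvpn/openvpn.conf (server; mirror these in the client configs)
# keep the tun device at the standard MTU, but fragment large encrypted
# packets and clamp the TCP MSS so they fit the path MTU
tun-mtu 1500
fragment 1300
mssfix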

Alternatively you may try to set the MSS in haproxy (this may be unsupported on FreeBSD though):
server blog01 10.8.0.18:80 mss 1200 check

I tried to wget the style.css?ver=4.2 as per your suggestion. It starts off at 20 kB/s and then drops down to 100 B/s within three seconds. The ETA increases to almost 60 minutes.

I couldn’t start HAproxy with the mss 1200 setting in the backend. Apparently it’s an unknown keyword. I assume that’s the “not supported by FreeBSD” thing you were talking about.
After that I used the --tun-mtu 1500 --fragment 1300 --mssfix settings for OpenVPN as you suggested but nothing changed. (I did ensure that the settings were applied and restarted all OpenVPN instances).

One more thing I tried was taking OpenVPN out of the picture: I used the web server’s public IP address in the HAproxy backend config instead of the VPN IP. The result is exactly the same -> it takes forever to load.
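
For clarity, that test meant pointing the blog backend straight at the public address, roughly like this (the address is a placeholder):

backend blog
        mode http
        # VPN address 10.8.0.18 replaced by the web server's public IP for this test
        server blog01 <public ip>:80 check
        rspadd Content-Security-Policy:\ upgrade-insecure-requests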

Ok, it’s pretty clear then that there is some severe network issue between this box and the origin server, unrelated to haproxy or openvpn.

Try using ping with various packet sizes and the don’t fragment bit set (FreeBSD):
ping -D -s 1472 <destination>
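
If a single size doesn’t show anything, a small sweep can narrow down the largest payload that still makes it through unfragmented. A minimal sketch in plain sh (the size list is arbitrary):

#!/bin/sh
# Sweep ICMP payload sizes with the don't-fragment bit set (FreeBSD ping).
# The first size that stops getting replies hints at the path MTU
# (payload + 28 bytes of IP/ICMP headers).
DEST="${1:?usage: $0 destination}"
for SIZE in 1200 1300 1400 1450 1472 1500; do
        if ping -D -c 3 -t 5 -s "$SIZE" "$DEST" > /dev/null 2>&1; then
                echo "$SIZE bytes: ok"
        else
                echo "$SIZE bytes: blocked or lost"
        fi
done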

I already did that before I posted here. I couldn’t find anything particularly wrong:

root@hostname:~ # ping -D -s 1472 <public ip>
PING <public ip> (<public ip>): 1472 data bytes
1480 bytes from <public ip>: icmp_seq=0 ttl=57 time=31.586 ms
1480 bytes from <public ip>: icmp_seq=1 ttl=57 time=22.343 ms
1480 bytes from <public ip>: icmp_seq=2 ttl=57 time=26.206 ms
1480 bytes from <public ip>: icmp_seq=3 ttl=57 time=22.407 ms
1480 bytes from <public ip>: icmp_seq=4 ttl=57 time=22.351 ms
1480 bytes from <public ip>: icmp_seq=5 ttl=57 time=22.321 ms
1480 bytes from <public ip>: icmp_seq=6 ttl=57 time=22.736 ms
1480 bytes from <public ip>: icmp_seq=7 ttl=57 time=22.334 ms
1480 bytes from <public ip>: icmp_seq=8 ttl=57 time=22.727 ms
1480 bytes from <public ip>: icmp_seq=9 ttl=57 time=28.737 ms
1480 bytes from <public ip>: icmp_seq=10 ttl=57 time=22.407 ms
1480 bytes from <public ip>: icmp_seq=11 ttl=57 time=22.513 ms
1480 bytes from <public ip>: icmp_seq=12 ttl=57 time=22.311 ms
1480 bytes from <public ip>: icmp_seq=13 ttl=57 time=22.325 ms
1480 bytes from <public ip>: icmp_seq=14 ttl=57 time=22.618 ms
1480 bytes from <public ip>: icmp_seq=15 ttl=57 time=22.300 ms
1480 bytes from <public ip>: icmp_seq=16 ttl=57 time=22.345 ms
1480 bytes from <public ip>: icmp_seq=17 ttl=57 time=26.154 ms
1480 bytes from <public ip>: icmp_seq=18 ttl=57 time=22.927 ms
1480 bytes from <public ip>: icmp_seq=19 ttl=57 time=22.446 ms
1480 bytes from <public ip>: icmp_seq=20 ttl=57 time=22.330 ms
^C
--- <public ip> ping statistics ---
21 packets transmitted, 21 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 22.300/23.544/31.586/2.452 ms
root@hydrogen:~ # ping -D -s 1472 <vpn ip>
PING <vpn ip> (<vpn ip>): 1472 data bytes
1480 bytes from <vpn ip>: icmp_seq=0 ttl=64 time=22.225 ms
1480 bytes from <vpn ip>: icmp_seq=1 ttl=64 time=39.159 ms
1480 bytes from <vpn ip>: icmp_seq=2 ttl=64 time=36.652 ms
1480 bytes from <vpn ip>: icmp_seq=3 ttl=64 time=22.489 ms
1480 bytes from <vpn ip>: icmp_seq=4 ttl=64 time=22.522 ms
1480 bytes from <vpn ip>: icmp_seq=5 ttl=64 time=32.174 ms
1480 bytes from <vpn ip>: icmp_seq=6 ttl=64 time=33.556 ms
1480 bytes from <vpn ip>: icmp_seq=7 ttl=64 time=25.334 ms
1480 bytes from <vpn ip>: icmp_seq=8 ttl=64 time=43.357 ms
1480 bytes from <vpn ip>: icmp_seq=9 ttl=64 time=22.895 ms
1480 bytes from <vpn ip>: icmp_seq=10 ttl=64 time=31.934 ms
1480 bytes from <vpn ip>: icmp_seq=11 ttl=64 time=22.631 ms
1480 bytes from <vpn ip>: icmp_seq=12 ttl=64 time=22.621 ms
1480 bytes from <vpn ip>: icmp_seq=13 ttl=64 time=22.674 ms
1480 bytes from <vpn ip>: icmp_seq=14 ttl=64 time=23.029 ms
1480 bytes from <vpn ip>: icmp_seq=15 ttl=64 time=31.082 ms
1480 bytes from <vpn ip>: icmp_seq=16 ttl=64 time=22.682 ms
1480 bytes from <vpn ip>: icmp_seq=17 ttl=64 time=22.664 ms
1480 bytes from <vpn ip>: icmp_seq=18 ttl=64 time=22.542 ms
1480 bytes from <vpn ip>: icmp_seq=19 ttl=64 time=29.180 ms
1480 bytes from <vpn ip>: icmp_seq=20 ttl=64 time=22.479 ms
^C
--- <vpn ip> ping statistics ---
21 packets transmitted, 21 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 22.225/27.328/43.357/6.381 ms

Note that both servers are in a data center (one in Switzerland, one in Holland). It’s not like the webserver is running on some Raspberry Pi over WLAN :stuck_out_tongue:

The root cause is your Switzerland server embedded.simulton.com, which is serving those CSS and JavaScript files terribly slowly, from multiple locations:

From Amsterdam (time out):
https://www.webpagetest.org/result/180228_J9_644f986b1153d6772aafffa30a27776e/

From Paris (time out):
https://www.webpagetest.org/result/180228_65_f7c145be71e6bd22672380e2cb01c464/

From Brussels (time out):
https://www.webpagetest.org/result/180228_ZP_f8c88bed67b3ab84929a980b8e502ee0/

Well, embedded.simulton.com points to the HAproxy. The reason you got a timeout is that I was working on it. It’s a Wordpress site, so it’s not easy to give you access both directly and through the HAproxy, since the Wordpress site relies on the base URL.

The HAproxy is in Switzerland. The webserver is in Holland. The same webserver also serves other websites such as fwtools.embedded.pro (yet another Wordpress).

Right, but there is not much we can do at this point, since without haproxy you have the very same problem, right?

No, I don’t have the problem at all without HAproxy. If I remove the HAproxy and change the DNS record to the web server’s public IP everything loads in less than a second. It’s the same with the Jenkins dashboard and any other web site I tried to route through the HAproxy.

Then there was some kind of misunderstanding here.

When I requested the wget output I meant: do it on the haproxy server, but NOT THROUGH haproxy itself. Instead, from the same system, query the backend directly. This way we are testing from one datacenter to the other, through OpenVPN but without haproxy.

Also please provide:

  • the output of haproxy -vv
  • confirm the system and userspace CPU load on the haproxy box (is it staying low?)
  • logs: the few lines you provided show requests with expected values (below 100 ms); do you see any log entries with spikes in those numbers?

Ah yes, then I misunderstood you indeed - sorry!

Here’s the result when I use wget to download said file, first over the web server’s public IP and then over the VPN. Both download instantly (this is being executed on the HAproxy machine):

root@hydrogen:~ # wget "84.22.111.55/wp-content/themes/rtpanel/style.css?ver=4.2"
--2018-02-28 21:07:53--  http://84.22.111.55/wp-content/themes/rtpanel/style.css?ver=4.2
Connecting to 84.22.111.55:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/css]
Saving to: 'style.css?ver=4.2.3'

style.css?ver=4.2.3                     [ <=>                                                              ]  30.89K  --.-KB/s    in 0.02s

2018-02-28 21:07:54 (1.32 MB/s) - 'style.css?ver=4.2.3' saved [199201]

root@hydrogen:~ # wget "10.8.0.18/wp-content/themes/rtpanel/style.css?ver=4.2"
--2018-02-28 21:08:19--  http://10.8.0.18/wp-content/themes/rtpanel/style.css?ver=4.2
Connecting to 10.8.0.18:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/css]
Saving to: 'style.css?ver=4.2.4'

style.css?ver=4.2.4                     [ <=>                                                              ]  30.89K  --.-KB/s    in 0.04s

2018-02-28 21:08:19 (692 KB/s) - 'style.css?ver=4.2.4' saved [199201]

Everything seems in order there.

Here’s haproxy -vv:

root@hydrogen:~ # haproxy -vv
HA-Proxy version 1.7.9 2017/08/18
Copyright 2000-2017 Willy Tarreau <willy@haproxy.org>

Build options :
  TARGET  = freebsd
  CPU     = generic
  CC      = cc
  CFLAGS  = -O2 -pipe -fstack-protector -fno-strict-aliasing -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv -DFREEBSD_PORTS
  OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_CPU_AFFINITY=1 USE_ACCEPT4=1 USE_REGPARM=1 USE_OPENSSL=1 USE_STATIC_PCRE=1 USE_PCRE_JIT=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.11
Running on zlib version : 1.2.11
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with OpenSSL version : OpenSSL 1.0.2k-freebsd  26 Jan 2017
Running on OpenSSL version : OpenSSL 1.0.2k-freebsd  26 Jan 2017
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.40 2017-01-11
Running on PCRE version : 8.40 2017-01-11
PCRE library supports JIT : yes
Built without Lua support
Built with transparent proxy support using: IP_BINDANY IPV6_BINDANY

Available polling systems :
     kqueue : pref=300,  test result OK
       poll : pref=200,  test result OK
     select : pref=150,  test result OK
Total: 3 (3 usable), will use kqueue.

Available filters :
        [SPOE] spoe
        [TRACE] trace
        [COMP] compression

I can confirm that the CPU usage on the HAproxy machine is never above 0.5% (even when there are running requests) and there are 30 GB of memory free.

I scrolled through a lot of the logs and never found any values higher than what we’ve seen so far. No spikes.

Thank you for your help, I really appreciate it!

Indeed, everything seems alright with those outputs.

Let’s try some simple configuration changes to see if we can get the behavior to change (for the better or the worse):

  • put nokqueue into the global section
  • put both nokqueue and nopoll into the global section
  • change the timeouts: use timeout client 15s and timeout server 30s (a sketch of all three variants follows below)
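
In config terms the three variants would look roughly like this (a sketch; add the no* keywords one test at a time and leave everything else unchanged):

global
        # ... existing global settings stay as they are ...
        nokqueue        # test 1: fall back from kqueue to poll
        nopoll          # test 2: together with nokqueue, fall back to select

defaults
        # ... existing defaults stay as they are ...
        timeout client 15s
        timeout server 30s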

Make sure haproxy is properly restarted (no old haproxy process still running in the background with an old configuration).

If there is no effect with those changes, revert them. We are gonna need the big guns at this point:

  • since you are on FreeBSD 64-bit, strace is not an option. Use truss instead: attach it to the running haproxy process (truss -dfp <PID>) and try to convert the truss output to actual timestamps (unfortunately truss lacks this very obvious feature)
  • capture the frontend haproxy connection, something like tcpdump -pns0 -w frontend.cap host 127.0.0.1 and tcp port 80
  • capture the backend haproxy connection, something like tcpdump -pns0 -w backend-10.8.0.18.cap host 10.8.0.18 and tcp port 80
  • capture the haproxies log output
  • capture the http log of your nginx backend instance

Then make a single wget request through haproxy, and make sure you get all 5 outputs (truss output, frontend capture, backend capture, haproxy logs, backend server’s logs).
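
Something along these lines should collect everything for one request (a sketch; the interface names lo0/tun0, the Host header and the URL are assumptions to adapt, and it expects a single haproxy process):

#!/bin/sh
# collect truss output plus frontend/backend captures for a single
# request made locally through haproxy; run as root on the haproxy box
HAPROXY_PID=$(pgrep -n haproxy)

# syscall trace with relative timestamps; note the wall-clock start time
# so the offsets can be mapped back later
date > truss-start.txt
truss -d -f -p "$HAPROXY_PID" > haproxy.truss 2>&1 &
TRUSS=$!

# frontend capture (the test request arrives over loopback)
tcpdump -pns0 -i lo0 -w frontend.cap host 127.0.0.1 and tcp port 80 &
CAP_FRONT=$!

# backend capture towards the blog server over the VPN interface
tcpdump -pns0 -i tun0 -w backend-10.8.0.18.cap host 10.8.0.18 and tcp port 80 &
CAP_BACK=$!

sleep 2   # give the traces a moment to start

# the single test request through haproxy
wget --header="Host: blog.my.org" -O /dev/null \
        "http://127.0.0.1/wp-content/themes/rtpanel/style.css?ver=4.2"

# stop the traces; the haproxy log and the backend web server's access log
# complete the five outputs
kill "$TRUSS" "$CAP_FRONT" "$CAP_BACK"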

Both systems should be NTP synched, so that we can compare the logs on the haproxy box with those from the backend.


Please excuse the late reply.

In the meantime I set up FreeBSD 11 on a VPS with a 1G/1G internet connection, copied the exact same HAproxy configuration file, and everything works extremely well (even over OpenVPN).

I currently suspect that there’s a network problem with the HAproxy machine, but I’m not yet sure how to pin it down. Even when I SCP files from it to another machine the speed starts dropping with larger files (over about 1 MB), and with very large files it falls below 4 kB/s and SCP reports “- stalled -”.
I’d appreciate any help on how to debug this.

The next thing I’ll test is not having the HAproxy as the OpenVPN server. With the VPS that I setup for testing the HAproxy instance was an OpenVPN client, not the server.

Joel, you have no idea how helpful your thread is for me! I was going crazy over this EXTREMELY slow page load problem. I haven’t tried your advice yet, but I will. Can I ask you questions in case something goes wrong? Thanks again

Everything is working well now. There was a networking issue as suggested by @lukastribus. The problem was the firewall/NAT configuration. This wasn’t HAproxy related at all.

Thank you very much for your help - much appreciated!