Isn't a GSLB useless for georouting?

Dear community,

I have been reading about GSLBs and I fail to understand how one would benefit georouting with unicast addresses, given the following setup:

  1. Assume there to be a website W.
  2. Let there be 3 replicated servers hosting W, located at locations A, B, and C.
  3. Let the GSLB be located at location B.
  4. Finally, let the nameservers of W be located at location B as well.

Traffic route

  1. User 1 at location A makes a request to website W.
  2. The request hits the nameservers at location B, which forward it to the GSLB at location B.
  3. Then, based on geolocation, the GSLB at location B sends the request to the server at location A.
  4. Then the answer goes back to the GSLB at location B.
  5. Then it is sent to user 1 at location A.

The problem is this: even though the request is sent to the server closest to A, in the end all the traffic is still routed through location B. Doesn’t this defeat the whole point of reducing latency by serving from the nearest server?

Hopefully someone can confirm this, tell me that I am wrong, or suggest a better solution to this problem.

Cheers!!!

I just realised after writing a long response that I missed one word in your question: ‘unicast’!
You need public IPs for each unique endpoint for GSLB.

If you have real public endpoints, then:

My usual answer would be yes, but…

First of all, a GSLB is just a DNS server with health checks plus some routing logic.
So in your scenario it is only the initial DNS request that hops around the sites.
The actual HTTP request goes directly to the site/server it has been told to use… AND that is the bit you are trying to speed up or send to the right place.
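
A rough client-side sketch of that split (the hostname and addresses here are made up): the only round trip that touches the GSLB at location B is the DNS lookup, and the HTTP connection is then opened directly to whatever IP the GSLB answered with.

```python
import socket
import urllib.request

HOSTNAME = "www.example-w.com"  # hypothetical name for website W

# Step 1: DNS resolution. This query (via your resolver) ends up at the GSLB
# at location B, which picks an address based on where the query appears to
# come from.
addr_info = socket.getaddrinfo(HOSTNAME, 443, proto=socket.IPPROTO_TCP)
server_ip = addr_info[0][4][0]
print(f"GSLB answered with: {server_ip}")  # e.g. the server at location A

# Step 2: the HTTP(S) request is a brand-new TCP connection opened directly
# to that server -- location B is not on this path at all.
response = urllib.request.urlopen(f"https://{HOSTNAME}/")
print(response.status)
```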

Most people think GSLB is used primarily for closest geolocation…
But why would you not use Cloudflare for that?

What GSLB is awesome for is topology-based routing between data centers. You can get away with one GSLB as in your example, but usually you have 2 GSLBs at each site, and they ALL have an identical configuration, i.e. they all have the same list of source subnets that they associate with each data center. So when a client from data center A requests the application, it is always sent to a server (or an HAProxy with a bunch of servers behind it) in data center A.

Ps. If you don’t know all the subnets, you can turn off EDNS, make sure local clients use the local GSLB, and use the IP address of the GSLB as your topology indicator.

All of which can save you a ton of money in WAN costs, etc.
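
Here is a rough sketch of that topology logic in Python (the subnets, pools and function name are invented for illustration); the same map would live on every GSLB, and the docstring covers the no-EDNS case where the local GSLB’s own address stands in for the client subnet.

```python
# Each GSLB carries the same map of source subnets -> data center, so a query
# coming from DC A's subnets always gets an answer from DC A's pool.
import ipaddress

TOPOLOGY_MAP = {
    "dc_a": [ipaddress.ip_network("10.1.0.0/16")],
    "dc_b": [ipaddress.ip_network("10.2.0.0/16")],
    "dc_c": [ipaddress.ip_network("10.3.0.0/16")],
}

POOLS = {                    # healthy endpoints per data center
    "dc_a": ["10.1.0.10"],   # e.g. an HAProxy VIP in DC A
    "dc_b": ["10.2.0.10"],
    "dc_c": ["10.3.0.10"],
}

def pick_answer(query_source_ip: str, default_dc: str = "dc_b") -> str:
    """Return an endpoint from the data center whose subnets contain the
    query source. With EDNS turned off, query_source_ip is the address of
    the local GSLB/resolver, which works just as well as a topology
    indicator as long as clients use their local resolver."""
    src = ipaddress.ip_address(query_source_ip)
    for dc, subnets in TOPOLOGY_MAP.items():
        if any(src in net for net in subnets):
            return POOLS[dc][0]
    return POOLS[default_dc][0]

print(pick_answer("10.1.42.7"))  # client in DC A -> 10.1.0.10
```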

Obviously all of the endpoints are regularly health checked.
And naturally it automatically supports data center HA.
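
For completeness, a minimal sketch of the health-check side (endpoints and port are made up): only addresses that currently pass the check stay in the pool the GSLB answers from, which is where the data center failover comes from.

```python
import socket

ENDPOINTS = ["10.1.0.10", "10.2.0.10", "10.3.0.10"]  # one VIP per data center

def is_healthy(ip: str, port: int = 443, timeout: float = 2.0) -> bool:
    """Bare TCP connect check; real GSLBs usually do HTTP/HTTPS checks too."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

healthy_pool = [ip for ip in ENDPOINTS if is_healthy(ip)]
print(f"answering DNS queries from pool: {healthy_pool}")
```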

For external traffic you generally don’t care where it goes, and would probably route it through Cloudflare anyway…

And for really BIG sites (think object storage) you can either horizontally scale lots of HAProxy clusters… or get rid of all the reverse proxies and use a feedback agent to dynamically tell the GSLB how much traffic to send to each server, which I call direct-to-node GSLB.
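
A rough sketch of that direct-to-node idea (node addresses, load numbers and the weighting scheme are all invented): the agents report how loaded each node is, and the GSLB weights its DNS answers towards the nodes with the most spare capacity.

```python
import random

# Latest reports from the feedback agents: node IP -> load between 0.0 and 1.0
NODE_LOAD = {
    "10.1.0.21": 0.20,
    "10.1.0.22": 0.75,
    "10.1.0.23": 0.40,
}

def pick_node() -> str:
    """Weighted choice: weight = spare capacity, so a node at 75% load is
    handed out far less often than one at 20% load."""
    nodes = list(NODE_LOAD)
    weights = [max(1.0 - NODE_LOAD[n], 0.05) for n in nodes]  # keep a floor
    return random.choices(nodes, weights=weights, k=1)[0]

print(pick_node())
```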

Ps. We use the open-source Polaris for our GSLB - works great!