How does HAProxy on GCP Kubernetes find new Pods coming up and going down


  I have an existing system I am trying to understand at work. 

HAProxy is set up as the ingress in a GCP Kubernetes cluster and connects to the application server pods.
In its configuration file, the server list keeps changing and updating as pods come up and go down.
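For context, the backend section looks roughly like the sketch below. The backend name, server names, and addresses here are made up for illustration; in my setup the real file is generated by whatever controller is managing HAProxy, and entries appear and disappear as pods are rescheduled:

```
backend app-servers
    balance roundrobin
    # these entries are rewritten automatically as pods come and go
    server pod-10-8-0-12 10.8.0.12:8080 check enabled
    server pod-10-8-0-47 10.8.0.47:8080 check enabled
```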

However, I can't find any documentation of how it actually learns which application servers are up or down.

I am trying to debug why HAProxy sometimes shows pods as enabled even though the corresponding Kubernetes application server pods have already been terminated because they run on preemptible nodes.

Has anyone run this kind of setup? It returns a 500 status code when a server is unavailable but HAProxy still lists it as enabled in its config file.
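To compare what HAProxy believes against what Kubernetes reports, I have been querying HAProxy's runtime API over the stats socket and diffing it against the service endpoints. The socket path and service name below are assumptions for my setup, not anything standard:

```
# haproxy.cfg, global section -- assuming an admin-level stats socket is enabled:
global
    stats socket /var/run/haproxy.sock mode 660 level admin

# then, on the HAProxy pod/host, dump live server state:
#   echo "show servers state" | socat stdio /var/run/haproxy.sock
# and compare against what Kubernetes currently considers ready:
#   kubectl get endpoints app-server-svc
```

When the two disagree (a server marked UP in `show servers state` that is missing from the endpoints list), that seems to match the 500s I'm seeing.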

This also causes a lot of restarts and child-process creation whenever Kubernetes scales the application server pods. Any pointers to documentation or an explanation would be appreciated.