HAProxy sends requests to a backend server that has a disk I/O error: the OS has gone into read-only mode and no command works because of a hardware storage controller failure.
Since the application (Nginx) is still running on the backend server, its port is reachable and its IP is accessible, HAProxy does not consider the server down and keeps sending requests to the faulty server.
Because the application is running, even the httpchk for a sample check.txt file succeeds, so requests to that backend server start failing: the Nginx instance on it cannot actually process new requests.
Is there a way to prevent requests from being sent to a faulty server with a disk I/O error?
Instead of checking a static text file, have the application report its health in an actual dynamic response that HAProxy then checks periodically.
option httpchk, for example, considers 2xx and 3xx responses valid and everything else a failure:
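A minimal sketch of what that could look like, assuming the application exposes a dynamic /health endpoint (the endpoint path and backend name are illustrative, not from the thread):

```
backend web_servers
    # Probe a dynamic endpoint that performs real local checks,
    # instead of fetching a static check.txt file.
    option httpchk GET /health
    # Non-2xx/3xx responses mark the server DOWN.
    server ems1 rcpems01.cdnsrv.ril.com:80 check port 8089
```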
Does that mean checking the response code of the Nginx application periodically using httpchk?
Since the application is running, it will always return 200.
Will that address the server disk issue?
HAProxy will not magically address your server disk issue.
As I said, your application has to check and report its own health, and the return code is one of the ways to do it.
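As an illustration of such an application-side check (this is a sketch, not from the thread: the data directory and check port 8089 are assumptions), a health endpoint could attempt a real write to the data disk and return 503 when the filesystem has gone read-only, which HAProxy's httpchk then treats as DOWN:

```python
import os
import tempfile
from http.server import BaseHTTPRequestHandler, HTTPServer

def disk_writable(directory="/var/lib/app"):  # assumed data directory
    """Return True only if an actual write to the disk succeeds.

    On a read-only filesystem or a failing storage controller the
    write raises OSError, so the check fails even though the process
    itself is still up and its port still accepts connections.
    """
    try:
        with tempfile.NamedTemporaryFile(dir=directory) as f:
            f.write(b"healthcheck")
            f.flush()
            os.fsync(f.fileno())  # force the write through to the device
        return True
    except OSError:
        return False

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # 200 when the disk accepts writes, 503 otherwise; httpchk
        # considers 2xx/3xx healthy and everything else a failure.
        self.send_response(200 if disk_writable() else 503)
        self.end_headers()

# To serve on the assumed check port:
#     HTTPServer(("", 8089), HealthHandler).serve_forever()
```

HAProxy would then be pointed at this port with `check port 8089`, matching the existing server lines.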
It is not possible for HAProxy itself to identify a disk I/O error on the backend server. Instead, it checks the HTTP response from the backend server: if it gets a valid response, the server is considered UP, otherwise DOWN.
Agreed, HAProxy will not be able to detect the I/O error itself.
But in the case of an I/O error the application is still running on that port and IP.
Is there a way HAProxy can send a POST request instead of a GET to the backend server?
Running a script on the backend server to check for an I/O error and stop the application does not work either, because during an I/O error no command works on the server.
option httpchk HEAD /check.txt HTTP/1.0
server ems1 rcpems01.cdnsrv.ril.com:80 redir http://rcpems01.cdnsrv.ril.com:80/ check port 8089
server ems2 rcpems02.cdnsrv.ril.com:80 redir http://rcpems02.cdnsrv.ril.com:80/ check port 8089
It's a bad idea and you should do what we proposed instead: provide an application endpoint that does local checks (I/O or DB checks, for example).
But yes, I suppose you can do it; just construct a proper POST request:
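A sketch of a POST-based check, assuming HAProxy 2.2 or later (where `http-check send` is available; on older versions the request line goes directly into `option httpchk`), with /health as an assumed endpoint path:

```
backend web_servers
    option httpchk
    # Build the check request explicitly: method, URI, headers, body.
    http-check send meth POST uri /health hdr Host rcpems01.cdnsrv.ril.com hdr Content-Type application/json body "{\"probe\":true}"
    http-check expect status 200
    server ems1 rcpems01.cdnsrv.ril.com:80 check port 8089
    server ems2 rcpems02.cdnsrv.ril.com:80 check port 8089
```

Note the method alone changes nothing: the endpoint behind it still has to perform real local checks, otherwise a POST will succeed exactly as the GET did.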