I have a multi-tier load balancing setup, and to make sure the “downstream” (second-in-line) haproxy instances are still working after a config change (which happens frequently; this is a multi-tenanted SaaS environment), the reload process for each haproxy instance involves running some sanity checks against the newly-reloaded instance before it is returned to service.
To implement this, I have a “healthcheck” frontend which is created with “disabled” set. This is the frontend that the “upstream” load balancer hits to decide whether to include the downstream instance in the rotation. The theory is that once the sanity checks complete successfully, the instance can have its “healthcheck” frontend enabled, and life goes on.
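For reference, the healthcheck frontend amounts to little more than the following (the name, port, and monitor-uri here are illustrative, not my exact config):

    frontend healthcheck
        bind :8080
        disabled                    # out of service until the sanity checks pass
        monitor-uri /haproxy-up     # what the upstream load balancer polls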
However, this doesn’t work in practice. A frontend which is created as “disabled” cannot be enabled. Any attempt to enable the frontend reports, “Frontend was previously shut down, cannot enable”. But… I only disabled it… I didn’t shut it down!
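Concretely, the step that fails is the runtime enable over the stats socket, along these lines (socket path illustrative):

    $ echo "enable frontend healthcheck" | socat stdio /var/run/haproxy.sock
    Frontend was previously shut down, cannot enable.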
I’m fairly certain that the behaviour is unintentional, and the bug looks straightforward to fix:
--- a/src/cfgparse.c
+++ b/src/cfgparse.c
@@ -2469,7 +2469,7 @@ int cfg_parse_listen(const char *file, int linenum, char **args, int kwm)
 	}
 	else if (!strcmp(args[0], "disabled")) { /* disables this proxy */
-		curproxy->state = PR_STSTOPPED;
+		curproxy->state = PR_STPAUSED;
 	}
 	else if (!strcmp(args[0], "enabled")) { /* enables this proxy (used to revert a disabled default) */
 		curproxy->state = PR_STNEW;
The code being changed goes back to the dawn of time; my guess is simply that nobody in their right mind does things the way I do them…
Anyhoo, I’d be interested in seeing this behaviour changed in 1.6, so I can stop carrying this local patch, or alternatively some pointers on what I’m doing wrong and how I can achieve the same result some other way.