Deadlock in one of the recent commits?

Anyone else noticed a problem with this? (It's only used for dev/testing, I know it's not a stable release.)

GDB shows HAProxy using 100% CPU and waiting here…

(gdb) backtrace
#0 0x00005565860abb8a in pendconn_redistribute (s=s@entry=0x55658703c480) at src/queue.c:338
#1 0x0000556586013eca in srv_update_status (s=s@entry=0x55658703c480) at src/server.c:4575
#2 0x00005565860157bc in srv_set_stopped (s=0x55658703c480, reason=reason@entry=0x0, check=<optimized out>) at src/server.c:940
#3 0x0000556586015841 in srv_set_stopped (s=<optimized out>, reason=reason@entry=0x0, check=<optimized out>) at src/server.c:916
#4 0x000055658604a2c2 in check_notify_failure (check=check@entry=0x55658703c8c8) at src/checks.c:319
#5 0x00005565860529f8 in process_chk_conn (state=<optimized out>, context=0x55658703c8c8, t=0x556587524060) at src/checks.c:2216
#6 process_chk (t=0x556587524060, context=0x55658703c8c8, state=<optimized out>) at src/checks.c:2310
#7 0x00005565860aaff7 in process_runnable_tasks () at src/task.c:384
#8 0x00005565860594d1 in run_poll_loop () at src/haproxy.c:2386
#9 run_thread_poll_loop (data=<optimized out>) at src/haproxy.c:2451
#10 0x0000556585fb40fe in main (argc=<optimized out>, argv=0x7ffe9c293578) at src/haproxy.c:3053

CC’ing @willy @capflam

There’s also a comment on the GitHub commit page regarding the end of srv_set_stopping here:

Hi Lukas!

Yes, I’ve noticed this one. The reporter also sent me some private emails, unfortunately without any config, but I figured out what happens in the code. He very likely has a “track” directive in his config, so a state change is propagated over multiple servers, resulting in thread_isolate()/thread_release() being called recursively and thus breaking the rendez-vous point.
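
For reference, “track” makes one server reuse another server’s check status, so a state change on the tracked server has to be propagated to all its trackers. A minimal config of that shape (server names and addresses made up) would look like:

    backend app
        server web1 192.168.0.11:80 check
        server web2 192.168.0.12:80 track app/web1

When web1’s check fails, srv_set_stopped() is entered again for web2 through the tracking chain, which is consistent with the nested srv_set_stopped() frames in the backtrace above.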

I’m seeing two possible fixes for this: the easy but ugly one is to use two different code paths, depending on thread_isolated(), to decide whether or not to isolate/release. The clean one consists of implementing a thread-local counter for the isolation. I actually think this one would be clean because we wouldn’t need to be that careful anymore; we’d naturally support recursion.
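
A minimal sketch of the counter approach, with stand-in functions for the actual rendez-vous point (all names here are hypothetical, not HAProxy’s real internals):

    static __thread unsigned int isolate_depth;  /* per-thread nesting level */

    static void rendezvous_enter(void) { /* park all other threads */ }
    static void rendezvous_leave(void) { /* let the other threads resume */ }

    void thread_isolate(void)
    {
        /* only the outermost call performs the actual rendez-vous */
        if (isolate_depth++ == 0)
            rendezvous_enter();
    }

    void thread_release(void)
    {
        /* only the matching outermost release resumes the other threads */
        if (--isolate_depth == 0)
            rendezvous_leave();
    }

Nested isolate/release pairs then become no-ops, so recursion through the tracking chain is naturally handled.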

While analysing this, I just found (and fixed) an inconsistency caused by the switch back to synchronous changes: the server updates were applied after the propagation and not before. This was apparently done to simplify the code. Now it causes the tracking servers to see incomplete updates and to remain down while the tracked server is up. I’ve moved this update back to its place before the loop.
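
In a simplified sketch (made-up names, a single tracking chain), the fix amounts to committing the server’s own state before the propagation loop:

    struct server {
        int up;                    /* committed operational state */
        struct server *tracker;   /* next server tracking this one */
    };

    void srv_change_state(struct server *s, int up)
    {
        s->up = up;                /* commit the update first... */

        /* ...then propagate, so trackers copy a complete state */
        for (struct server *t = s->tracker; t; t = t->tracker)
            t->up = s->up;
    }

With the old ordering (propagation loop first, commit after), each tracker copied a stale state, which is how a tracker could remain down while the tracked server was up.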

I’m now checking if I can reliably reproduce the deadlock to be certain I fix it correctly.

In fact it’s not exactly this, since there doesn’t seem to be any recursive call while the thread is isolated inside the srv_* code. However, what I’m seeing is that we may call pendconn_redistribute(), which grabs the server’s lock, and this code is called with the server’s lock already held. This could be the problem. I need to check if it still makes sense to take this lock there now.
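
That scenario is consistent with the 100% CPU in the backtrace: a non-recursive spinlock re-taken by the same thread just spins forever. A standalone illustration (plain pthreads, not HAProxy code):

    #include <pthread.h>

    static pthread_spinlock_t srv_lock;

    static void redistribute(void)
    {
        pthread_spin_lock(&srv_lock);    /* second acquisition: spins forever */
        /* ... requeue pending connections ... */
        pthread_spin_unlock(&srv_lock);
    }

    static void update_status(void)
    {
        pthread_spin_lock(&srv_lock);    /* first acquisition succeeds */
        redistribute();                  /* never returns */
        pthread_spin_unlock(&srv_lock);
    }

    int main(void)
    {
        pthread_spin_init(&srv_lock, PTHREAD_PROCESS_PRIVATE);
        update_status();
        return 0;
    }

Building this with -lpthread and running it pins one core at 100%, much like the behaviour reported above.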

Ah, indeed, while testing some potential patches I ended up with the same kind of issue (the server remaining down).

I can easily test/simulate the lock with my configuration if needed.

Francis, if you have a simple config showing the bug in a reproducible manner, I would really appreciate it. I have some ideas about how to deal with it, but all of them are quite complex, and before spending time on one of them I would like to be sure I’m not missing part of the problem.
Thanks!

For those not on the ML: fix uploaded.