HAProxy and Odoo


Hi, I’m using HAProxy for an Odoo installation. It is based on Docker and I have three containers: one for the database, one for Odoo, and one for HAProxy.

I’m receiving this error:

4/7/2017 08:36:14 Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/openerp/service/server.py", line 721, in run
    self.process_work()
  File "/usr/lib/python2.7/dist-packages/openerp/service/server.py", line 754, in process_work
    self.process_request(client, addr)
  File "/usr/lib/python2.7/dist-packages/openerp/service/server.py", line 745, in process_request
    self.server.process_request(client, addr)
  File "/usr/lib/python2.7/SocketServer.py", line 321, in process_request
    self.finish_request(request, client_address)
  File "/usr/lib/python2.7/SocketServer.py", line 334, in finish_request
    self.RequestHandlerClass(request, client_address, self)
  File "/usr/lib/python2.7/SocketServer.py", line 657, in __init__
    self.finish()
  File "/usr/lib/python2.7/SocketServer.py", line 716, in finish
    self.wfile.close()
  File "/usr/lib/python2.7/socket.py", line 279, in close
    self.flush()
  File "/usr/lib/python2.7/socket.py", line 303, in flush
    self._sock.sendall(view[write_offset:write_offset+buffer_size])
timeout: timed out

I traced it and it seems to be related to HAProxy health checks. I saw a similar problem here:


But I don’t know how to configure HAProxy to do the right health checks, or how to disable them.

Can someone help me, please?



We can help you with HAProxy issues, but we are unable to help with containers, configuration orchestration, etc.

So if you are working on the haproxy.cfg configuration, just remove the "check" argument on the server line to disable health checks.
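For example, a backend section without health checks could look like this. This is only a sketch: the backend name, server name, and the `odoo:8069` address are assumptions based on a typical Docker setup, not taken from your configuration.

```
# Minimal sketch; backend/server names and the address are illustrative.
backend odoo
    mode http
    # No "check" keyword on the server line, so HAProxy sends no
    # health-check probes to the Odoo container.
    server odoo1 odoo:8069
```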

Or, as a workaround, you can try to have the health check consume the response:
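The workaround itself isn’t quoted here, but one standard HAProxy directive with a similar effect is `option httpchk`, which makes the probe send a real HTTP request and read the response instead of opening and immediately dropping a TCP connection, which is what makes Odoo’s `sendall()` time out. A sketch, again with assumed names and the default Odoo port 8069:

```
# Sketch only; backend/server names, address, and check URL are assumptions.
backend odoo
    mode http
    # Probe with a real HTTP request so the backend's response is
    # consumed rather than abandoned mid-write.
    option httpchk GET /web/login
    server odoo1 odoo:8069 check inter 5s
```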

But the real solution is to fix the backend bug.


Hi, thanks a lot for your answer; I’ll look into this.