When there is a GET request with a large query string, part of that query is not getting logged in the HAProxy logs.
I have tried tune.http.logurilen in the global section, increasing it from the default of 1024 to 2048, but there has been no change; the query in the logs is cut off at the same place.
I tried a few other GET requests too; it happens with all of them.
Please note: the query reaches the server intact; the problem is only with logging it completely in the HAProxy logs.
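For reference, the relevant part of my global section looks roughly like this (the syslog address is a placeholder):

```
global
    log 127.0.0.1:514 local0
    # raised from the default of 1024, but the truncation point did not move
    tune.http.logurilen 2048
```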
The documentation for tune.http.logurilen notes:

If you increase this limit, you may also increase the 'log ... len yyy' parameter. Your syslog daemon may also need specific configuration directives too.
You need to increase the len parameter of the log keyword, which defaults to 1024:
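A minimal sketch of the relevant global section (the syslog address and sizes are illustrative):

```
global
    # length of the URI captured for logging (default 1024)
    tune.http.logurilen 2048
    # maximum length of a syslog line emitted by HAProxy (default 1024)
    log 127.0.0.1:514 len 2048 local0
```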
Many thanks for the information; after setting the log len parameter, the logging has improved.
However, when the query is too large (in my case the query sent via GET/POST is around 200 KB), HAProxy immediately returns a 400 error and does not forward the request to the backend server.
If you could let me know what parameter I should tweak to accommodate such a big query without getting a 400 error, it would be extremely helpful.
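The kind of change I have been experimenting with looks roughly like this (the exact values are my own guesses, scaled up to fit a 200 KB request line):

```
global
    # default is 16384; the whole request line plus headers must fit in one buffer
    tune.bufsize 262144
    tune.http.logurilen 262144
    log 127.0.0.1:514 len 262144 local0
```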
Those are insane values; running this in production will probably crash HAProxy quickly. Always read the documentation before you use config knobs.
You can use show errors on the admin socket to see why a 400 error is emitted.
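For example, assuming a stats socket is configured in the global section with `stats socket /var/run/haproxy.sock level admin`:

```
echo "show errors" | socat stdio unix-connect:/var/run/haproxy.sock
```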
That can't be right; exactly which browser allows you to send a 200 KB long query string? This is an extremely bad idea; you need to put this data in the request payload instead.
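For illustration, the same query sent in the request body instead of the URI (the URL and file name are placeholders):

```
curl -X POST https://example.com/query \
     -H "Content-Type: application/json" \
     --data-binary @query.json
```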