Unexpected gRPC stream resets (CD + CANCELLED) during scale‑out

Hello,
I’m using HAProxy Community Edition (3.3-dev14-8418c00) in OpenShift (4.X) to load-balance gRPC (HTTP/2) traffic to backend pods via server-template and A‑record DNS discovery.

During Pod scale‑out, I consistently see:

  • HAProxy access-log entries with termination flags CD
  • gRPC server-side errors: “Context was CANCELLED”

As I see it:

  • OpenShift adds a new pod and the DNS A records expand from (IP1) → (IP1, IP2)
  • HAProxy assigns the new pod's IP to the lowest free server-template slot
  • the slot transitions from MAINT to READY
  • Immediately afterwards (within milliseconds to seconds), HAProxy logs CD and some gRPC streams are cancelled (the commands I use to observe this are sketched below)
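For reference, this is roughly how I watch the transition while scaling out (just a sketch; the DNS name and admin socket path are the ones from my config below, and I rely on the runtime API's show servers state command):

# watch the service A records expand while the Deployment scales out
nslookup grpc-items-api.team-delta.svc.cluster.local

# watch the server-template slot go MAINT -> READY via the runtime API
printf "show servers state grpc_servers\n" | socat /tmp/haproxy-admin.sock stdio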

Before scaling:

slots:

printf "show stat\n" | socat /tmp/haproxy-admin.sock stdio
grpc_service,FRONTEND,,,0,0,4096,0,0,0,0,0,0,,,,,OPEN,,,,,,,,,1,2,0,,,,0,0,0,0,,,,0,0,0,0,0,0,,0,0,0,,,0,0,0,0,,,,,,,,,,,,,,,,,,,,,http,,0,0,0,0,0,0,0,,,0,0,,,,,,,0,,,,,,,,,,0,0,0,0,0,0,0,0,,,0,0,0,0,-,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
grpc_servers,items-api1,0,0,0,0,,0,0,0,,0,,0,0,0,0,UP,1,1,0,1,0,2712,0,,1,3,1,,0,,2,0,,0,L4OK,,0,0,0,0,0,0,0,,,,0,0,0,,,,,-1,,,0,0,0,0,,,,Layer4 check passed,,2,3,4,,,,x.x.x.145:29090,,http,,,,,,,,0,0,0,,,0,,0,0,0,0,0,0,0,0,1,1,,,,0,,,,,,,,,,0,0,0,0,0,-,0,0,0,0,0,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,
grpc_servers,items-api2,0,0,0,0,,0,0,0,,0,,0,0,0,0,MAINT (resolution),1,1,0,0,1,2712,2712,,1,3,2,,0,,2,0,,0,,,,0,0,0,0,0,0,,,,0,0,0,,,,,-1,,,0,0,0,0,,,,,,,,,,,,,,http,,,,,,,,0,0,0,,,0,,0,0,0,0,0,0,0,0,0,1,,,,0,,,,,,,,,,0,0,0,0,0,-,0,0,0,0,0,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,
grpc_servers,items-api3,0,0,0,0,,0,0,0,,0,,0,0,0,0,MAINT (resolution),1,1,0,0,1,2712,2712,,1,3,3,,0,,2,0,,0,,,,0,0,0,0,0,0,,,,0,0,0,,,,,-1,,,0,0,0,0,,,,,,,,,,,,,,http,,,,,,,,0,0,0,,,0,,0,0,0,0,0,0,0,0,0,1,,,,0,,,,,,,,,,0,0,0,0,0,-,0,0,0,0,0,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,
grpc_servers,items-api4,0,0,0,0,,0,0,0,,0,,0,0,0,0,MAINT (resolution),1,1,0,0,1,2712,2712,,1,3,4,,0,,2,0,,0,,,,0,0,0,0,0,0,,,,0,0,0,,,,,-1,,,0,0,0,0,,,,,,,,,,,,,,http,,,,,,,,0,0,0,,,0,,0,0,0,0,0,0,0,0,0,1,,,,0,,,,,,,,,,0,0,0,0,0,-,0,0,0,0,0,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,

haproxy.cfg

global
  maxconn 4096
  log stdout format rfc5424 local0 info
  stats socket /tmp/haproxy-admin.sock mode 600 level admin expose-fd listeners

defaults
  mode http
  log global
  option dontlognull
  option log-separate-errors
  timeout connect         5s
  timeout client          30s
  timeout server          30s

frontend grpc_service
  bind :29090 ssl crt /certs alpn h2
  option httplog
  default_backend grpc_servers

resolvers k8s
  parse-resolv-conf
  resolve_retries 10
  timeout resolve 5s
  timeout retry 5s
  accepted_payload_size 8192

backend grpc_servers
  balance leastconn
  option redispatch
  retry-on all-retryable-errors
  server-template items-api 4 grpc-items-api.team-delta.svc.cluster.local:29090 resolvers k8s init-addr libc,none check proto h2

During and after scaling:

haproxy.logs

[WARNING]  (8) : grpc_servers/items-api2: IP changed from '(none)' to 'x.x.x.218' by 'DNS cache'.
<133>1 2026-04-02T07:52:50.344869+00:00 - haproxy 8 - - grpc_servers/items-api2: IP changed from '(none)' to 'x.x.x.218' by 'DNS cache'.
<133>1 2026-04-02T07:52:50.344900+00:00 - haproxy 8 - - Server grpc_servers/items-api2 ('grpc-items-api.team-delta.svc.cluster.local') is UP/READY (resolves again).
<133>1 2026-04-02T07:52:50.344911+00:00 - haproxy 8 - - Server grpc_servers/items-api2 administratively READY thanks to valid DNS answer.
[WARNING]  (8) : Server grpc_servers/items-api2 ('grpc-items-api.team-delta.svc.cluster.local') is UP/READY (resolves again).
[WARNING]  (8) : Server grpc_servers/items-api2 administratively READY thanks to valid DNS answer.
<134>1 2026-04-02T07:52:52.182887+00:00 - haproxy 8 - - x.x.x.2:52274 [02/Apr/2026:07:52:51.629] grpc_service~ grpc_servers/items-api1 0/0/0/547/553 200 156845 - - ---- 20/20/16/7/0 0/0 "POST https://grpc-items-api-team-delta.ingress.eur02.ocp.foo.net/delta.itemsapi.v1.ItemsService/GetItems HTTP/2.0"
<134>1 2026-04-02T07:52:52.573637+00:00 - haproxy 8 - - x.x.x.2:38158 [02/Apr/2026:07:52:50.505] grpc_service~ grpc_servers/items-api2 0/0/0/2067/2067 200 135 - - ---- 20/20/16/8/0 0/0 "POST https://grpc-items-api-team-delta.ingress.eur02.ocp.foo.net/delta.itemsapi.v1.ItemsService/GetItems HTTP/2.0"
<134>1 2026-04-02T07:52:52.583689+00:00 - haproxy 8 - - x.x.x.2:52274 [02/Apr/2026:07:52:52.540] grpc_service~ grpc_servers/items-api1 0/0/0/42/42 200 4785 - - ---- 20/20/14/6/0 0/0 "POST https://grpc-items-api-team-delta.ingress.eur02.ocp.foo.net/delta.itemsapi.v1.ItemsService/GetItems HTTP/2.0"
 ...                                                    
<134>1 2026-04-02T07:53:28.281903+00:00 - haproxy 8 - - x.x.x.2:38134 [02/Apr/2026:07:53:28.258] grpc_service~ grpc_servers/items-api2 0/0/0/23/23 200 32177 - - ---- 20/20/8/5/0 0/0 "POST https://grpc-items-api-team-delta.ingress.eur02.ocp.foo.net/delta.itemsapi.v1.ItemsService/GetItems HTTP/2.0"
<134>1 2026-04-02T07:53:28.285081+00:00 - haproxy 8 - - x.x.x.2:38086 [02/Apr/2026:07:53:28.242] grpc_service~ grpc_servers/items-api1 0/0/0/42/42 200 31810 - - ---- 20/20/6/1/0 0/0 "POST https://grpc-items-api-team-delta.ingress.eur02.ocp.foo.net/delta.itemsapi.v1.ItemsService/GetItems HTTP/2.0"
<131>1 2026-04-02T07:53:28.310394+00:00 - haproxy 8 - - x.x.x.2:38074 [02/Apr/2026:07:53:27.691] grpc_service~ grpc_servers/items-api2 0/0/0/-1/619 400 0 - - CD-- 20/20/5/4/0 0/0 "POST https://grpc-items-api-team-delta.ingress.eur02.ocp.foo.net/delta.itemsapi.v1.ItemsService/GetItems HTTP/2.0"
<131>1 2026-04-02T07:53:28.310442+00:00 - haproxy 8 - - x.x.x.2:38116 [02/Apr/2026:07:53:28.159] grpc_service~ grpc_servers/items-api2 0/0/0/-1/151 400 0 - - CD-- 18/18/3/2/0 0/0 "POST https://grpc-items-api-team-delta.ingress.eur02.ocp.foo.net/delta.itemsapi.v1.ItemsService/GetItems HTTP/2.0"
<131>1 2026-04-02T07:53:28.310455+00:00 - haproxy 8 - - x.x.x.2:38108 [02/Apr/2026:07:53:28.260] grpc_service~ grpc_servers/items-api2 0/0/0/-1/49 400 0 - - CD-- 18/18/3/2/0 0/0 "POST https://grpc-items-api-team-delta.ingress.eur02.ocp.foo.net/delta.itemsapi.v1.ItemsService/GetItems HTTP/2.0"
<134>1 2026-04-02T07:53:28.474067+00:00 - haproxy 8 - - x.x.x.2:52288 [02/Apr/2026:07:53:28.412] grpc_service~ grpc_servers/items-api1 0/0/0/61/61 200 646 - - ---- 12/12/7/3/0 0/0 "POST https://grpc-items-api-team-delta.ingress.eur02.ocp.foo.net/delta.itemsapi.v1.ItemsService/GetItems HTTP/2.0"
<134>1 2026-04-02T07:53:28.482672+00:00 - haproxy 8 - - x.x.x.2:52278 [02/Apr/2026:07:53:28.418] grpc_service~ grpc_servers/items-api2 0/0/0/63/64 200 24872 - - ---- 18/18/6/3/0 0/0 "POST https://grpc-items-api-team-delta.ingress.eur02.ocp.foo.net/delta.itemsapi.v1.ItemsService/GetItems HTTP/2.0"
...                                                     
<134>1 2026-04-02T07:54:04.957568+00:00 - haproxy 8 - - x.x.x.2:52270 [02/Apr/2026:07:54:04.904] grpc_service~ grpc_servers/items-api2 0/0/0/53/53 200 25646 - - ---- 20/20/2/0/0 0/0 "POST https://grpc-items-api-team-delta.ingress.eur02.ocp.foo.net/delta.itemsapi.v1.ItemsService/GetItems HTTP/2.0"
<134>1 2026-04-02T07:54:04.986040+00:00 - haproxy 8 - - x.x.x.2:35248 [02/Apr/2026:07:54:04.927] grpc_service~ grpc_servers/items-api1 0/0/0/57/58 200 27708 - - ---- 20/20/3/1/0 0/0 "POST https://grpc-items-api-team-delta.ingress.eur02.ocp.foo.net/delta.itemsapi.v1.ItemsService/GetItems HTTP/2.0"
<131>1 2026-04-02T07:54:05.011431+00:00 - haproxy 8 - - x.x.x.2:52262 [02/Apr/2026:07:54:04.973] grpc_service~ grpc_servers/items-api2 0/0/0/-1/38 400 0 - - CD-- 20/20/3/1/0 0/0 "POST https://grpc-items-api-team-delta.ingress.eur02.ocp.foo.net/delta.itemsapi.v1.ItemsService/GetItems HTTP/2.0"
<131>1 2026-04-02T07:54:05.011642+00:00 - haproxy 8 - - x.x.x.2:35254 [02/Apr/2026:07:54:04.937] grpc_service~ grpc_servers/items-api1 0/0/0/-1/74 400 0 - - CD-- 12/12/2/1/0 0/0 "POST https://grpc-items-api-team-delta.ingress.eur02.ocp.foo.net/delta.itemsapi.v1.ItemsService/GetItems HTTP/2.0"
<134>1 2026-04-02T07:54:05.060790+00:00 - haproxy 8 - - x.x.x.2:55860 [02/Apr/2026:07:54:04.959] grpc_service~ grpc_servers/items-api2 0/0/0/100/101 200 225162 - - ---- 9/9/1/0/0 0/0 "POST https://grpc-items-api-team-delta.ingress.eur02.ocp.foo.net/delta.itemsapi.v1.ItemsService/GetItems HTTP/2.0"
<134>1 2026-04-02T07:54:05.066897+00:00 - haproxy 8 - - x.x.x.2:55814 [02/Apr/2026:07:54:04.991] grpc_service~ grpc_servers/items-api1 0/0/0/72/74 200 113857 - - ---- 9/9/0/0/0 0/0 "POST https://grpc-items-api-team-delta.ingress.eur02.ocp.foo.net/delta.itemsapi.v1.ItemsService/GetItems HTTP/2.0"

The first error on the backend side suggests that HAProxy cancelled the request:

Apr 2, 2026 @ 09:52:01.868 (null) gRPC Context was already CANCELLED

Slot state after scaling:

 printf "show stat\n" | socat /tmp/haproxy-admin.sock stdio
grpc_service,FRONTEND,,,20,23,4096,20,1201842,243424633,0,0,0,,,,,OPEN,,,,,,,,,1,2,0,,,,0,0,0,20,,,,0,3274,0,0,0,0,,60,61,3283,,,0,0,0,0,,,,,,,,,,,,,,,,,,,,,http,,0,20,20,0,0,0,0,,,0,0,,,,,,,0,,,,,,,,,,0,0,20,0,0,0,3283,0,,,1201842,1201842,243424633,243424633,-,20,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3283,3343,145,20,0,0,0,0,0,20,9,20,3283,0,0,0,0,0,0,0,0,
grpc_servers,items-api1,0,0,4,20,,2473,906114,188561207,,0,,0,0,4,0,UP,1,1,0,1,0,3204,0,,1,3,1,,2471,,2,28,,40,L4OK,,1,0,2467,0,0,0,0,,,,2467,20,0,,,,,0,,,0,95,2701,6411,,,,Layer4 check passed,,2,3,4,,,,x.x.x.145:29090,,http,,,,,,,,0,110,2363,,,4,,0,60031,27881,91629,0,0,4,4,8,1,,,,0,,,,,,,,,,0,906114,906114,188561207,188561207,-,0,0,0,0,0,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,
grpc_servers,items-api2,0,0,5,10,,812,295728,54863426,,0,,0,0,0,0,UP,1,1,0,0,1,37,3167,,1,3,2,,812,,2,31,,49,L4OK,,0,0,807,0,0,0,0,,,,807,0,0,,,,,0,,,0,0,350,1304,,,,Layer4 check passed,,2,3,4,,,,x.x.x.218:29090,,http,,,,,,,,0,82,730,,,1,,0,0,5577,91506,0,0,1,5,8,1,,,,0,,,,,,,,,,0,295728,295728,54863426,54863426,-,0,0,0,0,0,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,
grpc_servers,items-api3,0,0,0,0,,0,0,0,,0,,0,0,0,0,MAINT (resolution),1,1,0,0,1,3204,3204,,1,3,3,,0,,2,0,,0,,,,0,0,0,0,0,0,,,,0,0,0,,,,,-1,,,0,0,0,0,,,,,,,,,,,,,,http,,,,,,,,0,0,0,,,0,,0,0,0,0,0,0,0,0,0,1,,,,0,,,,,,,,,,0,0,0,0,0,-,0,0,0,0,0,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,
grpc_servers,items-api4,0,0,0,0,,0,0,0,,0,,0,0,0,0,MAINT (resolution),1,1,0,0,1,3204,3204,,1,3,4,,0,,2,0,,0,,,,0,0,0,0,0,0,,,,0,0,0,,,,,-1,,,0,0,0,0,,,,,,,,,,,,,,http,,,,,,,,0,0,0,,,0,,0,0,0,0,0,0,0,0,0,1,,,,0,,,,,,,,,,0,0,0,0,0,-,0,0,0,0,0,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,

Things worth highlighting:

  • show stat confirms the backend slot transition from MAINT (resolution) → UP
  • the logs show successful requests to both backend instances before and after the cancellations
  • the cancellations correlate closely with the DNS-driven server updates (a quick grep I used for this is shown below)
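A quick way I correlated them (rough sketch; haproxy.log is just a hypothetical file where I capture the container's stdout):

# pull out the DNS-driven server updates and the CD-terminated requests side by side
grep -E "by 'DNS cache'|is UP/READY|CD--" haproxy.log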

As I understand it, when a new backend server appears, HAProxy has to reinitialize its connections to the backend, and it forcefully closes requests that are in progress.
Is this behaviour expected?
Is there a way to mitigate this so that gRPC connections do not face such cancellations?
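For context, these are the knobs I was considering experimenting with. This is only a sketch and I do not know yet whether any of them is actually relevant here; hold valid / hold obsolete are resolvers directives, and http-reuse safe is already the default, listed only to make the reuse policy explicit:

resolvers k8s
  parse-resolv-conf
  resolve_retries 10
  timeout resolve 5s
  timeout retry 5s
  hold valid 10s              # keep the last valid answer a bit longer between updates
  hold obsolete 30s           # do not drop a server the moment its record disappears from the answer
  accepted_payload_size 8192

backend grpc_servers
  http-reuse safe             # only reuse idle server connections when it is safe to do so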

You are using an old development snapshot.

Please switch to a supported release. If you need bleeding-edge HAProxy 3.3, that would be the latest stable bugfix release, which is 3.3.6 today, but it would probably be better to use an LTS branch like 3.2 (and therefore 3.2.15 as of today). I don’t see you using any 3.3-exclusive features.

Then I would suggest trying to remove leastconn/redispatch and retry-on, just to see whether any of those options triggers the issue.
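For example, something like this stripped-down backend, the same server-template line as in your config with only those options removed, just to isolate the variables:

backend grpc_servers
  server-template items-api 4 grpc-items-api.team-delta.svc.cluster.local:29090 resolvers k8s init-addr libc,none check proto h2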

At that point, if switching to a supported release did not fix the issue, you have enough data to file an issue on the GitHub bug tracker.

Thanks for your help
I have changed the image to haproxy:3.2.15-alpine.
I also removed those options as you suggested. The only difference is that now I only see CD for the old pod.

As you suggested, I will post it on GitHub.