Docker + HAProxy + WebSockets

Hello,

We have a node.js websockets service. Since node.js is single-threaded, we are going to run many containers on one server just for sockets. So we decided to turn it into a service, but we can’t get sticky connections configured. Is there an example of how we could do that? (Just to be clear, haproxy will redirect connections to different containers, not to different servers. Also, we are going to have more than one host.)
Here is our current docker-compose.yaml, but it doesn’t actually achieve sticky connections.

version: '3'

services:

  proxy:
    image: dockercloud/haproxy
    # Won't start until at least one of our app services is up and running.
    depends_on:
      - socket
    environment:
      # The load balancing strategy is controlled by the BALANCE setting:
      # - leastconn sends requests to the service with the fewest active requests.
      # - roundrobin rotates the requests around the services.
      # COOKIE is meant to enable cookie-based sticky sessions.
      - COOKIE=rewrite nocache
      # Used to identify services.
      - ADDITIONAL_SERVICES=project_dir:socket
    volumes:
      # Since our app services are running on the same port,
      # the HAProxy will use the docker.sock to find the
      # services that it should load balance.
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      # The internal port used by the HAProxy is 80,
      # but we can expose any port that we would like externally.
      # For example, if you are running something else on 80,
      # you probably don't want to expose the HAProxy on 80 as well.
      - 12001:80
    networks:
      - web
    deploy:
      # Constrain the HAProxy to run on the swarm manager node.
      placement:
        constraints: [node.role == manager]

  socket:
    environment:
      - SERVICE_PORTS=9800
    ports:
      - 9800:3000
    image: 'umityayla/socket:latest'
    networks:
      - web

networks:
  web:
    driver: overlay

Given that the WebSocket request is “network” compatible with HTTP/1.1 and then it upgrades, it should be compatible with normal HTTP routing, thus all options should apply – at least to the initial connection request.
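To illustrate: the opening WebSocket request is an ordinary HTTP/1.1 GET carrying Upgrade and Connection headers, so a proxy can inspect it, set or read cookies on it, and route it like any other HTTP request before the connection switches to the WebSocket protocol. A rough sketch of such a handshake (the path assumes socket.io’s default endpoint; the key/accept values are the example values from RFC 6455):

GET /socket.io/?transport=websocket HTTP/1.1
Host: example.com:12001
Connection: Upgrade
Upgrade: websocket
Sec-WebSocket-Version: 13
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==

HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=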

Sticky sessions here can be achieved the same way as normal HTTP sticky sessions, which I’ve recently described in another reply:
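For reference, a minimal sketch of that kind of cookie-based stickiness in a plain haproxy.cfg (the backend and server names, addresses and ports below are made up purely to illustrate the idea; they would have to match your actual containers):

defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s
    # Keep upgraded WebSocket connections open longer than ordinary HTTP.
    timeout tunnel  1h

frontend ws_front
    bind *:12001
    default_backend ws_back

backend ws_back
    balance roundrobin
    # Insert a cookie named SRV on the first (plain HTTP) request and
    # route every later request carrying it back to the same server.
    cookie SRV insert indirect nocache
    server socket1 10.0.0.11:3000 check cookie socket1
    server socket2 10.0.0.12:3000 check cookie socket2

The cookie is inserted and read on the initial HTTP request, so it also covers the upgrade request of a WebSocket connection.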


Although given this is (presumably) a newly developed application, I think it is a considerable design flaw to require sticky sessions.

I’d be really happy if you could give me an example of how I can set cookies based on containers.

Just follow the examples in the documentation pointed above.

Well, I’m not doing it through haproxy.cfg; I’m doing it through docker-compose, so I don’t know how to set a cookie based on a container.

Unfortunately you’ll have to look into the docker-compose (or related) documentation, because that tool most likely generates the haproxy.cfg. (Although I am very skeptical that they actually support this, given that having “container” stickiness goes kind of against the whole “container mantra”…)
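For what it’s worth, the dockercloud/haproxy image does seem to document a COOKIE setting on the linked application service (not on the proxy itself), so if that really is supported, the compose side might look roughly like the following untested sketch, which assumes the app listens on port 3000 inside the container (as the 9800:3000 mapping above suggests):

  socket:
    image: 'umityayla/socket:latest'
    environment:
      # Port the proxy should forward to inside the container.
      - SERVICE_PORTS=3000
      # Supposedly translated by dockercloud/haproxy into a
      # "cookie SRV insert indirect nocache" line in the generated
      # backend, giving each container its own cookie value.
      - COOKIE=SRV insert indirect nocache
    networks:
      - web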


That being said, please see my note about “stickiness” in my previous reply.

Also, now that I think about it, I think it would be quite wasteful to deploy one container per NodeJS process; the server resources would be better spent in a setup like N servers, each with M containers, each with P NodeJS processes. (As you’ve noted, NodeJS is “single-threaded”, and given how “lightweight” it can be, a common setup is to run a bunch of them per server / container.)

Therefore if you go with multiple NodeJS processes per container, implementing “stickiness” is even more unlikely…

To approach the problem from another angle: why does your application require “stickiness”? Perhaps we can find a way around that.

It’s a socket.io application; if a client connects to server X, the next handshake/packet has to be delivered to server X.

Are you referring to this part of the socket.io documentation?
https://socket.io/docs/using-multiple-nodes/

This is due to certain transports like XHR Polling or JSONP Polling relying on firing several requests during the lifetime of the “socket”. Failing to enable sticky balancing will result in the dreaded:

Are you using the socket.io fallback mechanism? Does this requirement apply also to WebSockets? (The documentation isn’t clear.)

We use socket.io and the “websocket” transport mechanism. So if we don’t route with sticky sessions, our clients can’t have healthy communication with the server(s).

Wrong.

Note what the documentation says:

This is due to certain transports like XHR Polling or JSONP Polling relying on firing several requests during the lifetime of the “socket”.

Again:
during the lifetime of the "socket".

You don’t need sticky sessions for this. You need sticky sessions when the application session needs to stick to the same server across multiple sockets.

Isn’t this about websockets? With websockets the connection between a client and the server is pinned anyway; nothing will be redistributed to different servers during the life of the socket, once it’s upgraded to an actual websocket.

Sticky sessions with cookies require HTTP; a websocket, once upgraded, isn’t HTTP.

From what I remember, socket.io provides a fallback mechanism in which, if the client doesn’t actually support WebSockets, it will fall back to other techniques such as HTTP long-polling.

However on the “server-side” (i.e. in your NodeJS application), it will still present you with the illusion that you have “websockets” (i.e. bi-directional persistent pipes), which is achieved through multiple HTTP requests.

Hence the requirement for “sticky sessions”: so that the same NodeJS process receives all the HTTP requests pertaining to a particular “virtual websocket”.

OK, still, there is no requirement for stickiness beyond a single socket IMHO; but whatever the actual, real requirement is, the socket.io docs contain examples including an HAProxy cookie-stickiness configuration (just as the HAProxy docs themselves and various blog posts do).

Where we certainly can’t help is with your docker configuration.
