Server Scalability - HTML 5 websockets vs Comet

P.K · Feb 2, 2012 · Viewed 26.2k times

Many Comet implementations, like Caplin, provide scalable server solutions.

The following is one of the statistics from the Caplin site:

A single instance of Caplin Liberator can support up to 100,000 clients each receiving 1 message per second with an average latency of less than 7ms.

How does this compare to HTML5 WebSockets on any web server? Can anyone point me to any HTML5 WebSocket statistics?

Answer

Martin Tyler · Feb 3, 2012

Disclosure - I work for Caplin.

There is a bit of misinformation on this page, so I'd like to try and make it clearer.

I think we could split the methods we are talking about into three camps:

  1. Comet HTTP polling - including long polling
  2. Comet HTTP streaming - server to client messages use a single persistent socket with no HTTP header overhead after initial setup
  3. Comet WebSocket - single bidirectional socket

I see them all as Comet, since Comet is just a paradigm. Since WebSocket came along, some people want to treat it as if it is different from, or replaces, Comet - but it is just another technique, and unless you are happy to support only the latest browsers, you can't rely on WebSocket alone.
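To make that concrete, here is a minimal client-side sketch of the fallback idea. The endpoint URL and the `connectComet` function are placeholders for illustration, not part of any particular library:

```typescript
// Hypothetical fallback function, e.g. long polling or HTTP streaming over plain HTTP.
declare function connectComet(url: string): void;

function connect(): void {
  if (typeof WebSocket !== "undefined") {
    // Newer browsers: a single bidirectional socket.
    const ws = new WebSocket("wss://example.com/stream");
    ws.onmessage = (event) => console.log("update:", event.data);
  } else {
    // Older browsers: fall back to a Comet transport.
    connectComet("https://example.com/stream");
  }
}

connect();
```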

As far as performance is concerned, most benchmarks concentrate on server-to-client messages - numbers of users, numbers of messages per second, and the latency of those messages. For this scenario there is no fundamental difference between HTTP Streaming and WebSocket - both write messages down an open socket with little or no header overhead.
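A rough server-side sketch of why the two look similar, assuming Node.js and the third-party `ws` package (the ports and payload are arbitrary):

```typescript
import * as http from "http";
import { WebSocketServer } from "ws"; // assumes the third-party "ws" package

// HTTP streaming: one long-lived chunked response, messages written as they occur.
http.createServer((req, res) => {
  res.writeHead(200, { "Content-Type": "text/plain" });
  const timer = setInterval(() => res.write(JSON.stringify({ price: 42 }) + "\n"), 1000);
  req.on("close", () => clearInterval(timer));
}).listen(8080);

// WebSocket: one bidirectional socket, messages sent as frames.
const wss = new WebSocketServer({ port: 8081 });
wss.on("connection", (ws) => {
  const timer = setInterval(() => ws.send(JSON.stringify({ price: 42 })), 1000);
  ws.on("close", () => clearInterval(timer));
});
```

In both cases the per-message work on the server is essentially "write bytes to an already-open socket", which is why server-to-client benchmarks for the two tend to look alike.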

Long polling can give good latency if the frequency of messages is low. However, if you have two messages (server to client) in quick succession then the second one will not arrive at the client until a new request is made after the first message is received.
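A minimal long-polling loop (browser side) shows where that extra latency comes from; the `/poll` endpoint is hypothetical and assumed to hold the request open until a message is available:

```typescript
async function longPoll(): Promise<void> {
  while (true) {
    const res = await fetch("/poll");   // one full HTTP request per message
    const message = await res.text();
    console.log("update:", message);
    // A second message that arrived at the server while the previous response was
    // in flight has to wait here, until the next fetch() reaches the server.
  }
}

longPoll();
```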

I think someone touched on HTTP keep-alive. This can obviously improve long polling - you still have the overhead of the round trip and headers, but not the cost of creating a new socket every time.
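Browsers do this automatically; as a sketch of the same idea from a Node.js client, an `Agent` with keep-alive reuses one TCP connection for consecutive polls (the host and path are placeholders):

```typescript
import * as http from "http";

// Reuse the same TCP connection for consecutive poll requests: the round trip and
// headers remain, but not a new socket (and TCP handshake) on every poll.
const agent = new http.Agent({ keepAlive: true, maxSockets: 1 });

function poll(): void {
  http.get({ host: "example.com", path: "/poll", agent }, (res) => {
    res.on("data", (chunk) => console.log("update:", chunk.toString()));
    res.on("end", poll); // issue the next long poll over the same kept-alive socket
  });
}

poll();
```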

Where WebSocket should improve upon HTTP Streaming is in scenarios where there are more client-to-server messages. Relating these scenarios to the real world creates slightly more arbitrary setups, compared with the simple 'send lots of messages to lots of clients' scenario that everyone can understand. For example, in a trading application it is easy to create a scenario that includes users executing trades (i.e. client-to-server messages), but the results are a bit less meaningful than the basic server-to-client scenarios. Traders are not trying to do 100 trades/sec, so you end up with results like '10,000 users receiving 100 messages/sec while also sending a client message once every 5 minutes'. The more interesting part of the client-to-server message is the latency, since the number of messages required is usually insignificant compared to the server-to-client messages.
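If you want to measure that client-to-server latency yourself, a rough browser-side sketch looks like the following; it assumes a hypothetical endpoint that simply echoes back whatever it receives:

```typescript
// Rough round-trip measurement over a WebSocket; the server is assumed to echo.
const ws = new WebSocket("wss://example.com/echo");

ws.onopen = () => {
  const sentAt = performance.now();
  ws.send("ping");
  ws.onmessage = () => {
    console.log(`round trip: ${(performance.now() - sentAt).toFixed(1)} ms`);
  };
};
```

Half of that round trip is a reasonable first approximation of the client-to-server leg.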

Another point someone made above was about 64k clients. You do not need to do anything clever to support more than 64k sockets on a server - other than configuring things like the number of file descriptors. If you were trying to make 64k connections from a single client machine, that is totally different, as each outgoing connection needs its own local port number. On the server end it is fine - that is the listening end - and you can go well above 64k sockets.
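A plain TCP server, sketched here with Node's `net` module, is enough to see why: every accepted connection shares the one listening port and only consumes a file descriptor; it is the client machine that needs a distinct local port per connection.

```typescript
import * as net from "net";

// Every accepted connection shares the single listening port (8080 here); each is
// distinguished by the (client IP, client port, server IP, server port) tuple, so
// the server is not capped at 64k connections. The practical limits are file
// descriptors (ulimit -n and similar OS settings) and memory/CPU per connection.
const server = net.createServer((socket) => {
  socket.on("data", (data) => socket.write(data)); // trivial echo, just to hold the socket open
});

server.listen(8080);

// A single *client* machine opening connections is the different case: each outgoing
// connection needs its own local ephemeral port, so roughly 64k is a real ceiling
// per (client IP, server IP, server port) combination.
```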