Event Loop vs Multithread blocking IO

Asked Jun 5, 2009 · Viewed 7.1k times

I was reading a comment about server architecture.

http://news.ycombinator.com/item?id=520077

In this comment, the person says 3 things:

  1. The event loop, time and again, has been shown to truly shine for a high number of low activity connections.
  2. In comparison, a blocking IO model with threads or processes has been shown, time and again, to cut down latency on a per-request basis compared to an event loop.
  3. On a lightly loaded system the difference is indistinguishable. Under load, most event loops choose to slow down, most blocking models choose to shed load.

Are any of these true?

There is also another article titled "Why Events Are A Bad Idea (for High-concurrency Servers)":

http://www.usenix.org/events/hotos03/tech/vonbehren.html

Answer

sivabudh · Nov 16, 2009

Typically, if the application is expected to handle millions of connections, you can combine the multi-threaded paradigm with the event-based one.

  1. First, spawn N threads, where N == the number of cores/processors on your machine. Each thread has a list of asynchronous sockets that it is responsible for.
  2. Then, for each new connection from the acceptor, "load-balance" the new socket onto the thread with the fewest sockets.
  3. Within each thread, use an event-based model for all of its sockets, so that each thread can handle multiple sockets "simultaneously" (sketched below).

With this approach,

  1. You never spawn a million threads. You just have as many as your system can handle.
  2. You utilize the event-based model across multiple cores instead of a single core.
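
To make the shape of this concrete, here is a minimal sketch in Python using only the standard socket, selectors, and threading modules. The port, the echo handler, and the per-worker socket counter are illustrative assumptions, not something from the original answer: an acceptor thread hands each new connection to the worker that currently owns the fewest sockets, and every worker runs its own event loop over the sockets it owns.

```python
# A minimal sketch of the "N event-loop threads + load-balancing acceptor"
# layout described above. The port, the echo handler, and the Worker class
# are illustrative assumptions, not part of the original answer.
import os
import selectors
import socket
import threading
import time

HOST, PORT = "0.0.0.0", 8000                 # hypothetical listen address
N_WORKERS = os.cpu_count() or 2              # step 1: one thread per core

class Worker(threading.Thread):
    """One event loop; owns a subset of the server's sockets."""

    def __init__(self):
        super().__init__(daemon=True)
        self.selector = selectors.DefaultSelector()
        self.pending = []                    # sockets handed over by the acceptor
        self.lock = threading.Lock()
        self.count = 0                       # how many sockets this worker owns

    def add(self, conn):
        # Called from the acceptor thread: queue the socket for registration.
        with self.lock:
            self.pending.append(conn)
            self.count += 1

    def run(self):
        # Step 3: multiplex every socket owned by this thread in one loop.
        while True:
            with self.lock:                  # pick up newly assigned sockets
                for conn in self.pending:
                    conn.setblocking(False)
                    self.selector.register(conn, selectors.EVENT_READ)
                self.pending.clear()
            if not self.selector.get_map():  # nothing to watch yet
                time.sleep(0.05)
                continue
            for key, _ in self.selector.select(timeout=0.1):
                conn = key.fileobj
                data = conn.recv(4096)
                if data:
                    conn.sendall(data)       # trivial echo "handler"
                else:                        # peer closed the connection
                    self.selector.unregister(conn)
                    conn.close()
                    with self.lock:
                        self.count -= 1

def main():
    workers = [Worker() for _ in range(N_WORKERS)]
    for w in workers:
        w.start()

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as acceptor:
        acceptor.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        acceptor.bind((HOST, PORT))
        acceptor.listen()
        while True:
            conn, _ = acceptor.accept()
            # Step 2: hand the new socket to the worker with the fewest sockets.
            min(workers, key=lambda w: w.count).add(conn)

if __name__ == "__main__":
    main()
```

The key point is that the thread count stays fixed at N while each thread multiplexes many connections; servers built along these lines (roughly the one-event-loop-per-core layout of Netty's worker threads or nginx's worker processes) use the same overall shape, just with a real protocol handler in place of the echo loop.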