tornado vs wsgi (with gunicorn)

schemacs · Feb 27, 2014 · Viewed 17.9k times

I read this about Tornado:

On the other hand, if you already have a WSGI app and want to run it on a blazing fast tornado.httpserver.HTTPServer, wrap it with tornado.wsgi.WSGIContainer. But you need to be careful. Since your original application is not prepared for an asynchronous server, and will make a lot of IO/computation, it will block other requests while generating a response (further requests will be accepted and buffered but queued for later handling).
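
For reference, the wrapping the docs describe looks roughly like this (a minimal sketch; `simple_app` is just a placeholder WSGI application):

```python
import tornado.httpserver
import tornado.ioloop
import tornado.wsgi

def simple_app(environ, start_response):
    # An ordinary synchronous WSGI application.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello from a WSGI app running under Tornado\n"]

# Wrap the WSGI app so Tornado's HTTP server can drive it.
container = tornado.wsgi.WSGIContainer(simple_app)
server = tornado.httpserver.HTTPServer(container)
server.listen(8888)
tornado.ioloop.IOLoop.current().start()
```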

And Gunicorn says:

a Python WSGI HTTP Server for UNIX. It’s a pre-fork worker model ported from Ruby’s Unicorn project.

  1. So Gunicorn will spawn worker processes to handle the requests?
  2. One concurrent request per worker?
  3. While Tornado will use epoll or kqueue to do the work in one process (no master/worker process)?
  4. So if I use a blocking call (like requests.get in the handler's get/post function), will this block all request handling, or only the current request being handled? (See the sketch after this list.)
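
To make question 4 concrete, here is a sketch of a native Tornado handler making a blocking call (the URL is just a placeholder):

```python
import requests
import tornado.ioloop
import tornado.web

class ProxyHandler(tornado.web.RequestHandler):
    def get(self):
        # requests.get blocks the thread running the IOLoop, so no other
        # request in this process is serviced until it returns.
        response = requests.get("http://example.com/slow-endpoint")
        self.write(response.text)

app = tornado.web.Application([(r"/", ProxyHandler)])
app.listen(8888)
tornado.ioloop.IOLoop.current().start()
```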

Answer

Graham Dumpleton · Feb 27, 2014

If you are running a WSGI application on top of Tornado, then there isn't a great deal of difference between Tornado and gunicorn: in both cases nothing else can happen in a given process while the WSGI application is handling a request. With gunicorn that is because each worker has only one thread handling requests; with Tornado it is because the main event loop never gets to run during that time to handle any concurrent requests.

In the case of Tornado there is actually also a hidden danger.

With gunicorn, because there is only one request thread, each worker process will only accept one web request at a time. If concurrent requests hit the server, they will be picked up by other available worker processes instead, since all the workers share the same listener socket.
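
A hypothetical gunicorn configuration file illustrating this pre-fork, one-request-per-worker model (the file and module names are assumptions):

```python
# gunicorn_conf.py -- start with: gunicorn -c gunicorn_conf.py myapp:app
bind = "0.0.0.0:8000"
workers = 4            # four pre-forked worker processes sharing one listener socket
worker_class = "sync"  # the default worker: one thread, one request at a time
```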

With Tornado though, the async nature of the layer under the WSGI application means that more than one request can be accepted at the same time by a single process. The requests will initially interleave while their headers and content are read, which Tornado pre-reads into memory before calling the WSGI application. Once the whole content of a request has been read, control is handed off to the WSGI application to handle that one request. In the meantime, any concurrent request in the same process whose headers and content haven't yet been fully read will be blocked for as long as the WSGI application takes to handle the first request.

Now if you only have the one Tornado process this isn't a big deal, as the requests would be serialised anyway, but if you are using the tornado worker class of gunicorn so as to run multiple Tornado worker processes sharing the same listener sockets, this can be quite bad. Because of the greedy way the async layer causes individual processes to accept requests, a request can end up blocked in one process when another worker process could have handled it.
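
The setup being warned about here is gunicorn's tornado worker class with several workers, roughly like this (again a sketch with assumed file and module names):

```python
# gunicorn_tornado_conf.py -- start with: gunicorn -c gunicorn_tornado_conf.py myapp:app
bind = "0.0.0.0:8000"
workers = 4
worker_class = "tornado"  # each worker runs a Tornado IOLoop wrapping the WSGI app
```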

In summary, with a single Tornado web server process you are stuck with handling only one request at a time. With gunicorn you can have multiple worker processes so that requests are handled concurrently. Use a multi-process setup with Tornado though and you risk having requests blocked.

So Tornado can be quite good for very small custom WSGI applications that don't do much and so respond very quickly, but it can suffer where you have long running requests under a blocking WSGI application. Gunicorn will therefore be better as it has a proper ability to handle concurrent requests. Being single threaded and so needing multiple worker processes, though, gunicorn will use much more memory.

So they both have tradeoffs, and in some cases you may be better off using a WSGI server that offers concurrency through multithreading as well as multiple worker processes. That lets you handle concurrent requests without blowing out memory usage by needing many worker processes. At the same time, you need to balance the number of threads per process against the number of processes so as not to suffer unduly from the effects of the GIL in a CPU-heavy application.

Choices for WSGI servers with multithreading abilities are mod_wsgi, uWSGI and waitress. For waitress though you are limited to a single worker process.
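
As a minimal sketch of the single-process, multithreaded option using waitress (the application here is just a placeholder):

```python
from waitress import serve

def app(environ, start_response):
    # Placeholder WSGI application; substitute your own.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Served by one of waitress's worker threads\n"]

# One process, a pool of threads handling requests concurrently.
serve(app, host="0.0.0.0", port=8080, threads=8)
```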

In all, which WSGI server is best really depends a lot on the specifics of your web application. There is no one WSGI server that is best at everything.