Why is the Apache Event MPM Performing Poorly?

Jaimie Sirovich · Jan 9, 2015 · Viewed 10.7k times

The Event MPM is not exactly the same design as Nginx, but it was clearly designed to make keep-alive connections more tenable and to send static files faster. My understanding is that the Event MPM is a bit of a misnomer because:

  1. Although the connection is passed to kqueue/epoll,
  2. certain very important modules such as mod_gzip and mod_ssl will block/consume a thread until the response is done,
  3. and that is an issue for large files, but probably not for PHP-generated HTML documents, etc.

Unfortunately, Apache keeps losing market share, and most benchmarks are damning for the event MPM. Are the benchmarks flawed, or does the event MPM really do so poorly against Nginx? Even with these limitations, under normal (non-malicious) traffic and with smaller files, it should be somewhat competitive with Nginx. For example, it should be competitive serving PHP-generated documents via php-fpm on slow connections, because the document will be buffered (even if it is being SSL-encrypted and gzip-compressed) and sent asynchronously. On such a workload, SSL and non-SSL connections, with or without compression, should not behave meaningfully differently than they would in Nginx.

So why does it not shine in various benchmarks? What's wrong with it? Or what's wrong with the benchmarks? Is there a major site using it that could serve as an appeal to authority that it can perform?

Answer

Ben Grimm · Jan 14, 2015

It is slower than nginx because Apache with the event MPM is (very) roughly equivalent to an event-driven HTTP proxy (nginx, varnish, haproxy) in front of Apache with the worker MPM. Event is based on worker, but rather than handing each new connection to a thread for its whole lifetime, the event MPM's worker threads hand idle keep-alive connections back to a dedicated listener thread, which pushes them onto an event queue (epoll/kqueue) or closes them if keep-alive is off or has expired.

The real benefit of event over worker is the resource usage. If you need to sustain 1,000 concurrent connections, the worker MPM needs 1,000 threads, while the event MPM may get by with 100 active threads and 900 idle connections managed in the event queue. The event MPM will use a fraction of the resources of the worker MPM in that hypothetical, but the downside is still there: each of those requests is handled by a separate thread that must be scheduled by the kernel and therefore incurs the cost of context switching.
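To make that hypothetical concrete, here is a rough event MPM sizing sketch for Apache 2.4; the directives are real, but the numbers are illustrative assumptions rather than a tuning recommendation. The idea is that 100 worker threads can carry roughly 1,000 concurrent connections, because idle keep-alive connections are parked in the event queue instead of occupying a thread:

    # Hypothetical httpd.conf sizing for the event MPM (Apache 2.4); numbers are illustrative only.
    <IfModule mpm_event_module>
        ServerLimit                1
        StartServers               1
        ThreadLimit              100
        ThreadsPerChild          100   # threads actively processing requests
        MaxRequestWorkers        100   # cap on requests being processed at once
        AsyncRequestWorkerFactor   9   # absolute connection ceiling is roughly
                                       # (AsyncRequestWorkerFactor + 1) * MaxRequestWorkers = 1,000,
                                       # with the extra ~900 idle keep-alive connections held
                                       # in the epoll/kqueue event queue rather than on threads
    </IfModule>

In the same scenario the worker MPM would need MaxRequestWorkers (and therefore threads) equal to the full 1,000 connections, idle or not.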

On the other hand, nginx uses the event model itself as its scheduler. Nginx simply processes as much work on each connection as it can before moving on to the next one, so no extra context switching is required.
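For comparison, a minimal nginx fragment (again with hypothetical numbers) shows the same idea taken further: a single worker process runs one epoll-driven event loop and can watch thousands of sockets without a thread per connection, and without context switching between requests:

    # Hypothetical nginx.conf fragment; numbers are illustrative only.
    worker_processes  1;            # one event loop (commonly one per CPU core in practice)

    events {
        use epoll;                  # kernel event notification on Linux
        worker_connections  10240;  # a single process can hold thousands of connections
    }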

The one use case where the event MPM really shines is to handle a setup where you have a heavy application running in Apache, and to conserve the resources of threads that are idle during keep-alive, you would deploy a proxy (such as nginx) in front of apache. If your front end served no other purpose (e.g. static content, proxying to other servers, etc...), the event MPM handles that use case beautifully and eliminates the need for a proxy.