Is level-triggered or edge-triggered more performant?

Alex · Dec 12, 2012

I am trying to figure out which is more performant: edge-triggered or level-triggered epoll.

Mainly, by "performant" I mean:

  1. Ability to handle multiple connections without degradation.

  2. Ability to keep the utmost speed per inbound message.

I am actually more concerned about #2, but #1 is also important.

I've been running tests with a single-threaded consumer (accepting and reading multiple socket connections via epoll_wait) and multiple producers.
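For context, the consumer loop is roughly of this shape (a simplified sketch, not my actual test code; epfd setup and buffer sizes are illustrative):

    #include <sys/epoll.h>
    #include <unistd.h>

    #define MAX_EVENTS 64

    /* Single-threaded consumer: block until sockets are ready,
     * then read from each ready fd. */
    static void consume_loop(int epfd)
    {
        struct epoll_event events[MAX_EVENTS];
        char buf[4096];

        for (;;) {
            int n = epoll_wait(epfd, events, MAX_EVENTS, -1);
            for (int i = 0; i < n; i++) {
                ssize_t r = read(events[i].data.fd, buf, sizeof buf);
                /* timestamp + process r bytes here; in level-triggered
                 * mode epoll_wait reports the fd again if data remains */
                (void)r;
            }
        }
    }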

So far I've seen no difference, even up to 1000 file descriptors.

I've been laboring under the idea (delusion?) that edge-triggered should be more performant because fewer wakeups will be received. Is this a correct assumption?

One issue with my test that might be masking performance differences is that I don't dispatch my messages to threads once they are received, so the reduced number of wakeups doesn't really matter. I've been loath to run that test because I've been using __asm__ rdtsc to get my "timestamps," and I don't want to have to reconcile which core my original timestamp came from.
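(For reference, the timestamping is along the lines of the usual rdtsc idiom; a sketch, not my exact code. The counter is read per core, which is why cross-core timestamps are hard to reconcile:)

    static inline unsigned long long rdtsc(void)
    {
        unsigned int lo, hi;
        /* Read the time-stamp counter of the current core. */
        __asm__ __volatile__ ("rdtsc" : "=a"(lo), "=d"(hi));
        return ((unsigned long long)hi << 32) | lo;
    }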

What makes me even more suspicious is that level-triggered epoll performs better in some benchmarks I've seen.

Which is better? Under what circumstances? Is there no difference? Any insights would be appreciated.

EDIT:

My sockets are non-blocking.
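(Each socket is set non-blocking in the usual fcntl way; a sketch:)

    #include <fcntl.h>

    /* Make fd non-blocking so recv() returns EAGAIN/EWOULDBLOCK
     * instead of blocking when no data is available. */
    static int set_nonblocking(int fd)
    {
        int flags = fcntl(fd, F_GETFL, 0);
        if (flags == -1)
            return -1;
        return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
    }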

Answer

cmeerw · Dec 13, 2012

I wouldn't expect to see a huge performance difference between edge-triggered and level-triggered mode.

For edge-triggered you always have to drain the input buffer, so you incur one useless recv syscall (one that just returns EWOULDBLOCK). For level-triggered, on the other hand, you might end up making more epoll_wait syscalls. And as the man page points out, avoiding starvation might be slightly easier in level-triggered mode.
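To make that concrete, here is a sketch of an edge-triggered drain loop (assuming a non-blocking socket; names are illustrative). The final recv is the extra syscall that just returns EAGAIN/EWOULDBLOCK:

    #include <errno.h>
    #include <sys/socket.h>

    /* Edge-triggered: drain the socket completely, since epoll won't
     * report this fd again until *new* data arrives. */
    static void drain(int fd)
    {
        char buf[4096];
        for (;;) {
            ssize_t n = recv(fd, buf, sizeof buf, 0);
            if (n > 0) {
                /* process n bytes */
            } else if (n == 0) {
                break; /* peer closed the connection */
            } else if (errno == EAGAIN || errno == EWOULDBLOCK) {
                break; /* buffer drained: the "useless" syscall */
            } else {
                break; /* real error: handle/close fd */
            }
        }
    }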

The real difference is that when you want to use multiple threads, you'll have to use edge-triggered mode (and even then you'll still have to be careful to get synchronization right).
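For example, a common multi-threaded setup combines EPOLLET with EPOLLONESHOT so that only one thread is woken for a given fd, re-arming it after processing (a sketch; error handling omitted):

    #include <sys/epoll.h>

    /* One-shot, edge-triggered registration: exactly one thread blocked
     * in epoll_wait() receives the event, and the fd stays disabled
     * until re-armed with EPOLL_CTL_MOD after it has been handled. */
    static int arm_oneshot(int epfd, int fd, int add)
    {
        struct epoll_event ev = { 0 };
        ev.events = EPOLLIN | EPOLLET | EPOLLONESHOT;
        ev.data.fd = fd;
        return epoll_ctl(epfd, add ? EPOLL_CTL_ADD : EPOLL_CTL_MOD, fd, &ev);
    }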