Jetty IOException: Too many open files

John Smith · Jun 12, 2011 · Viewed 14.6k times

I'm running Jetty on a website doing around 100 requests/sec, with nginx in front. I just noticed in the logs, only a few minutes after doing a deploy and starting Jetty, that for a little while it was spamming:

java.io.IOException: Too many open files
    at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
    at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:163)
    at org.mortbay.jetty.nio.SelectChannelConnector$1.acceptChannel(SelectChannelConnector.java:75)
    at org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:673)
    at org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
    at org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
    at org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
    at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)

This went on for a minute or two. I ran "lsof -u jetty" and saw hundreds of lines like:

java    15892 jetty 1020u  IPv6          298105434        0t0       TCP 192.168.1.100:http-alt->192.168.1.100:60839 (ESTABLISHED)
java    15892 jetty 1021u  IPv6          298105438        0t0       TCP 192.168.1.100:http-alt->192.168.1.100:60841 (ESTABLISHED)
java    15892 jetty 1022u  IPv6          298105441        0t0       TCP 192.168.1.100:http-alt->192.168.1.100:60842 (ESTABLISHED)
java    15892 jetty 1023u  IPv6          298105443        0t0       TCP 192.168.1.100:http-alt->192.168.1.100:60843 (ESTABLISHED)

where 192.168.1.100 is the server's internal IP.

As you can see, this brought the number of open files up to the default maximum of 1024. I could just increase the limit, but I'm wondering why this happens in the first place. The error is thrown in Jetty's NIO socket acceptor, so is it caused by a storm of connection requests?

Answer

jpredham · Jun 13, 2011

While there may be a bug in Jetty, a far more likely explanation is that your open-file ulimit is too low. The default of 1024 is typically not enough for a web server under even moderate load.
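To check whether the default limit is the culprit, you can inspect the current limit and the number of descriptors the process actually holds. A minimal sketch, assuming Jetty runs as the jetty user on a Linux box with PAM limits (the PID and the 16384 value are just examples):

```shell
# Show the per-process open-file limit for the current shell
ulimit -n

# Count the file descriptors a process currently holds
# (substitute Jetty's PID, e.g. 15892 from the lsof output, for "self")
ls /proc/self/fd | wc -l

# To raise the limit persistently, add lines like these to
# /etc/security/limits.conf and restart Jetty:
#   jetty  soft  nofile  16384
#   jetty  hard  nofile  16384
```

If the descriptor count sits near the ulimit value, the process is running out of headroom rather than leaking.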

A good way to test this is to use Apache Bench (ab) to simulate the inbound traffic you're seeing. Running the following from a remote host will issue 1000 requests over 10 concurrent connections.

ab -c 10 -n 1000 [http://]hostname[:port]/path

Now count the sockets on your web server using netstat:

netstat -a | grep -c 192.168.1.100

Hopefully you'll find that the socket count plateaus at some value not dramatically larger than 1024 (my limit is set to 16384).

It's also worth verifying that connections are being closed properly in your business logic:

netstat -a | grep -c CLOSE_WAIT

If you see this number continue to grow over the lifecycle of your application, you may be missing a few calls to Connection.close().
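The usual cause of lingering CLOSE_WAIT sockets is a code path that skips the close when an exception is thrown. A minimal sketch of the fix (class and method names are my own, not from your code); the same pattern applies to java.sql.Connection or any other closeable resource:

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

// Always release the socket in a finally block so an exception in
// the handler cannot leak the file descriptor.
public class CloseExample {
    static void handle(Socket client) {
        try {
            // ... business logic that may throw ...
        } finally {
            try {
                client.close(); // descriptor returned to the OS even on error
            } catch (IOException ignored) {
            }
        }
    }

    public static void main(String[] args) throws IOException {
        ServerSocket server = new ServerSocket(0);  // bind any free port
        Socket client = new Socket("localhost", server.getLocalPort());
        Socket accepted = server.accept();
        handle(client);
        accepted.close();
        server.close();
        System.out.println("closed=" + client.isClosed());
    }
}
```

On Java 7 and later, a try-with-resources block achieves the same guarantee with less boilerplate.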