Latency in Netty due to passing requests from boss thread to worker thread?

WorM · Dec 28, 2011 · Viewed 8.3k times

I have some questions about Netty (server side) for TCP/IP applications.

I am wondering whether Netty can introduce latency (due to missing configuration, etc.) while passing a request from the boss thread to a worker thread?

I am using:

new OrderedMemoryAwareThreadPoolExecutor(350, 0, 0, 1, TimeUnit.SECONDS);

I set the max thread count to 350 because I am not sure about the optimal number. I log the number of simultaneously working threads every minute, and the average seems too low (it barely exceeds 10), so I will decrease this number since it is not needed.
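
A minimal sketch of how such a count can be sampled, assuming the OrderedMemoryAwareThreadPoolExecutor above is kept in a field named executor (getActiveCount() and getPoolSize() are inherited from java.util.concurrent.ThreadPoolExecutor):

    // Sketch: log the executor's thread usage once a minute.
    // Assumes "executor" is the OrderedMemoryAwareThreadPoolExecutor shown above.
    final ScheduledExecutorService monitor = Executors.newSingleThreadScheduledExecutor();
    monitor.scheduleAtFixedRate(new Runnable() {
        public void run() {
            System.out.println("active threads: " + executor.getActiveCount()
                    + ", pool size: " + executor.getPoolSize());
        }
    }, 1, 1, TimeUnit.MINUTES);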

Are there any other parameters or important points I should be aware of to get the best performance?

bootstrap.setOption("tcpNoDelay", true); - Is there any disadvantage of setting this parameter? Considering that delivery time is very important.

Thread Pool Executor:

OrderedMemoryAwareThreadPoolExecutor executor = new OrderedMemoryAwareThreadPoolExecutor(48, 0, 0, 1, TimeUnit.SECONDS);

Here is my pipeline factory:

    ChannelPipeline pipeline = pipeline();
    pipeline.addLast("frameDecoder", new DelimiterBasedFrameDecoder(GProperties.getIntProperty("decoder.maxFrameLength", 8000 * 1024), Delimiters.nulDelimiter()));
    pipeline.addLast("stringDecoder", new StringDecoder(CharsetUtil.UTF_8));
    pipeline.addLast("frameEncoder", new NullTermMessageEncoder());
    pipeline.addLast("stringEncoder", new JSONEncoder(CharsetUtil.UTF_8));
    pipeline.addLast("timeout", new IdleStateHandler(idleTimer, 42, 0, 0));
    pipeline.addLast("executor", new ExecutionHandler(executor));
    pipeline.addLast("handler", new GServerHandler());

and the ServerBootstrap:

    gServerBootstrap = new ServerBootstrap(new NioServerSocketChannelFactory(Executors.newCachedThreadPool(), Executors.newCachedThreadPool()));
    gServerBootstrap.setPipelineFactory(new GServerPipelineFactory());
    gServerBootstrap.setOption("backlog", 8129);
    gServerBootstrap.setOption("child.tcpNoDelay", true);
    gServerBootstrap.bind(new InetSocketAddress(GProperties.getIntProperty("server.port", 7679)));

What can you suggest for this configuration?

Answer

Jestan Nirojan · Dec 28, 2011

Netty boss threads are only used to set up connections; worker threads are used to run NioWorker (non-blocking read/write) or OioWorker (blocking read/write).

If you have an execution handler, the worker thread will submit the message event to the OrderedMemoryAwareThreadPoolExecutor.
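
For example, ExecutionHandler is sharable, so a single instance wrapping that executor can be reused in every pipeline instead of constructing a new handler per channel; a minimal sketch based on the code in the question:

    // Sketch: wrap the executor from the question in one ExecutionHandler and
    // reuse that instance in every ChannelPipeline (ExecutionHandler is sharable),
    // so all channels share a single staged thread pool.
    ExecutionHandler executionHandler = new ExecutionHandler(executor);

    // inside getPipeline() of the pipeline factory:
    pipeline.addLast("executor", executionHandler); // same instance for every channel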

1) Increasing the Netty I/O worker thread count beyond the number of processors * 2 won't help. If you are using staged executors, having more than one staged execution handler for non-I/O tasks may increase latency.
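
The worker count can also be pinned explicitly when building the channel factory; a sketch based on the bootstrap from the question, using the NioServerSocketChannelFactory constructor that takes a worker count:

    // Sketch: cap the I/O worker threads at processors * 2 explicitly
    // (this is also Netty 3's default when no workerCount is given).
    int ioWorkerCount = Runtime.getRuntime().availableProcessors() * 2;
    gServerBootstrap = new ServerBootstrap(
            new NioServerSocketChannelFactory(
                    Executors.newCachedThreadPool(),  // boss threads: accept connections
                    Executors.newCachedThreadPool(),  // worker threads: non-blocking read/write
                    ioWorkerCount));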

Note: It's better to set your own ObjectSizeEstimator implementation in the OrderedMemoryAwareThreadPoolExecutor constructor, because many CPU cycles are spent on calculating used channel memory.
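
A minimal sketch, assuming the OrderedMemoryAwareThreadPoolExecutor constructor overload that also takes an ObjectSizeEstimator and a ThreadFactory; the 1024-byte constant is just a placeholder for your typical message size:

    // Sketch: a trivial constant-size estimator, so the executor skips the
    // per-message size calculation done by the default estimator.
    ObjectSizeEstimator fixedSizeEstimator = new ObjectSizeEstimator() {
        public int estimateSize(Object o) {
            return 1024; // assumed average message size in bytes
        }
    };

    OrderedMemoryAwareThreadPoolExecutor executor =
            new OrderedMemoryAwareThreadPoolExecutor(
                    48, 0, 0, 1, TimeUnit.SECONDS,
                    fixedSizeEstimator,
                    Executors.defaultThreadFactory());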

2) There are some other Netty parameters you can try:

    // setting buffer sizes can improve I/O
    bootstrap.setOption("child.sendBufferSize", 1048576);
    bootstrap.setOption("child.receiveBufferSize", 1048576);

    // better to have a receive buffer size predictor
    bootstrap.setOption("receiveBufferSizePredictorFactory", new AdaptiveReceiveBufferSizePredictorFactory(MIN_PACKET_SIZE, INITIAL_PACKET_SIZE, MAX_PACKET_SIZE));

    // if the server is sending 1000 messages per second, optimal write buffer water marks
    // will prevent unnecessary throttling; check the NioSocketChannelConfig doc
    bootstrap.setOption("writeBufferLowWaterMark", 32 * 1024);
    bootstrap.setOption("writeBufferHighWaterMark", 64 * 1024);

3) It should be bootstrap.setOption("child.tcpNoDelay", true) for a server bootstrap.
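
Applied to the ServerBootstrap from the question, that looks roughly like the following (a sketch; options with the "child." prefix are applied to each accepted socket, options without it to the listening socket; the buffer sizes and water marks are just the example values from point 2, and AdaptiveReceiveBufferSizePredictorFactory is shown with its default sizes):

    // Sketch: "child." options apply to each accepted connection,
    // un-prefixed options apply to the listening (server) socket.
    gServerBootstrap.setOption("backlog", 8129);                    // listening socket
    gServerBootstrap.setOption("child.tcpNoDelay", true);           // accepted sockets
    gServerBootstrap.setOption("child.sendBufferSize", 1048576);
    gServerBootstrap.setOption("child.receiveBufferSize", 1048576);
    gServerBootstrap.setOption("child.writeBufferLowWaterMark", 32 * 1024);
    gServerBootstrap.setOption("child.writeBufferHighWaterMark", 64 * 1024);
    gServerBootstrap.setOption("child.receiveBufferSizePredictorFactory",
            new AdaptiveReceiveBufferSizePredictorFactory());       // default min/initial/max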

There is an experimental hidden parameter:

Netty's NioWorker uses SelectorUtil.select to wait for selector events; the wait time is hard-coded in SelectorUtil:

selector.select(500);

Setting a smaller value gave better performance in the Netty SCTP transport implementation; I am not sure about TCP.