Java Netty load testing issues
I wrote a server that accepts connections and bombards them with messages (~100 bytes) using a text protocol. My implementation can push about 400K msg/sec over loopback with a 3rd-party client. I picked Netty for this task, running on SUSE 11 RealTime with JRockit RTS. But when I started developing my own client based on Netty, I saw a drastic throughput reduction (down from 400K to 1.3K msg/sec). The client code is pretty straightforward. Could you please give advice or show examples of how to write a much more effective client? I actually care more about latency, but I started with throughput tests, and I don't think it is normal to get 1.5K msg/sec on loopback. P.S. The client's only purpose is to receive messages from the server and very seldom send heartbeats.
Client.java
import java.net.InetSocketAddress;
import java.util.concurrent.Executors;

import org.jboss.netty.bootstrap.ClientBootstrap;
import org.jboss.netty.channel.Channel;
import org.jboss.netty.channel.ChannelFactory;
import org.jboss.netty.channel.ChannelFuture;
import org.jboss.netty.channel.socket.nio.NioClientSocketChannelFactory;
import org.jboss.netty.handler.execution.ExecutionHandler;
import org.jboss.netty.handler.execution.OrderedMemoryAwareThreadPoolExecutor;

public class Client {

    private static ClientBootstrap bootstrap;
    private static Channel connector;

    public static boolean start() {
        ChannelFactory factory =
                new NioClientSocketChannelFactory(
                        Executors.newCachedThreadPool(),
                        Executors.newCachedThreadPool());
        ExecutionHandler executionHandler = new ExecutionHandler(
                new OrderedMemoryAwareThreadPoolExecutor(16, 1048576, 1048576));

        bootstrap = new ClientBootstrap(factory);
        // The execution handler must be passed to the pipeline factory,
        // otherwise it never ends up in the pipeline.
        bootstrap.setPipelineFactory(new ClientPipelineFactory(executionHandler));
        bootstrap.setOption("tcpNoDelay", true);
        bootstrap.setOption("keepAlive", true);
        bootstrap.setOption("receiveBufferSize", 1048576);

        ChannelFuture future =
                bootstrap.connect(new InetSocketAddress("localhost", 9013));
        if (!future.awaitUninterruptibly().isSuccess()) {
            System.out.println("--- CLIENT - Failed to connect to server at "
                    + "localhost:9013.");
            bootstrap.releaseExternalResources();
            return false;
        }

        connector = future.getChannel();
        return connector.isConnected();
    }

    public static void main(String[] args) {
        if (start())
            System.out.println("Client connected to the server");
    }
}
ClientPipelineFactory.java
import static org.jboss.netty.channel.Channels.pipeline;

import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.channel.ChannelPipelineFactory;
import org.jboss.netty.handler.codec.frame.DelimiterBasedFrameDecoder;
import org.jboss.netty.handler.codec.frame.Delimiters;
import org.jboss.netty.handler.execution.ExecutionHandler;

public class ClientPipelineFactory implements ChannelPipelineFactory {

    private final ExecutionHandler executionHandler;

    public ClientPipelineFactory(ExecutionHandler executionHandler) {
        this.executionHandler = executionHandler;
    }

    @Override
    public ChannelPipeline getPipeline() throws Exception {
        ChannelPipeline pipeline = pipeline();
        // Split the byte stream into line-delimited frames (max 1024 bytes).
        pipeline.addLast("framer", new DelimiterBasedFrameDecoder(
                1024, Delimiters.lineDelimiter()));
        // Hand decoded frames off to the executor's thread pool.
        pipeline.addLast("executor", executionHandler);
        pipeline.addLast("handler", new MessageHandler());
        return pipeline;
    }
}
MessageHandler.java
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.ExceptionEvent;
import org.jboss.netty.channel.MessageEvent;
import org.jboss.netty.channel.SimpleChannelHandler;

public class MessageHandler extends SimpleChannelHandler {

    private static final long NANOS_IN_SEC = 1000000000L;

    long maxMsg = 10000;
    long curMsg = 0;
    long startTime = System.nanoTime();

    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        // Count incoming messages and report throughput every maxMsg messages.
        curMsg++;
        if (curMsg == maxMsg) {
            System.out.println("Throughput (msg/sec) : "
                    + maxMsg * NANOS_IN_SEC / (System.nanoTime() - startTime));
            curMsg = 0;
            startTime = System.nanoTime();
        }
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
        e.getCause().printStackTrace();
        e.getChannel().close();
    }
}
Update: On the server side there is a periodic thread that writes to the accepted client channel, and the channel soon becomes unwritable.
Update 2: I added an OrderedMemoryAwareThreadPoolExecutor to the pipeline, but the throughput is still very low (about 4K msg/sec).
Fixed: I put the execution handler in front of the whole pipeline stack and it worked out!
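For reference, a minimal sketch of how the reordered getPipeline() might look (same handlers as in ClientPipelineFactory above, only the order changes): with the executor first, the NIO worker thread only reads bytes and hands them off, while frame decoding and message handling run on the executor's thread pool.

    @Override
    public ChannelPipeline getPipeline() throws Exception {
        ChannelPipeline pipeline = pipeline();
        // Executor first: frees the NIO worker thread as early as possible.
        pipeline.addLast("executor", executionHandler);
        pipeline.addLast("framer", new DelimiterBasedFrameDecoder(
                1024, Delimiters.lineDelimiter()));
        pipeline.addLast("handler", new MessageHandler());
        return pipeline;
    }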
If the server is sending messages with a fixed size (~100 bytes), you can set a ReceiveBufferSizePredictor on the client bootstrap; this will optimize the reads:
bootstrap.setOption("receiveBufferSizePredictorFactory",
        new AdaptiveReceiveBufferSizePredictorFactory(
                MIN_PACKET_SIZE, INITIAL_PACKET_SIZE, MAX_PACKET_SIZE));
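If the message size really is constant, Netty 3's FixedReceiveBufferSizePredictorFactory may be a simpler fit; the 1024-byte value below is only an illustrative assumption:

    // Assumed buffer size; pick something comfortably above the message size.
    bootstrap.setOption("receiveBufferSizePredictorFactory",
            new FixedReceiveBufferSizePredictorFactory(1024));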
According to the code segment you have posted, the client's NIO worker thread is doing everything in the pipeline, so it will be busy with decoding and with executing the message handlers. You have to add an execution handler.
You have said that the channel becomes unwritable on the server side, so you may have to adjust the watermark sizes in the server bootstrap. You can also periodically monitor the write buffer size (write queue size) and verify that the channel is becoming unwritable because messages cannot be written to the network fast enough. That can be done with a util class like the one below.
// Lives in Netty's own package on purpose: writeBufferSize is a
// package-private field of NioSocketChannel, so this class must sit
// in the same package to read it.
package org.jboss.netty.channel.socket.nio;

import org.jboss.netty.channel.Channel;

public final class NioChannelUtil {

    public static long getWriteTaskQueueCount(Channel channel) {
        NioSocketChannel nioChannel = (NioSocketChannel) channel;
        return nioChannel.writeBufferSize.get();
    }
}
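For the watermark adjustment itself, a minimal sketch against the server bootstrap; serverBootstrap and channel are assumed names, and the 32 KiB / 64 KiB values are placeholders to tune for your load (in Netty 3, "child."-prefixed options apply to each accepted channel):

    // Placeholder watermarks; tune for your message rate and consumer speed.
    serverBootstrap.setOption("child.writeBufferLowWaterMark", 32 * 1024);
    serverBootstrap.setOption("child.writeBufferHighWaterMark", 64 * 1024);

    // Periodically log the pending write queue size with the util above:
    System.out.println("Pending write bytes: "
            + NioChannelUtil.getWriteTaskQueueCount(channel));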