Is Netty really capable of such high concurrency?

会飞的爸爸 2014-09-29 09:58:57
Reading up on this today, I ported one of our old communication modules to Netty 4. It now takes 1.5 minutes to process 10,000 requests, with the CPU completely maxed out, and I can't figure out why. I'm using long-lived connections.
My old code handled 10,000 transactions in roughly 7 seconds without pushing the CPU to 100%. That's a huge gap. Am I using Netty wrong, or is its reputation exaggerated?
14 replies
dsddsa 2021-07-08

2,000 clients, each sending 1,000,000 requests: TPS came out around 36,000 to 38,000.

suke7521 2017-03-01
I ran 4 clients, each sending 1,000,000 requests, and got roughly 8,000-10,000 TPS.
blwinner 2016-03-01
I created the worker threads with the defaults, without defining a thread pool size, and started only one boss thread.
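
For reference, a minimal sketch of sizing the boss and worker groups explicitly in Netty 4 (with no argument, NioEventLoopGroup typically defaults to about 2 × the available cores; the port number and echo handler here are only placeholders, not anything from this thread):

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class SizedEchoServer {
    public static void main(String[] args) throws InterruptedException {
        // One acceptor thread is normally enough for a single listening port.
        EventLoopGroup bossGroup = new NioEventLoopGroup(1);
        // Size the I/O worker group explicitly instead of relying on the default.
        EventLoopGroup workerGroup = new NioEventLoopGroup(8);
        try {
            ServerBootstrap b = new ServerBootstrap()
                    .group(bossGroup, workerGroup)
                    .channel(NioServerSocketChannel.class)
                    .childHandler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        protected void initChannel(SocketChannel ch) {
                            // Placeholder handler: echo every received buffer back.
                            ch.pipeline().addLast(new ChannelInboundHandlerAdapter() {
                                @Override
                                public void channelRead(ChannelHandlerContext ctx, Object msg) {
                                    ctx.writeAndFlush(msg);
                                }
                            });
                        }
                    });
            b.bind(9000).sync().channel().closeFuture().sync();
        } finally {
            workerGroup.shutdownGracefully();
            bossGroup.shutdownGracefully();
        }
    }
}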
Sundayha 2015-10-19
Quoting Eleve (post #10):
OP, congratulations: you've most likely triggered the epoll bug. Related link: https://github.com/netty/netty/issues/327
That bug was fixed long ago.
Eleve 2015-09-14
OP, congratulations: you've most likely triggered the epoll bug. Related link: https://github.com/netty/netty/issues/327
浮影1987 2014-11-20
Thread pool: one thread only uses one CPU core (hardware thread).
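
For reference, one common way to get more cores involved is to run the business handler on a separate executor group so its work is not confined to the channel's single I/O thread. A minimal sketch, with the pool size and handler body as placeholders:

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.util.concurrent.DefaultEventExecutorGroup;
import io.netty.util.concurrent.EventExecutorGroup;

public class OffloadingInitializer extends ChannelInitializer<SocketChannel> {
    // Shared pool for slow or CPU-heavy business work, sized independently of the I/O threads.
    private static final EventExecutorGroup businessGroup = new DefaultEventExecutorGroup(16);

    @Override
    protected void initChannel(SocketChannel ch) {
        // A handler added with an executor group runs its callbacks on that group,
        // leaving the NIO event loop free to keep serving other channels.
        ch.pipeline().addLast(businessGroup, "business", new ChannelInboundHandlerAdapter() {
            @Override
            public void channelRead(ChannelHandlerContext ctx, Object msg) {
                // ... blocking or CPU-heavy work would go here ...
                ctx.writeAndFlush(msg);
            }
        });
    }
}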
lwd650 2014-11-18
We load-tested it from several machines with Apache Benchmark, and even combined the TPS was under 4,000.
lwd650 2014-11-18
What puzzles me is that running the Netty 4.0 hello-world, I can never seem to saturate the server's CPU. It's a 16-core machine, yet the highest usage I see is 100%, when in theory it should be able to reach 1600%.
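
For reference: in Netty 4 each accepted channel stays bound to a single event loop thread, so a benchmark driving only a few connections can only keep a few of those threads (and cores) busy. A minimal sketch of a probe handler that logs which thread serves each channel, which makes this easy to verify:

import io.netty.channel.ChannelHandler.Sharable;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

@Sharable
public class EventLoopProbeHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelActive(ChannelHandlerContext ctx) {
        // Every read and write for this channel will run on this same event loop thread.
        System.out.println(ctx.channel() + " is served by " + Thread.currentThread().getName());
        ctx.fireChannelActive();
    }
}

With enough concurrent connections, the channels should spread across all of the worker threads and the remaining cores start to show load.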
u010643937 2014-10-22
Brother, did you ever find the cause? On my side even 5,000 isn't stable, and the CPU goes above 100%.
会飞的爸爸 2014-10-08
Quoting lsongiu8612 (post #2):
There's probably a problem in your code. Netty is just a thin wrapper over NIO, so it's unlikely to cause something like this.
Hi, I've posted my code now; could you take a look and see what's causing the problem?
会飞的爸爸 2014-10-08
Quoting mzmdh (post #1):
Is it one connection per transaction? If so, don't use long-lived connections; just close the connection when the transaction finishes. If instead one connection handles many transactions, then the problem is in your program, because otherwise it wouldn't be so much slower than your old code. I haven't used Netty, but I have used NIO. One suggestion: try running the transaction queue on a thread pool.
My program tests one transaction per connection. It's a stress test to see whether Netty improves our processing capacity; if Netty's performance really is that good, we'll move our old communication code onto the Netty framework. Our old program was also written with Java NIO, and Netty is said to perform well, so I wanted to port the code over and see.
会飞的爸爸 2014-10-08
I wrote it the same way as the demo. Netty has its own thread pool built in, so I didn't add any threads of my own for the processing. But the CPU usage is brutal: four threads are enough to peg the CPU at 100%. The test client barely uses any CPU; nearly all of it is consumed by the Netty framework. What could be going on? The code is below:

public class NettyServer {

    public static final ThreadLocal<PacketCrypt> session = new ThreadLocal<PacketCrypt>();

    // MemoryAwareThreadPoolExecutor
    // OrderedMemoryAwareThreadPoolExecutor
    // ExecutionHandler executionHandler = new ExecutionHandler(
    //         new MemoryAwareThreadPoolExecutor(16, 1048576, 1048576))
    private static final ImmediateEventExecutor iee = ImmediateEventExecutor.INSTANCE;

    public static void startServer(int port, int workerCount) throws InterruptedException {
        // use the defaults
        EventLoopGroup bossGroup = new NioEventLoopGroup(1);
        EventLoopGroup workerGroup = new NioEventLoopGroup();
        try {
            ServerBootstrap b = new ServerBootstrap();
            b.group(bossGroup, workerGroup)
             .channel(NioServerSocketChannel.class)
             .childHandler(new ChannelInitializer<SocketChannel>() {
                 @Override
                 public void initChannel(SocketChannel ch) throws Exception {
                     PacketCrypt pc = null;
                     pc = session.get();
                     if (pc == null) {
                         try {
                             pc = new PacketCrypt();
                             pc.getCryptAES().importKey(PacketCrypt.INITKEY, PacketCrypt.INITIV);
                             session.set(pc);
                         } catch (Exception e) {
                             // TODO Auto-generated catch block
                             e.printStackTrace();
                         }
                     }
                     ConnectionContext connectionContext = null;
                     connectionContext = new ConnectionContext(pc);
                     connectionContext.setChannel(ch);
                     // ch.pipeline().addLast(new WriteTimeoutHandler(60));
                     // Two OutboundHandlers would run in reverse registration order: OutboundHandler2, then OutboundHandler1
                     // ch.pipeline().addLast(new MessageEncoder());
                     // Two InboundHandlers run in registration order: InboundHandler1, then InboundHandler2
                     // ch.pipeline().addLast(new ReadTimeoutHandler(60));
                     ch.pipeline().addLast(ImmediateEventExecutor.INSTANCE, new MessageDecoder(connectionContext));
                     // ch.pipeline().addLast(new DepacketAdapter(connectionContext));
                     ch.pipeline().addLast(ImmediateEventExecutor.INSTANCE, new BusinessAdapter());
                 }
             });
            b.option(ChannelOption.SO_BACKLOG, 1024);
            // b.childOption("child.reuseAddress", true);
            b.childOption(ChannelOption.TCP_NODELAY, true);
            b.childOption(ChannelOption.SO_KEEPALIVE, true);
            // b.childOption(ChannelOption.RCVBUF_ALLOCATOR, new FixedRecvByteBufAllocator(4096));

            // Bind and start to accept incoming connections.
            ChannelFuture f = b.bind(port).sync();
            TraceLog.info(" System startup complete.");

            // Wait until the server socket is closed.
            // In this example, this does not happen, but you can do that to gracefully
            // shut down your server.
            f.channel().closeFuture().sync();
        } finally {
            workerGroup.shutdownGracefully();
            bossGroup.shutdownGracefully();
        }
    }
}

public class MessageDecoder extends ChannelInboundHandlerAdapter {

    private ConnectionContext connectionContext;

    public MessageDecoder(ConnectionContext connectionContext) {
        this.connectionContext = connectionContext;
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        // SimpleChannelInboundHandler
        ByteBuf buf = (ByteBuf) msg;
        // System.out.println(buf.readableBytes());
        // Make sure if the length field was received.
        if (buf.readableBytes() < 4) {
            // The length field was not received yet - return.
            // This method will be invoked again when more packets are
            // received and appended to the buffer.
            return;
        }

        // The length field is in the buffer.
        // Mark the current buffer position before reading the length field
        // because the whole frame might not be in the buffer yet.
        // We will reset the buffer position to the marked position if
        // there's not enough bytes in the buffer.
        buf.markReaderIndex();

        // Read the length field.
        byte[] lenbytes = new byte[4];
        buf.readBytes(lenbytes);
        int length = ((lenbytes[3] & 0xFF) << 24)
                   | ((lenbytes[2] & 0xFF) << 16)
                   | ((lenbytes[1] & 0xFF) << 8)
                   | (lenbytes[0] & 0xFF);
        // System.out.println(buf.readableBytes());
        if (length < MINLEN) {
            buf.clear();
            ctx.close();
            return;
        }

        // Make sure if there's enough bytes in the buffer.
        if (buf.readableBytes() < length + 9) {
            // The whole bytes were not received yet - return null.
            buf.resetReaderIndex();
            return;
        }

        try {
            // There's enough bytes in the buffer. Read it.
            byte[] body = new byte[length];
            byte[] sign = new byte[9];
            // Successfully decoded a frame. Return the decoded frame.
            buf.readBytes(body);
            buf.readBytes(sign);
            MessagePacket mp = new MessagePacket(connectionContext, body, sign);
            connectionContext.incPackNum();
            ctx.fireChannelRead(mp);
        } finally {
            buf.release();
        }
    }

    private final static int MINLEN =
            "{\"ver\":\"1\",\"av\":\"1\",\"cf\":10001,\"tc\":\"tc\",\"md\":\"md\"}".length();
}

public class BusinessAdapter extends ChannelInboundHandlerAdapter {

    private MessagePacket mp;
    private PacketCrypt pc = null;
    private TradeContextImp tradeContextImp = null;
    private SocketChannel sc = null;

    public BusinessAdapter() throws Exception {
        pc = new PacketCrypt();
    }

    private void doExecuteMessage() {
        sc = mp.getConnectionContext().getChannel();
        // If the connection is already closed, skip this packet.
        if (!sc.isActive()) return;
        // Parse the packet body.
        try {
            tradeContextImp = new TradeContextImp(pc, mp);
            mp.getConnectionContext().setKeepAlive(false);
            // Run the business logic.
            byte[] resp = doExecute();
            if (!sc.isActive()) return;
            ChannelHandlerContext chc = mp.getConnectionContext().getChannelHandlerContext();
            ByteBuf bb = chc.alloc().directBuffer(resp.length);
            bb.writeBytes(resp);
            // If the connection is already closed, skip this packet.
            chc.writeAndFlush(bb);
            // chc.fireChannelWritabilityChanged();
        } catch (PacketException e) {
            sc.close();
        } catch (IOException e) {
            sc.close();
        }
    }

    /**
     * Parameter type array for the trade interface invocation.
     */
    private static Class<?>[] invokeParamsClass = new Class<?>[]{TradeContext.class};

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        // System.out.println("enter business adapter time: " + new Date());
        mp = (MessagePacket) msg;
        mp.getConnectionContext().setChannelHandlerContext(ctx);
        doExecuteMessage();
    }

    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
        // ctx.flush();
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        ctx.close();
    }
}
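
For reference, the hand-rolled framing in the MessageDecoder above can usually be replaced by Netty's LengthFieldBasedFrameDecoder. A minimal sketch, assuming the wire format implied by the posted code (a 4-byte little-endian length, then length body bytes, then a 9-byte signature); downstream handlers then receive one complete body-plus-signature frame per read and no longer need to mark, reset, or release partial buffers:

import java.nio.ByteOrder;

import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.LengthFieldBasedFrameDecoder;

public class FramedInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        ch.pipeline().addLast(
                // maxFrameLength = 1 MiB (placeholder), length field at offset 0, 4 bytes wide;
                // lengthAdjustment = 9 because the 9-byte signature follows the body but is
                // not counted in the length field; strip the 4-byte length from the output.
                new LengthFieldBasedFrameDecoder(ByteOrder.LITTLE_ENDIAN,
                        1024 * 1024, 0, 4, 9, 4, true));
        // The business handler would be added after the frame decoder.
    }
}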
lsongiu8612 2014-09-30
There's probably a problem in your code. Netty is just a thin wrapper over NIO, so it's unlikely to cause something like this.
mzmdh 2014-09-30
Is it one connection per transaction? If so, don't use long-lived connections; just close the connection when the transaction finishes. If instead one connection handles many transactions, then the problem is in your program, because otherwise it wouldn't be so much slower than your old code. I haven't used Netty, but I have used NIO. One suggestion: try running the transaction queue on a thread pool.
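
For reference, a minimal framework-agnostic sketch of the pattern suggested here (hand each decoded transaction to a fixed thread pool and write the response back when it completes); all names are illustrative placeholders:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class TradeDispatcher {
    // Fixed-size pool: transactions queue up here instead of blocking the I/O thread.
    private final ExecutorService tradePool = Executors.newFixedThreadPool(16);

    /** Called by the network layer once a complete request has been read. */
    public void submit(final byte[] request, final ResponseWriter writer) {
        tradePool.submit(new Runnable() {
            @Override
            public void run() {
                byte[] response = process(request); // the actual business logic
                writer.write(response);             // hand the result back to the connection
            }
        });
    }

    private byte[] process(byte[] request) {
        // ... placeholder for the real transaction handling ...
        return request;
    }

    /** Abstraction over "write this response back on the right connection". */
    public interface ResponseWriter {
        void write(byte[] response);
    }
}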
