
Basic Solutions to TCP Packet Sticking and Splitting


In the previous section we looked at how TCP packet sticking and splitting show up when using Netty. Netty provides a fairly rich set of solutions to this problem.

Netty ships with several commonly used decoders that solve it for us. Fundamentally, the fix for both sticking and splitting is the same: the sender gives the remote end a marker that says where each message ends. Once the remote end has received the data, it uses that agreed marker to split or merge the bytes back into the messages we actually want, and so recovers them correctly.
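To make that idea concrete before diving into Netty, here is a minimal, Netty-independent sketch (the class and method names are our own invention, not any Netty API): the receiver buffers whatever bytes happen to arrive in each TCP read and cuts a complete message off the front every time the agreed delimiter appears, so it does not matter how TCP fragmented or merged the writes.

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class DelimiterFraming {

    // Accumulates bytes that have arrived but do not yet form a complete message.
    private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();

    /** Feed one chunk as it arrives from the socket; returns any now-complete frames. */
    public List<String> onBytes(byte[] chunk, String delimiter) {
        buffer.write(chunk, 0, chunk.length);
        List<String> frames = new ArrayList<>();
        String data = new String(buffer.toByteArray(), StandardCharsets.UTF_8);
        int idx;
        while ((idx = data.indexOf(delimiter)) >= 0) {
            frames.add(data.substring(0, idx));              // one complete message
            data = data.substring(idx + delimiter.length()); // keep the remainder buffered
        }
        byte[] rest = data.getBytes(StandardCharsets.UTF_8);
        buffer.reset();
        buffer.write(rest, 0, rest.length);
        return frames;
    }

    public static void main(String[] args) {
        DelimiterFraming d = new DelimiterFraming();
        List<String> out = new ArrayList<>();
        // Two logical messages arrive mangled into three TCP chunks:
        out.addAll(d.onBytes("hel".getBytes(StandardCharsets.UTF_8), "\n"));
        out.addAll(d.onBytes("lo\nwor".getBytes(StandardCharsets.UTF_8), "\n"));
        out.addAll(d.onBytes("ld\n".getBytes(StandardCharsets.UTF_8), "\n"));
        System.out.println(out); // prints [hello, world]
    }
}
```

Netty's line- and delimiter-based decoders discussed below do essentially this buffering and cutting for us, inside the channel pipeline.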

For example, in the earlier demo we can append an end-of-message marker to what we send. If the two ends agree that data is split on line endings, the sender just has to append a line terminator to each message body. Part of the code:

Modify the construction of req in BaseClientHandler:

    public BaseClientHandler() {
        // req = ("BazingaLyncc is learner").getBytes();
        req = ("In this chapter you general, we recommend Java Concurrency in Practice by Brian Goetz. His book w"
                + "ill give We’ve reached an exciting point—in the next chapter we’ll discuss bootstrapping, the process "
                + "of configuring and connecting all of Netty’s components to bring your learned about threading models in ge"
                + "neral and Netty’s threading model in particular, whose performance and consistency advantages we discuss"
                + "ed in detail In this chapter you general, we recommend Java Concurrency in Practice by Brian Goetz. Hi"
                + "s book will give We’ve reached an exciting point—in the next chapter we’ll discuss bootstrapping, the"
                + " process of configuring and connecting all of Netty’s components to bring your learned about threading "
                + "models in general and Netty’s threading model in particular, whose performance and consistency advantag"
                + "es we discussed in detailIn this chapter you general, we recommend Java Concurrency in Practice by Bri"
                + "an Goetz. His book will give We’ve reached an exciting point—in the next chapter;the counter is: 1 2222"
                + "sdsa ddasd asdsadas dsadasdas" + System.getProperty("line.separator")).getBytes();
    }

We appended System.getProperty("line.separator") to the end of our very long req, which effectively stamps a marker on it.

The marker alone is not enough: the server in this example still has no idea that messages end at line boundaries. We need to extend the server's handler chain with one extra inbound handler that splits the incoming data on line endings. Fortunately, Netty has already written this tedious code for us perfectly: it provides the class LineBasedFrameDecoder. As the name suggests, the "decoder" in the class name tells us it is a decoder. Let's look at its declaration in detail:

/**
 * A decoder that splits the received {@link ByteBuf}s on line endings.
 * <p>
 * Both {@code "\n"} and {@code "\r\n"} are handled.
 * For a more general delimiter-based decoder, see {@link DelimiterBasedFrameDecoder}.
 */
public class LineBasedFrameDecoder extends ByteToMessageDecoder {

    /** Maximum length of a frame we're willing to decode. */
    private final int maxLength;

    /** Whether or not to throw an exception as soon as we exceed maxLength. */
    private final boolean failFast;
    private final boolean stripDelimiter;

    /** True if we're discarding input because we're already over maxLength. */
    private boolean discarding;
    private int discardedBytes;

It extends ByteToMessageDecoder, i.e. it turns raw bytes into messages, so it should be placed first in the inbound handler chain. Let's modify the server's startup code accordingly:

package com.lyncc.netty.stickpackage.myself;

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.handler.codec.LineBasedFrameDecoder;
import io.netty.handler.codec.string.StringDecoder;

import java.net.InetSocketAddress;

public class BaseServer {

    private int port;

    public BaseServer(int port) {
        this.port = port;
    }

    public void start() {
        EventLoopGroup bossGroup = new NioEventLoopGroup(1);
        EventLoopGroup workerGroup = new NioEventLoopGroup();
        try {
            ServerBootstrap sbs = new ServerBootstrap()
                    .group(bossGroup, workerGroup)
                    .channel(NioServerSocketChannel.class)
                    .localAddress(new InetSocketAddress(port))
                    .childHandler(new ChannelInitializer<SocketChannel>() {
                        protected void initChannel(SocketChannel ch) throws Exception {
                            ch.pipeline().addLast(new LineBasedFrameDecoder(2048));
                            ch.pipeline().addLast(new StringDecoder());
                            ch.pipeline().addLast(new BaseServerHandler());
                        }
                    })
                    .option(ChannelOption.SO_BACKLOG, 128)
                    .childOption(ChannelOption.SO_KEEPALIVE, true);
            // Bind the port and start accepting incoming connections
            ChannelFuture future = sbs.bind(port).sync();
            System.out.println("Server start listen at " + port);
            future.channel().closeFuture().sync();
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            // Shut down the event loop groups whether or not an exception occurred
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }
    }

    public static void main(String[] args) throws Exception {
        int port;
        if (args.length > 0) {
            port = Integer.parseInt(args[0]);
        } else {
            port = 8080;
        }
        new BaseServer(port).start();
    }
}

The only change is the LineBasedFrameDecoder added in initChannel; the 2048 argument is the maximum number of bytes allowed in one line.

Running it again:

(screenshot omitted)

You can see that the two msg writes sent by the client are received by the server as two separate messages, which is exactly the effect we wanted.

Now cut the LineBasedFrameDecoder argument in half, from 2048 down to 1024, and look at the result:

(screenshot omitted)

An exception appears: TooLongFrameException, which is also described in Netty in Action. It means the frame was too large. In our scenario, the line we send is 1076 bytes, which exceeds the 1024-byte limit we configured, hence the error.
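The guard that triggers this failure is easy to picture. Here is a simplified sketch of the length check (our own illustration, not Netty's actual implementation, and it throws IllegalStateException where Netty would throw TooLongFrameException): scan for a line ending, and give up as soon as more than maxLength bytes pass without one.

```java
public class MaxLengthLineSplitter {

    /**
     * Returns the first line in data (without its terminator), null if no
     * complete line has arrived yet, or throws once the frame exceeds maxLength.
     */
    public static String firstLine(byte[] data, int maxLength) {
        for (int i = 0; i < data.length; i++) {
            if (data[i] == '\n') {
                // Strip an optional preceding '\r' so both "\n" and "\r\n" work.
                int end = (i > 0 && data[i - 1] == '\r') ? i - 1 : i;
                return new String(data, 0, end);
            }
            if (i >= maxLength) {
                // Netty raises TooLongFrameException at the analogous point.
                throw new IllegalStateException("frame exceeds max length " + maxLength);
            }
        }
        return null; // no line ending yet: keep buffering
    }
}
```

For reference, LineBasedFrameDecoder also has a three-argument constructor, LineBasedFrameDecoder(maxLength, stripDelimiter, failFast), matching the fields shown in its declaration above: failFast controls whether the exception is thrown as soon as maxLength is exceeded or only after the whole over-long frame has been read and discarded.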

Now for the other sticky-packet problem. In the case from the previous section the client sent the message "BazingaLyncc is learner" 100 times. This case is special: every message has the same fixed length, 23 bytes, so we can use the FixedLengthFrameDecoder that Netty provides. The name says most of it: a decoder for fixed-length data frames. Let's modify the code:

BaseClientHandler:

package com.lyncc.netty.stickpackage.myself;

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

public class BaseClientHandler extends ChannelInboundHandlerAdapter {

    private byte[] req;

    public BaseClientHandler() {
        req = ("BazingaLyncc is learner").getBytes();
        // req = ("In this chapter you general, we recommend Java Concurrency in Practice by Brian Goetz. His book w"
        //         + "ill give We’ve reached an exciting point—in the next chapter we’ll discuss bootstrapping, the process "
        //         + "of configuring and connecting all of Netty’s components to bring your learned about threading models in ge"
        //         + "neral and Netty’s threading model in particular, whose performance and consistency advantages we discuss"
        //         + "ed in detail In this chapter you general, we recommend Java Concurrency in Practice by Brian Goetz. Hi"
        //         + "s book will give We’ve reached an exciting point—in the next chapter we’ll discuss bootstrapping, the"
        //         + " process of configuring and connecting all of Netty’s components to bring your learned about threading "
        //         + "models in general and Netty’s threading model in particular, whose performance and consistency advantag"
        //         + "es we discussed in detailIn this chapter you general, we recommend Java Concurrency in Practice by Bri"
        //         + "an Goetz. His book will give We’ve reached an exciting point—in the next chapter;the counter is: 1 2222"
        //         + "sdsa ddasd asdsadas dsadasdas" + System.getProperty("line.separator")).getBytes();
    }

    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        ByteBuf message = null;
        for (int i = 0; i < 100; i++) {
            message = Unpooled.buffer(req.length);
            message.writeBytes(req);
            ctx.writeAndFlush(message);
        }
        // message = Unpooled.buffer(req.length);
        // message.writeBytes(req);
        // ctx.writeAndFlush(message);
        // message = Unpooled.buffer(req.length);
        // message.writeBytes(req);
        // ctx.writeAndFlush(message);
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        ctx.close();
    }
}

BaseServer:

package com.lyncc.netty.stickpackage.myself;

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.handler.codec.FixedLengthFrameDecoder;
import io.netty.handler.codec.string.StringDecoder;

import java.net.InetSocketAddress;

public class BaseServer {

    private int port;

    public BaseServer(int port) {
        this.port = port;
    }

    public void start() {
        EventLoopGroup bossGroup = new NioEventLoopGroup(1);
        EventLoopGroup workerGroup = new NioEventLoopGroup();
        try {
            ServerBootstrap sbs = new ServerBootstrap()
                    .group(bossGroup, workerGroup)
                    .channel(NioServerSocketChannel.class)
                    .localAddress(new InetSocketAddress(port))
                    .childHandler(new ChannelInitializer<SocketChannel>() {
                        protected void initChannel(SocketChannel ch) throws Exception {
                            ch.pipeline().addLast(new FixedLengthFrameDecoder(23));
                            ch.pipeline().addLast(new StringDecoder());
                            ch.pipeline().addLast(new BaseServerHandler());
                        }
                    })
                    .option(ChannelOption.SO_BACKLOG, 128)
                    .childOption(ChannelOption.SO_KEEPALIVE, true);
            // Bind the port and start accepting incoming connections
            ChannelFuture future = sbs.bind(port).sync();
            System.out.println("Server start listen at " + port);
            future.channel().closeFuture().sync();
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            // Shut down the event loop groups whether or not an exception occurred
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }
    }

    public static void main(String[] args) throws Exception {
        int port;
        if (args.length > 0) {
            port = Integer.parseInt(args[0]);
        } else {
            port = 8080;
        }
        new BaseServer(port).start();
    }
}

All we did was add a FixedLengthFrameDecoder with argument 23 to the channel handler chain, telling Netty to cut off a frame every 23 bytes of received data.
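What FixedLengthFrameDecoder does can be sketched in a few lines of plain Java (again our own illustration, not Netty code): the accumulated bytes are simply cut every frameLength bytes, and any tail shorter than frameLength stays buffered until more data arrives.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class FixedLengthSplitter {

    /**
     * Cuts complete frameLength-sized frames off the front of data.
     * An incomplete tail (fewer than frameLength bytes) is left for later.
     */
    public static List<byte[]> split(byte[] data, int frameLength) {
        List<byte[]> frames = new ArrayList<>();
        for (int off = 0; off + frameLength <= data.length; off += frameLength) {
            frames.add(Arrays.copyOfRange(data, off, off + frameLength));
        }
        return frames;
    }

    public static void main(String[] args) {
        // Three 23-byte messages arrive glued together in one TCP read:
        String req = "BazingaLyncc is learner"; // 23 bytes, as in the article
        List<byte[]> frames = split((req + req + req).getBytes(), 23);
        System.out.println(frames.size() + " frames"); // prints 3 frames
    }
}
```

This only works because every message has exactly the same length; a stream of variable-length messages would need one of the other decoders.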

The output:

(screenshots omitted)

As you can see, we get exactly the result we wanted.

Of course, Netty provides other decoders as well, each with its own use case. For example, DelimiterBasedFrameDecoder splits the stream on an arbitrary fixed delimiter.

Let's modify the code once more:

BaseClientHandler.java

package com.lyncc.netty.stickpackage.myself;

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

public class BaseClientHandler extends ChannelInboundHandlerAdapter {

    private byte[] req;

    public BaseClientHandler() {
        // req = ("BazingaLyncc is learner").getBytes();
        req = ("In this chapter you general, we recommend Java Concurrency in Practice by Brian Goetz. $$__ His book w"
                + "ill give We’ve reached an exciting point—in the next chapter we’ll $$__ discuss bootstrapping, the process "
                + "of configuring and connecting all of Netty’s components to bring $$__ your learned about threading models in ge"
                + "neral and Netty’s threading model in particular, whose performance $$__ and consistency advantages we discuss"
                + "ed in detail In this chapter you general, we recommend Java $$__Concurrency in Practice by Brian Goetz. Hi"
                + "s book will give We’ve reached an exciting point—in the next $$__ chapter we’ll discuss bootstrapping, the"
                + " process of configuring and connecting all of Netty’s components $$__ to bring your learned about threading "
                + "models in general and Netty’s threading model in particular, $$__ whose performance and consistency advantag"
                + "es we discussed in detailIn this chapter you general, $$__ we recommend Java Concurrency in Practice by Bri"
                + "an Goetz. His book will give We’ve reached an exciting $$__ point—in the next chapter;the counter is: 1 2222"
                + "sdsa ddasd asdsadas dsadasdas" + System.getProperty("line.separator")).getBytes();
    }

    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        ByteBuf message = null;
        // for (int i = 0; i < 100; i++) {
        //     message = Unpooled.buffer(req.length);
        //     message.writeBytes(req);
        //     ctx.writeAndFlush(message);
        // }
        message = Unpooled.buffer(req.length);
        message.writeBytes(req);
        ctx.writeAndFlush(message);
        message = Unpooled.buffer(req.length);
        message.writeBytes(req);
        ctx.writeAndFlush(message);
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        ctx.close();
    }
}

We sprinkled the delimiter "$$__" through the req string, and then, as before, added a DelimiterBasedFrameDecoder on the server to split the stream:

ServerBootstrap sbs = new ServerBootstrap()
        .group(bossGroup, workerGroup)
        .channel(NioServerSocketChannel.class)
        .localAddress(new InetSocketAddress(port))
        .childHandler(new ChannelInitializer<SocketChannel>() {
            protected void initChannel(SocketChannel ch) throws Exception {
                ch.pipeline().addLast(new DelimiterBasedFrameDecoder(1024, Unpooled.copiedBuffer("$$__".getBytes())));
                ch.pipeline().addLast(new StringDecoder());
                ch.pipeline().addLast(new BaseServerHandler());
            }
        })
        .option(ChannelOption.SO_BACKLOG, 128)
        .childOption(ChannelOption.SO_KEEPALIVE, true);

We added DelimiterBasedFrameDecoder as the first inbound handler in initChannel and declared "$$__" as the delimiter, so the stream is split correctly. The result:

(screenshot omitted)

The data was read in 20 parts. We can understand it this way: the client sent the req bytes twice, and each req contains ten "$$__" delimiters, so there are 20 delimiters in total. The sticking shows up at the 11th frame: the tail of the first req (after its last delimiter) and the head of the second req were glued together and delivered as the 11th part.

The final part of the second req never appears on the console, because no trailing "$$__" follows it, so the decoder keeps buffering it.

Handlers of this kind are fairly simple. In a real production environment these decoders only cover the basic splitting cases, but they are still very useful.
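The framing convention most often used in production is a length prefix: the sender writes the payload size first, and the receiver reads exactly that many bytes. Netty supports this out of the box with LengthFieldBasedFrameDecoder (typically paired with LengthFieldPrepender on the sending side). The underlying idea, sketched in plain Java without Netty (names are ours):

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class LengthPrefixFraming {

    /** Prepends a 4-byte big-endian length field to the payload. */
    public static byte[] encode(byte[] payload) {
        return ByteBuffer.allocate(4 + payload.length)
                .putInt(payload.length)
                .put(payload)
                .array();
    }

    /** Decodes every complete length-prefixed frame in the stream. */
    public static List<byte[]> decode(byte[] stream) {
        List<byte[]> frames = new ArrayList<>();
        ByteBuffer buf = ByteBuffer.wrap(stream);
        while (buf.remaining() >= 4) {
            buf.mark();
            int len = buf.getInt();
            if (buf.remaining() < len) {
                buf.reset(); // incomplete frame: wait for more bytes
                break;
            }
            byte[] frame = new byte[len];
            buf.get(frame);
            frames.add(frame);
        }
        return frames;
    }
}
```

Compared with delimiters, a length prefix never collides with payload content and lets the receiver know up front how many bytes to expect, which is why it is the usual choice for binary protocols.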

I hope this was helpful. END.
