How Netty Works Under the Hood (Netty Principle Analysis and Development in Practice)


Heartbeat Mechanism

What Is a Heartbeat

A heartbeat is a small, special packet that the client and server exchange periodically over a long-lived TCP connection, telling the other side "I am still here" and thereby keeping the connection verifiably alive.

Note: heartbeat packets serve a second, often-overlooked purpose: if a connection stays idle for too long, a firewall or router along the path may silently drop it; periodic heartbeat traffic prevents that.

How to Implement It

The Core Handler: IdleStateHandler

In Netty, the key to implementing heartbeats is IdleStateHandler. How is this handler used? Start with its constructor:

```java
public IdleStateHandler(int readerIdleTimeSeconds, int writerIdleTimeSeconds, int allIdleTimeSeconds) {
    this((long) readerIdleTimeSeconds, (long) writerIdleTimeSeconds, (long) allIdleTimeSeconds, TimeUnit.SECONDS);
}
```

The three parameters mean:

  • readerIdleTimeSeconds: read timeout. If no data is read from the Channel within this interval, a READER_IDLE IdleStateEvent is fired.
  • writerIdleTimeSeconds: write timeout. If no data is written to the Channel within this interval, a WRITER_IDLE IdleStateEvent is fired.
  • allIdleTimeSeconds: read/write timeout. If neither a read nor a write happens within this interval, an ALL_IDLE IdleStateEvent is fired.

Note: the default time unit for these three parameters is seconds. To use a different unit, call the other constructor: IdleStateHandler(boolean observeOutput, long readerIdleTime, long writerIdleTime, long allIdleTime, TimeUnit unit)

Before reading the implementation below, it helps to first understand how IdleStateHandler itself works.
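As background, the core idea inside IdleStateHandler can be sketched in plain Java. This is a simplified illustrative model, not Netty's actual code (Netty reschedules a task on the channel's event loop): record the timestamp of the last read, and periodically compare the elapsed time against the configured threshold.

```java
// Simplified model of read-idle detection: an idle event should fire
// when (now - lastRead) reaches the timeout. Illustrative sketch only,
// not Netty's actual implementation.
class ReadIdleMonitor {
    private final long timeoutNanos;
    private long lastReadNanos;

    ReadIdleMonitor(long timeoutNanos, long nowNanos) {
        this.timeoutNanos = timeoutNanos;
        this.lastReadNanos = nowNanos;
    }

    /** Called whenever data is read from the channel. */
    void onRead(long nowNanos) {
        this.lastReadNanos = nowNanos;
    }

    /** Returns true if a READER_IDLE event should fire at time nowNanos. */
    boolean isReadIdle(long nowNanos) {
        return nowNanos - lastReadNanos >= timeoutNanos;
    }
}
```

Every read pushes the idle deadline further into the future; the server-side `IdleStateHandler(5, 0, 0)` used later in this article applies exactly this rule with a 5-second read timeout.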

The code follows; points that deserve attention are called out in comments.

Implementing a Heartbeat with IdleStateHandler

Below, IdleStateHandler is used to implement the heartbeat. After the client connects to the server, it runs a task in a loop: wait a random number of seconds, then ping the server, that is, send a heartbeat packet. When the wait exceeds the allowed limit, the send fails, because the server has already closed the connection by then. The code:

Client Side

ClientIdleStateTrigger: the heartbeat trigger

ClientIdleStateTrigger is also a Handler; it only overrides userEventTriggered to catch the IdleState.WRITER_IDLE event (no data sent to the server within the configured interval) and then send a heartbeat packet to the server.

```java
/**
 * Catches the {@link IdleState#WRITER_IDLE} event (no data sent to the server
 * within the configured interval) and sends a heartbeat packet to the server.
 */
public class ClientIdleStateTrigger extends ChannelInboundHandlerAdapter {

    public static final String HEART_BEAT = "heart beat!";

    @Override
    public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
        if (evt instanceof IdleStateEvent) {
            IdleState state = ((IdleStateEvent) evt).state();
            if (state == IdleState.WRITER_IDLE) {
                // write heartbeat to server
                ctx.writeAndFlush(HEART_BEAT);
            }
        } else {
            super.userEventTriggered(ctx, evt);
        }
    }
}
```

Pinger: the heartbeat sender

```java
/**
 * After connecting to the server, the client runs a task in a loop:
 * wait a random number of seconds, then ping the server (send a heartbeat packet).
 */
public class Pinger extends ChannelInboundHandlerAdapter {

    private Random random = new Random();
    private int baseRandom = 8;

    private Channel channel;

    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        super.channelActive(ctx);
        this.channel = ctx.channel();

        ping(ctx.channel());
    }

    private void ping(Channel channel) {
        int second = Math.max(1, random.nextInt(baseRandom));
        System.out.println("next heart beat will send after " + second + "s.");
        ScheduledFuture<?> future = channel.eventLoop().schedule(new Runnable() {
            @Override
            public void run() {
                if (channel.isActive()) {
                    System.out.println("sending heart beat to the server...");
                    channel.writeAndFlush(ClientIdleStateTrigger.HEART_BEAT);
                } else {
                    System.err.println("The connection had broken, cancel the task that will send a heart beat.");
                    channel.closeFuture();
                    throw new RuntimeException();
                }
            }
        }, second, TimeUnit.SECONDS);

        future.addListener(new GenericFutureListener() {
            @Override
            public void operationComplete(Future future) throws Exception {
                if (future.isSuccess()) {
                    ping(channel);
                }
            }
        });
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        // Writing to a Channel that is already closed throws an exception,
        // which causes this method to be invoked.
        cause.printStackTrace();
        ctx.close();
    }
}
```

ClientHandlersInitializer: initializes the client-side handlers

```java
public class ClientHandlersInitializer extends ChannelInitializer<SocketChannel> {

    private ReconnectHandler reconnectHandler;
    private EchoHandler echoHandler;

    public ClientHandlersInitializer(TcpClient tcpClient) {
        Assert.notNull(tcpClient, "TcpClient can not be null.");
        this.reconnectHandler = new ReconnectHandler(tcpClient);
        this.echoHandler = new EchoHandler();
    }

    @Override
    protected void initChannel(SocketChannel ch) throws Exception {
        ChannelPipeline pipeline = ch.pipeline();
        pipeline.addLast(new LengthFieldBasedFrameDecoder(Integer.MAX_VALUE, 0, 4, 0, 4));
        pipeline.addLast(new LengthFieldPrepender(4));
        pipeline.addLast(new StringDecoder(CharsetUtil.UTF_8));
        pipeline.addLast(new StringEncoder(CharsetUtil.UTF_8));
        pipeline.addLast(new Pinger());
    }
}
```

Note: of the handlers above, all except Pinger are codecs or deal with TCP framing (sticky/half packets); they can be ignored here.
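For readers unfamiliar with those framing handlers: the length-field codec pair simply writes a 4-byte length prefix in front of each message and strips it again on the way in. A plain java.nio sketch of that framing (illustrative only, not Netty code) shows what LengthFieldPrepender(4) and LengthFieldBasedFrameDecoder(Integer.MAX_VALUE, 0, 4, 0, 4) do with each message:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Illustrative sketch of 4-byte length-prefix framing, mirroring what
// LengthFieldPrepender(4) / LengthFieldBasedFrameDecoder(..., 0, 4, 0, 4) do.
class LengthPrefixFraming {

    /** Encode: prepend the payload length as a 4-byte big-endian int. */
    static byte[] encode(String message) {
        byte[] payload = message.getBytes(StandardCharsets.UTF_8);
        ByteBuffer buf = ByteBuffer.allocate(4 + payload.length);
        buf.putInt(payload.length); // the length field (lengthFieldLength = 4)
        buf.put(payload);
        return buf.array();
    }

    /** Decode: read the length field, then extract exactly that many bytes. */
    static String decode(byte[] frame) {
        ByteBuffer buf = ByteBuffer.wrap(frame);
        int length = buf.getInt();          // read the 4-byte length header
        byte[] payload = new byte[length];  // initialBytesToStrip = 4 drops the header
        buf.get(payload);
        return new String(payload, StandardCharsets.UTF_8);
    }
}
```

Because the receiver always knows exactly how many bytes belong to the current message, TCP's stream nature (sticky and half packets) stops being a problem.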

TcpClient: the TCP client

```java
public class TcpClient {

    private String host;
    private int port;
    private Bootstrap bootstrap;
    /** Keep the Channel so data can be sent from places other than handlers. */
    private Channel channel;

    public TcpClient(String host, int port) {
        this(host, port, new ExponentialBackOffRetry(1000, Integer.MAX_VALUE, 60 * 1000));
    }

    public TcpClient(String host, int port, RetryPolicy retryPolicy) {
        this.host = host;
        this.port = port;
        init();
    }

    /**
     * Connect to the remote TCP server.
     */
    public void connect() {
        synchronized (bootstrap) {
            ChannelFuture future = bootstrap.connect(host, port);
            this.channel = future.channel();
        }
    }

    private void init() {
        EventLoopGroup group = new NioEventLoopGroup();
        // The bootstrap is reusable; it only needs to be initialized once,
        // when TcpClient is instantiated.
        bootstrap = new Bootstrap();
        bootstrap.group(group)
                .channel(NioSocketChannel.class)
                .handler(new ClientHandlersInitializer(TcpClient.this));
    }

    public static void main(String[] args) {
        TcpClient tcpClient = new TcpClient("localhost", 2222);
        tcpClient.connect();
    }
}
```

Server Side

ServerIdleStateTrigger: the disconnect trigger

```java
/**
 * If no data at all is received from the client within the allowed time,
 * the server actively closes the connection.
 */
public class ServerIdleStateTrigger extends ChannelInboundHandlerAdapter {
    @Override
    public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
        if (evt instanceof IdleStateEvent) {
            IdleState state = ((IdleStateEvent) evt).state();
            if (state == IdleState.READER_IDLE) {
                // No upstream data from the client within the allowed time: disconnect.
                ctx.disconnect();
            }
        } else {
            super.userEventTriggered(ctx, evt);
        }
    }
}
```

ServerBizHandler: the server-side business handler

```java
/**
 * Prints every packet received from the client to the console.
 */
@ChannelHandler.Sharable
public class ServerBizHandler extends SimpleChannelInboundHandler<String> {

    private final String REC_HEART_BEAT = "I had received the heart beat!";

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, String data) throws Exception {
        try {
            System.out.println("receive data: " + data);
            // ctx.writeAndFlush(REC_HEART_BEAT);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        System.out.println("Established connection with the remote client.");

        // do something

        ctx.fireChannelActive();
    }

    @Override
    public void channelInactive(ChannelHandlerContext ctx) throws Exception {
        System.out.println("Disconnected with the remote client.");

        // do something

        ctx.fireChannelInactive();
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        cause.printStackTrace();
        ctx.close();
    }
}
```

ServerHandlerInitializer: initializes the server-side handlers

```java
/**
 * Initializes all handlers used on the server side.
 */
public class ServerHandlerInitializer extends ChannelInitializer<SocketChannel> {

    protected void initChannel(SocketChannel ch) throws Exception {
        ch.pipeline().addLast("idleStateHandler", new IdleStateHandler(5, 0, 0));
        ch.pipeline().addLast("idleStateTrigger", new ServerIdleStateTrigger());
        ch.pipeline().addLast("frameDecoder", new LengthFieldBasedFrameDecoder(Integer.MAX_VALUE, 0, 4, 0, 4));
        ch.pipeline().addLast("frameEncoder", new LengthFieldPrepender(4));
        ch.pipeline().addLast("decoder", new StringDecoder());
        ch.pipeline().addLast("encoder", new StringEncoder());
        ch.pipeline().addLast("bizHandler", new ServerBizHandler());
    }
}
```

Note: the new IdleStateHandler(5, 0, 0) handler means that if no packet at all (heartbeat or otherwise) arrives from a client within 5 seconds, the server actively closes that client's connection.

TcpServer: the server

```java
public class TcpServer {
    private int port;
    private ServerHandlerInitializer serverHandlerInitializer;

    public TcpServer(int port) {
        this.port = port;
        this.serverHandlerInitializer = new ServerHandlerInitializer();
    }

    public void start() {
        EventLoopGroup bossGroup = new NioEventLoopGroup(1);
        EventLoopGroup workerGroup = new NioEventLoopGroup();
        try {
            ServerBootstrap bootstrap = new ServerBootstrap();
            bootstrap.group(bossGroup, workerGroup)
                    .channel(NioServerSocketChannel.class)
                    .childHandler(this.serverHandlerInitializer);
            // Bind the port and start accepting incoming connections.
            ChannelFuture future = bootstrap.bind(port).sync();

            System.out.println("Server start listen at " + port);
            future.channel().closeFuture().sync();
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }
    }

    public static void main(String[] args) throws Exception {
        int port = 2222;
        new TcpServer(port).start();
    }
}
```

That completes the code.

Testing

Start the server first, then the client. Once both are up, the client console prints logs like the following:

[Figure: log output on the client console]

On the server side, the console prints logs like the following:

[Figure: log output on the server console]

As the logs show, after the client sends 4 heartbeat packets, the 5th one waits too long; by the time it is actually sent, the connection has already been closed. On the other side, after receiving the client's 4 heartbeats, the server waits in vain for the next packet and decisively closes the connection.

During testing, the following situation may occur:

[Figure: exception during testing]

The cause: a heartbeat was sent on a connection that had already been closed. Although the code checks whether the connection is usable before sending, the check and the send are not atomic: the connection can look alive at the moment of the check and be gone an instant later, before the packet actually goes out.

I have not yet found an elegant way to handle this situation; if you have a good solution, please share it.
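One common way to narrow this check-then-act race is to stop pre-checking and instead react to the outcome of the write itself. In Netty that would mean attaching a listener to the ChannelFuture returned by writeAndFlush and scheduling recovery when it fails, e.g. `channel.writeAndFlush(msg).addListener(f -> { if (!f.isSuccess()) ... })`. The same pattern in plain Java, using CompletableFuture as a stand-in for the write future (a hedged sketch; the names and helpers here are illustrative, not from the article's code):

```java
import java.util.concurrent.CompletableFuture;

// "Act on the result, don't pre-check": always attempt the write and
// handle failure in a callback, instead of testing isActive() first.
class SendWithFallback {

    /** Try to send; on failure run the recovery action instead of throwing. */
    static CompletableFuture<Boolean> send(CompletableFuture<Void> writeResult,
                                           Runnable onFailure) {
        return writeResult
                .thenApply(v -> true)
                .exceptionally(err -> {
                    onFailure.run(); // e.g. schedule a reconnect attempt
                    return false;
                });
    }

    /** Stand-in for a write that succeeded. */
    static CompletableFuture<Void> okWrite() {
        return CompletableFuture.completedFuture(null);
    }

    /** Stand-in for a write that failed because the peer closed the connection. */
    static CompletableFuture<Void> failedWrite() {
        CompletableFuture<Void> f = new CompletableFuture<>();
        f.completeExceptionally(new RuntimeException("connection reset by peer"));
        return f;
    }
}
```

This does not make the race disappear (the connection can still die mid-write), but it turns the failure into a handled event rather than an uncaught exception.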

Reconnecting After Disconnect

Reconnection needs little introduction; you all know what it is. Here is the rough idea, followed directly by the code.

Approach

When the client detects that the connection to the server has dropped, or fails to connect in the first place, it reconnects according to a configured retry policy until the connection is re-established or the retries are exhausted.

Detecting the disconnect is done by overriding ChannelInboundHandler#channelInactive: when the connection becomes unavailable, this method fires, so the reconnect work only needs to happen there.

Implementation

Note: the code below modifies and extends the heartbeat code above.

Since reconnecting is the client's job, only the client code needs to change.

Retry Policy

RetryPolicy: the retry-policy interface

```java
public interface RetryPolicy {

    /**
     * Called when an operation has failed for some reason. This method should return
     * true to make another attempt.
     *
     * @param retryCount the number of times retried so far (0 the first time)
     * @return true/false
     */
    boolean allowRetry(int retryCount);

    /**
     * Get the sleep time in ms for the current retry count.
     *
     * @param retryCount current retry count
     * @return the time to sleep
     */
    long getSleepTimeMs(int retryCount);
}
```

ExponentialBackOffRetry: the default retry policy

```java
/**
 * Retry policy that retries a set number of times, with an increasing sleep time between retries.
 */
public class ExponentialBackOffRetry implements RetryPolicy {

    private static final int MAX_RETRIES_LIMIT = 29;
    private static final int DEFAULT_MAX_SLEEP_MS = Integer.MAX_VALUE;

    private final Random random = new Random();
    private final long baseSleepTimeMs;
    private final int maxRetries;
    private final int maxSleepMs;

    public ExponentialBackOffRetry(int baseSleepTimeMs, int maxRetries) {
        this(baseSleepTimeMs, maxRetries, DEFAULT_MAX_SLEEP_MS);
    }

    public ExponentialBackOffRetry(int baseSleepTimeMs, int maxRetries, int maxSleepMs) {
        this.maxRetries = maxRetries;
        this.baseSleepTimeMs = baseSleepTimeMs;
        this.maxSleepMs = maxSleepMs;
    }

    @Override
    public boolean allowRetry(int retryCount) {
        return retryCount < maxRetries;
    }

    @Override
    public long getSleepTimeMs(int retryCount) {
        if (retryCount < 0) {
            throw new IllegalArgumentException("retry count must not be negative.");
        }
        if (retryCount > MAX_RETRIES_LIMIT) {
            System.out.println(String.format("maxRetries too large (%d). Pinning to %d", maxRetries, MAX_RETRIES_LIMIT));
            retryCount = MAX_RETRIES_LIMIT;
        }
        long sleepMs = baseSleepTimeMs * Math.max(1, random.nextInt(1 << retryCount));
        if (sleepMs > maxSleepMs) {
            System.out.println(String.format("Sleep extension too large (%d). Pinning to %d", sleepMs, maxSleepMs));
            sleepMs = maxSleepMs;
        }
        return sleepMs;
    }
}
```

ReconnectHandler: the reconnect handler

```java
@ChannelHandler.Sharable
public class ReconnectHandler extends ChannelInboundHandlerAdapter {

    private int retries = 0;
    private RetryPolicy retryPolicy;

    private TcpClient tcpClient;

    public ReconnectHandler(TcpClient tcpClient) {
        this.tcpClient = tcpClient;
    }

    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        System.out.println("Successfully established a connection to the server.");
        retries = 0;
        ctx.fireChannelActive();
    }

    @Override
    public void channelInactive(ChannelHandlerContext ctx) throws Exception {
        if (retries == 0) {
            System.err.println("Lost the TCP connection with the server.");
            ctx.close();
        }

        boolean allowRetry = getRetryPolicy().allowRetry(retries);
        if (allowRetry) {

            long sleepTimeMs = getRetryPolicy().getSleepTimeMs(retries);

            System.out.println(String.format("Try to reconnect to the server after %dms. Retry count: %d.", sleepTimeMs, ++retries));

            final EventLoop eventLoop = ctx.channel().eventLoop();
            eventLoop.schedule(() -> {
                System.out.println("Reconnecting ...");
                tcpClient.connect();
            }, sleepTimeMs, TimeUnit.MILLISECONDS);
        }
        ctx.fireChannelInactive();
    }

    private RetryPolicy getRetryPolicy() {
        if (this.retryPolicy == null) {
            this.retryPolicy = tcpClient.getRetryPolicy();
        }
        return this.retryPolicy;
    }
}
```

ClientHandlersInitializer

Compared with the earlier version, the reconnect handler ReconnectHandler is added to the front of the pipeline.

```java
public class ClientHandlersInitializer extends ChannelInitializer<SocketChannel> {

    private ReconnectHandler reconnectHandler;
    private EchoHandler echoHandler;

    public ClientHandlersInitializer(TcpClient tcpClient) {
        Assert.notNull(tcpClient, "TcpClient can not be null.");
        this.reconnectHandler = new ReconnectHandler(tcpClient);
        this.echoHandler = new EchoHandler();
    }

    @Override
    protected void initChannel(SocketChannel ch) throws Exception {
        ChannelPipeline pipeline = ch.pipeline();
        pipeline.addLast(this.reconnectHandler);
        pipeline.addLast(new LengthFieldBasedFrameDecoder(Integer.MAX_VALUE, 0, 4, 0, 4));
        pipeline.addLast(new LengthFieldPrepender(4));
        pipeline.addLast(new StringDecoder(CharsetUtil.UTF_8));
        pipeline.addLast(new StringEncoder(CharsetUtil.UTF_8));
        pipeline.addLast(new Pinger());
    }
}
```

TcpClient

Adds reconnection and retry-policy support on top of the earlier version.

```java
public class TcpClient {

    private String host;
    private int port;
    private Bootstrap bootstrap;
    /** The retry policy. */
    private RetryPolicy retryPolicy;
    /** Keep the Channel so data can be sent from places other than handlers. */
    private Channel channel;

    public TcpClient(String host, int port) {
        this(host, port, new ExponentialBackOffRetry(1000, Integer.MAX_VALUE, 60 * 1000));
    }

    public TcpClient(String host, int port, RetryPolicy retryPolicy) {
        this.host = host;
        this.port = port;
        this.retryPolicy = retryPolicy;
        init();
    }

    /**
     * Connect to the remote TCP server.
     */
    public void connect() {
        synchronized (bootstrap) {
            ChannelFuture future = bootstrap.connect(host, port);
            future.addListener(getConnectionListener());
            this.channel = future.channel();
        }
    }

    public RetryPolicy getRetryPolicy() {
        return retryPolicy;
    }

    private void init() {
        EventLoopGroup group = new NioEventLoopGroup();
        // The bootstrap is reusable; it only needs to be initialized once,
        // when TcpClient is instantiated.
        bootstrap = new Bootstrap();
        bootstrap.group(group)
                .channel(NioSocketChannel.class)
                .handler(new ClientHandlersInitializer(TcpClient.this));
    }

    private ChannelFutureListener getConnectionListener() {
        return new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture future) throws Exception {
                if (!future.isSuccess()) {
                    future.channel().pipeline().fireChannelInactive();
                }
            }
        };
    }

    public static void main(String[] args) {
        TcpClient tcpClient = new TcpClient("localhost", 2222);
        tcpClient.connect();
    }
}
```

Testing

Before testing, to avoid the Connection reset by peer exception, the Pinger's ping method can be tweaked slightly by adding an if (second == 5) check, as follows:

```java
private void ping(Channel channel) {
    int second = Math.max(1, random.nextInt(baseRandom));
    if (second == 5) {
        second = 6;
    }
    System.out.println("next heart beat will send after " + second + "s.");
    ScheduledFuture<?> future = channel.eventLoop().schedule(new Runnable() {
        @Override
        public void run() {
            if (channel.isActive()) {
                System.out.println("sending heart beat to the server...");
                channel.writeAndFlush(ClientIdleStateTrigger.HEART_BEAT);
            } else {
                System.err.println("The connection had broken, cancel the task that will send a heart beat.");
                channel.closeFuture();
                throw new RuntimeException();
            }
        }
    }, second, TimeUnit.SECONDS);

    future.addListener(new GenericFutureListener() {
        @Override
        public void operationComplete(Future future) throws Exception {
            if (future.isSuccess()) {
                ping(channel);
            }
        }
    });
}
```

Start the Client

Start only the client and watch its console; logs like the following appear:

[Figure: reconnection test, client console output]

The client cannot reach the server, so it keeps retrying. As the retry count grows, so does the interval between retries, but it should not grow without bound, so a threshold is set, for example 60 s. As the figure shows, once the next computed retry delay exceeds 60 s, the log prints Sleep extension too large (*). Pinning to 60000 (in ms), meaning the computed delay exceeded the threshold and the actual sleep time was pinned to it (60 s).
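The capping behavior described above comes down to a few lines of arithmetic. Here is a simplified, deterministic version of the getSleepTimeMs logic shown earlier (the real ExponentialBackOffRetry also multiplies by a random jitter factor, which is omitted here for clarity):

```java
// Simplified, jitter-free backoff capping: sleep grows as base * 2^retryCount,
// then is pinned to maxSleepMs, matching the "Pinning to 60000" log above.
class BackoffCap {
    static long sleepTimeMs(long baseMs, int retryCount, long maxSleepMs) {
        long sleep = baseMs * (1L << retryCount); // exponential growth
        return Math.min(sleep, maxSleepMs);       // pin to the threshold
    }
}
```

With a 1000 ms base and a 60 s cap, the delay doubles each retry until retry 6 (64 s computed), after which every subsequent retry sleeps exactly 60 s.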

Start the Server

Now start the server and keep watching the client console.

[Figure: reconnection test, client console output after the server starts]

After the 9th retry failed and before the 10th was attempted, the server was started, so the 10th reconnect succeeded. After that, the server was stopped and restarted from time to time, producing the repeated disconnect-and-reconnect cycles visible in the log.

Extending It

Different environments may have different reconnection requirements. If yours differ, simply implement the RetryPolicy interface yourself and pass your policy in when creating the TcpClient, overriding the default one.
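For example, a fixed-interval policy that gives up after a set number of attempts could look like this (an illustrative implementation, not part of the article's original code; the RetryPolicy interface is repeated here so the sketch compiles standalone):

```java
// The RetryPolicy interface, repeated so this sketch is self-contained.
interface RetryPolicy {
    boolean allowRetry(int retryCount);
    long getSleepTimeMs(int retryCount);
}

// Illustrative custom policy: retry at a fixed interval, at most maxRetries times.
// Usage (hypothetical): new TcpClient("localhost", 2222, new FixedIntervalRetry(5000, 10));
class FixedIntervalRetry implements RetryPolicy {
    private final long intervalMs;
    private final int maxRetries;

    FixedIntervalRetry(long intervalMs, int maxRetries) {
        this.intervalMs = intervalMs;
        this.maxRetries = maxRetries;
    }

    @Override
    public boolean allowRetry(int retryCount) {
        return retryCount < maxRetries;
    }

    @Override
    public long getSleepTimeMs(int retryCount) {
        return intervalMs; // same delay every time, no backoff
    }
}
```

A fixed interval is often enough inside a LAN, while the exponential policy above is friendlier to a server recovering under load.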
