IO Models
IO multiplexing patterns: Reactor and Proactor
NIO implements the Reactor pattern. Through select/epoll, a single user thread can monitor IO events on many Channels at once; when data is ready, the actual user thread performs the data copy and business logic:
- Register the read event and its corresponding event handler
- The event demultiplexer (select/epoll) waits for events
- When an event arrives, the demultiplexer dispatches it to the corresponding handler
- The event handler performs the read and processes the data
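The steps above map directly onto the JDK's java.nio API. A minimal single-threaded Reactor echo sketch (the class name and one-shot `serveOnce` structure are illustrative, not a production server):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class MiniReactor {
    /** Accepts one connection, echoes its first read back, then returns (demo only). */
    public static void serveOnce(ServerSocketChannel server) throws IOException {
        Selector selector = Selector.open();                      // event demultiplexer (select/epoll)
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);        // register the accept event
        boolean done = false;
        while (!done) {
            selector.select();                                    // wait for events
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {                         // dispatch: new connection
                    SocketChannel ch = server.accept();
                    ch.configureBlocking(false);
                    ch.register(selector, SelectionKey.OP_READ);  // register the read event
                } else if (key.isReadable()) {                    // dispatch: data ready
                    SocketChannel ch = (SocketChannel) key.channel();
                    ByteBuffer buf = ByteBuffer.allocate(1024);
                    int n = ch.read(buf);                         // the user thread copies the data
                    if (n > 0) {
                        buf.flip();
                        while (buf.hasRemaining()) ch.write(buf); // echo it back
                    }
                    ch.close();
                    done = true;                                  // stop after one exchange (demo only)
                }
            }
        }
        selector.close();
    }
}
```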
AIO implements the Proactor pattern: the operating-system kernel performs the IO read/write, then a callback carries out the business logic:
- The event handler initiates a read request
- The event demultiplexer waits for the read to complete
- While the demultiplexer waits, the OS performs the actual read on a parallel kernel thread, stores the result in a user-supplied buffer, and finally notifies the demultiplexer that the read has completed
- The demultiplexer notifies the event handler that the read is done
- The event handler processes the data already in the buffer
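The JDK's AIO classes (`AsynchronousServerSocketChannel`, `CompletionHandler`) follow this Proactor flow: the kernel reads into the user-supplied buffer and then invokes the callback. A minimal one-shot echo sketch (class and method names are illustrative):

```java
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousServerSocketChannel;
import java.nio.channels.AsynchronousSocketChannel;
import java.nio.channels.CompletionHandler;
import java.util.concurrent.CompletableFuture;

public class MiniProactor {
    /** Accepts one connection; the OS reads into buf, then the callback echoes it back. */
    public static CompletableFuture<String> serveOnce(AsynchronousServerSocketChannel server) {
        CompletableFuture<String> received = new CompletableFuture<>();
        // Step 1: the handler initiates the accept-then-read request.
        server.accept(null, new CompletionHandler<AsynchronousSocketChannel, Void>() {
            @Override public void completed(AsynchronousSocketChannel ch, Void a) {
                ByteBuffer buf = ByteBuffer.allocate(1024);       // user-supplied buffer
                // Steps 2-4: the OS performs the read into buf, then notifies us here.
                ch.read(buf, null, new CompletionHandler<Integer, Void>() {
                    @Override public void completed(Integer n, Void a2) {
                        buf.flip();
                        byte[] data = new byte[buf.remaining()];
                        buf.duplicate().get(data);                // copy without moving buf's position
                        // Step 5: the handler only processes already-read data.
                        ch.write(buf);                            // echo back
                        received.complete(new String(data));
                    }
                    @Override public void failed(Throwable t, Void a2) { received.completeExceptionally(t); }
                });
            }
            @Override public void failed(Throwable t, Void a) { received.completeExceptionally(t); }
        });
        return received;
    }
}
```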
The key difference between the two: whether a user thread or an operating-system kernel thread performs the IO data read/write.
Introducing Netty
The Reactor pattern as used in Netty is a multi-Reactor variant (1 select thread + N IO threads + M worker threads): one main Reactor monitors all connection requests, while multiple sub-Reactors monitor and handle the read/write requests. This offloads the main Reactor and avoids the latency that a single overloaded Reactor would cause.
Each sub-Reactor runs on its own thread, and all operations on a connected Channel are handled by that same thread. All state and context for a given request therefore stay on one thread, which avoids unnecessary context switches and makes request/response state easier to track.
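The sub-Reactor half of that split can be sketched with plain java.nio: each sub-Reactor is an IO thread with its own Selector, and the main Reactor hands accepted connections over to it. This is a simplified sketch (Netty's real implementation adds pipelines, buffers, and task queues); the class name and the pending-queue handoff are illustrative:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;
import java.util.Iterator;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

/** One sub-Reactor: an IO thread with its own Selector handling read/write. */
public class SubReactor implements Runnable {
    private final Selector selector;
    private final Queue<SocketChannel> pending = new ConcurrentLinkedQueue<>();
    private volatile boolean running = true;

    public SubReactor() throws IOException { this.selector = Selector.open(); }

    /** Called by the main Reactor's thread after accepting a connection. */
    public void register(SocketChannel ch) {
        pending.add(ch);
        selector.wakeup();                            // break out of select() so we register on our own thread
    }

    public void stop() { running = false; selector.wakeup(); }

    @Override public void run() {
        try {
            while (running) {
                selector.select();
                SocketChannel ch;
                while ((ch = pending.poll()) != null) {           // register handed-off channels
                    ch.configureBlocking(false);
                    ch.register(selector, SelectionKey.OP_READ);
                }
                Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                while (it.hasNext()) {
                    SelectionKey key = it.next();
                    it.remove();
                    SocketChannel c = (SocketChannel) key.channel();
                    ByteBuffer buf = ByteBuffer.allocate(1024);
                    int n = c.read(buf);
                    if (n > 0) {                                  // echo the data back
                        buf.flip();
                        while (buf.hasRemaining()) c.write(buf);
                    } else if (n < 0) {                           // peer closed
                        key.cancel();
                        c.close();
                    }
                }
            }
            selector.close();
        } catch (IOException e) { throw new RuntimeException(e); }
    }
}
```

The main Reactor then does nothing but accept and call `register(...)`, which is exactly the boss/worker division Netty's `bossGroup`/`workerGroup` formalizes.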

<dependency>
    <groupId>io.netty</groupId>
    <artifactId>netty-all</artifactId>
    <version>4.1.22.Final</version>
</dependency>
Both a Netty server and a Netty client need the following two parts:
- At least one ChannelHandler: implements the processing of received data, i.e. the message's business logic
- A bootstrap: the server/client startup configuration, such as the listening port, the number of IO threads, and the Channel processing logic
Writing the Echo server
Event handler: ChannelHandler
@Slf4j
@ChannelHandler.Sharable
public class EchoServerHandler extends ChannelInboundHandlerAdapter {

    private final ChannelGroup channels = new DefaultChannelGroup("Echo-Server", GlobalEventExecutor.INSTANCE);

    /**
     * A client has connected to the server.
     */
    @Override
    public void handlerAdded(ChannelHandlerContext ctx) throws Exception {
        log.info("channel id: {}", ctx.channel().id().asLongText());
        // Broadcast to all existing channels. The pipeline has no String encoder,
        // so the message must be wrapped in a ByteBuf.
        channels.writeAndFlush(Unpooled.copiedBuffer("client: " + ctx.channel().remoteAddress() + " added", StandardCharsets.UTF_8));
        channels.add(ctx.channel());
    }

    /**
     * A client has disconnected.
     */
    @Override
    public void handlerRemoved(ChannelHandlerContext ctx) throws Exception {
        channels.writeAndFlush(Unpooled.copiedBuffer("client: " + ctx.channel().remoteAddress() + " removed", StandardCharsets.UTF_8));
        channels.remove(ctx.channel());
    }

    /**
     * The Channel is active, i.e. connected to its remote peer. Online!
     */
    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        log.info("{} active", ctx.channel().remoteAddress());
    }

    /**
     * The Channel is no longer connected to its remote peer. Offline!
     */
    @Override
    public void channelInactive(ChannelHandlerContext ctx) throws Exception {
        log.warn("{} inactive", ctx.channel().remoteAddress());
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        ByteBuf in = (ByteBuf) msg;
        log.info("Server received: {}", in.toString(StandardCharsets.UTF_8));
        // write() only puts the data into the outbound buffer; flush() actually sends it
        ctx.writeAndFlush(in);
    }

    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
        log.info("Server read complete.");
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        log.error("Server error.", cause);
        ctx.close();
    }
}
The server handler extends ChannelInboundHandlerAdapter; usually only channelRead() and exceptionCaught() need to be overridden.
Bootstrap
public class EchoServer {

    private final int port;

    public EchoServer(int port) {
        this.port = port;
    }

    public static void main(String[] args) {
        new EchoServer(9002).start();
    }

    public void start() {
        EventLoopGroup bossGroup = new NioEventLoopGroup(1);
        EventLoopGroup workerGroup = new NioEventLoopGroup();
        ServerBootstrap bootstrap = new ServerBootstrap();
        bootstrap.group(bossGroup, workerGroup)
                .channel(NioServerSocketChannel.class)
                .childHandler(new ChannelInitializer<SocketChannel>() {
                    @Override
                    protected void initChannel(SocketChannel ch) throws Exception {
                        ch.pipeline().addLast(new EchoServerHandler());
                    }
                })
                .option(ChannelOption.SO_BACKLOG, 128)
                .childOption(ChannelOption.SO_KEEPALIVE, true);
        try {
            // Block until the port is successfully bound
            ChannelFuture future = bootstrap.bind(port).sync();
            // Block until the server's Channel closes
            future.channel().closeFuture().sync();
        } catch (Exception ignore) {
        } finally {
            workerGroup.shutdownGracefully();
            bossGroup.shutdownGracefully();
        }
    }
}
The bossGroup thread listens on the server Channel; one thread is enough, since there is only one ServerSocketChannel to accept on and extra threads would be wasted. The workerGroup handles the IO reads and writes.
Writing the Echo client
Event handler: ChannelHandler
@Slf4j
@ChannelHandler.Sharable
public class EchoClientHandler extends SimpleChannelInboundHandler<ByteBuf> {

    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        log.info("Channel active.");
        ctx.writeAndFlush(Unpooled.copiedBuffer("Netty rocks!", StandardCharsets.UTF_8));
    }

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, ByteBuf msg) throws Exception {
        log.info("Client received: {}", msg.toString(StandardCharsets.UTF_8));
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        log.error("Client error.", cause);
        ctx.close();
    }
}
The client extends SimpleChannelInboundHandler, which itself extends ChannelInboundHandlerAdapter and implements channelRead() (releasing the message automatically after use); business handlers override channelRead0() instead.
Bootstrap
public class EchoClient {

    private final String host;
    private final int port;

    public EchoClient(String host, int port) {
        this.host = host;
        this.port = port;
    }

    public static void main(String[] args) {
        new EchoClient("localhost", 9002).start();
    }

    public void start() {
        EventLoopGroup workerGroup = new NioEventLoopGroup();
        Bootstrap bootstrap = new Bootstrap();
        bootstrap.group(workerGroup)
                .channel(NioSocketChannel.class)
                .option(ChannelOption.SO_KEEPALIVE, true)
                .handler(new ChannelInitializer<SocketChannel>() {
                    @Override
                    public void initChannel(SocketChannel ch) throws Exception {
                        ch.pipeline().addLast(new EchoClientHandler());
                    }
                });
        try {
            ChannelFuture future = bootstrap.connect(host, port).sync();
            future.channel().closeFuture().sync();
        } catch (Exception ignore) {
        } finally {
            workerGroup.shutdownGracefully();
        }
    }
}