Grizzly-Core Framework


1. Core Framework

1.1 Memory Management

Memory Management Overview

Grizzly 2.0 introduced a new subsystem to improve memory management at runtime. The subsystem consists of three main parts:

* Buffers

* ThreadLocal memory pools

* MemoryManager as a factory of sorts using the buffers and thread-local pools

Its main goal is to speed up memory allocation and, where possible, to enable memory reuse. The following sections describe these concepts in more detail.

MemoryManager

MemoryManager is the main interface for allocating and deallocating Buffer instances:

public interface MemoryManager<E extends Buffer>
        extends JmxMonitoringAware<MemoryProbe> {

    /**
     * Allocated {@link Buffer} of the required size.
     *
     * @param size {@link Buffer} size to be allocated.
     * @return allocated {@link Buffer}.
     */
    public E allocate(int size);

    /**
     * Allocated {@link Buffer} at least of the provided size.
     * This could be useful for use cases like Socket.read(...), where
     * we're not sure how many bytes are available, but want to read as
     * much as possible.
     *
     * @param size the min {@link Buffer} size to be allocated.
     * @return allocated {@link Buffer}.
     */
    public E allocateAtLeast(int size);

    /**
     * Reallocate {@link Buffer} to a required size.
     * Implementation may choose the way, how reallocation could be done, either
     * by allocating new {@link Buffer} of required size and copying old
     * {@link Buffer} content there, or perform more complex logic related to
     * memory pooling etc.
     *
     * @param oldBuffer old {@link Buffer} to be reallocated.
     * @param newSize new {@link Buffer} required size.
     * @return reallocated {@link Buffer}.
     */
    public E reallocate(E oldBuffer, int newSize);

    /**
     * Release {@link Buffer}.
     * Implementation may ignore releasing and let JVM Garbage collector to take
     * care about the {@link Buffer}, or return {@link Buffer} to pool, in case
     * of more complex MemoryManager implementation.
     *
     * @param buffer {@link Buffer} to be released.
     */
    public void release(E buffer);

    /**
     * Return true if next {@link #allocate(int)} or {@link #allocateAtLeast(int)} call,
     * made in the current thread for the given memory size, going to return a {@link Buffer} based
     * on direct {@link java.nio.ByteBuffer}, or false otherwise.
     *
     * @param size
     * @return
     */
    public boolean willAllocateDirect(int size);
}

A single MemoryManager is defined within the Grizzly runtime to serve all transports. It can be obtained by referencing the static member variable of the MemoryManager interface:

MemoryManager.DEFAULT_MEMORY_MANAGER

It is also possible to install a custom MemoryManager implementation as the default by setting the system property org.glassfish.grizzly.DEFAULT_MEMORY_MANAGER to the fully qualified name of the custom implementation. Note: the implementation must have a public no-argument constructor in order for the runtime to properly set the new default. Grizzly 2.3 ships with two MemoryManager implementations: HeapMemoryManager and ByteBufferManager. By default, the Grizzly runtime uses HeapMemoryManager; if a Grizzly application needs direct ByteBuffer access, ByteBufferManager can be used instead.
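As a minimal, hedged sketch (it assumes the org.glassfish.grizzly.memory.MemoryManager and org.glassfish.grizzly.Buffer classes described above; the custom manager class name in the comment is hypothetical), obtaining the default manager and allocating/releasing buffers might look like this:

import org.glassfish.grizzly.Buffer;
import org.glassfish.grizzly.memory.MemoryManager;

public class DefaultMemoryManagerExample {
    public static void main(String[] args) {
        // A custom default can be installed by setting, before startup:
        //   -Dorg.glassfish.grizzly.DEFAULT_MEMORY_MANAGER=com.example.MyMemoryManager
        // (com.example.MyMemoryManager is hypothetical and must have a public
        //  no-argument constructor.)

        // The runtime-wide default manager (HeapMemoryManager unless overridden).
        MemoryManager mm = MemoryManager.DEFAULT_MEMORY_MANAGER;

        Buffer exact = mm.allocate(1024);         // exactly 1024 bytes
        Buffer atLeast = mm.allocateAtLeast(512); // at least 512 bytes, possibly more

        System.out.println("direct allocation? " + mm.willAllocateDirect(1024));

        // ... read/write the buffers ...

        // Releasing lets pooling managers reuse the memory.
        mm.release(exact);
        mm.release(atLeast);
    }
}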

ByteBufferManager

The ByteBufferManager implementation vends Grizzly Buffer instances that wrap JDK ByteBuffer instances. This is the MemoryManager to use if the Grizzly application requires direct ByteBuffer usage.

It should be noted that, during our benchmarking, this MemoryManager showed a little more overhead when working with typical heap buffers. As such, if direct memory isn't needed, we recommend using the default HeapMemoryManager.

HeapMemoryManager

The HeapMemoryManager is the default MemoryManager. Instead of wrapping ByteBuffer instances, this MemoryManager allocates Buffer instances that wrap byte arrays directly. This MemoryManager offers better performance characteristics for operations such as trimming or splitting.

ThreadLocal Memory Pools

ThreadLocal memory pools make it possible to allocate memory without any synchronization cost. Both ByteBufferManager and HeapMemoryManager use such pools. Note: a custom MemoryManager is not required to use such a pool, but if the MemoryManager implements the ThreadLocalPoolProvider interface, it must provide a ThreadLocalPool implementation. The ThreadLocalPool implementation will be created for, and passed to, each thread managed by Grizzly.

Memory Manager and ThreadLocal Memory Pools Working Together

The following diagram shows how a MemoryManager with a ThreadLocalPool handles memory allocation:

Buffers

Grizzly 2.3 provides several Buffer types for developers to use when creating applications. These Buffer implementations offer many features that the JDK's ByteBuffer does not.

Buffer

The Buffer is essentially the analogue to the JDK's ByteBuffer. It offers the same set of methods for:

* Pushing/pulling data to/from the Buffer.

* Accessing or manipulating the Buffer's position, limit, and capacity.

In addition to offering familiar ByteBuffer semantics, the following features are available:

* Splitting, trimming, and shrinking.

* Prepending another Buffer's content to the current Buffer.

* Converting the Buffer to a ByteBuffer or ByteBuffer[].

* Converting Buffer content to a String.

Please see the javadocs for further details on Buffer.
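A small, hedged sketch of the Buffer API described above (it assumes the default memory manager and that put/flip/toStringContent behave like their ByteBuffer analogues):

import java.nio.charset.Charset;
import org.glassfish.grizzly.Buffer;
import org.glassfish.grizzly.memory.MemoryManager;

public class BufferExample {
    public static void main(String[] args) {
        MemoryManager mm = MemoryManager.DEFAULT_MEMORY_MANAGER;
        Charset utf8 = Charset.forName("UTF-8");

        Buffer buffer = mm.allocate(64);
        buffer.put("Hello, Grizzly!".getBytes(utf8));
        buffer.flip(); // switch from writing to reading, just like ByteBuffer

        // A feature beyond ByteBuffer: convert the readable content to a String.
        String content = buffer.toStringContent(utf8);
        System.out.println(content);

        mm.release(buffer);
    }
}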

CompositeBuffer

The CompositeBuffer is another Buffer implementation which allows appending of Buffer instances. The CompositeBuffer maintains a virtual position, limit, and capacity based on the Buffers that have been appended, and can be treated as a simple Buffer instance. Please see the javadocs for further details on CompositeBuffer.
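A hedged sketch of composing buffers (it assumes the org.glassfish.grizzly.memory.Buffers utility class, whose appendBuffers method may return a composite Buffer when the two buffers cannot be merged in place):

import java.nio.charset.Charset;
import org.glassfish.grizzly.Buffer;
import org.glassfish.grizzly.memory.Buffers;
import org.glassfish.grizzly.memory.MemoryManager;

public class CompositeBufferExample {
    public static void main(String[] args) {
        MemoryManager mm = MemoryManager.DEFAULT_MEMORY_MANAGER;
        Charset utf8 = Charset.forName("UTF-8");

        Buffer first = Buffers.wrap(mm, "Hello, ".getBytes(utf8));
        Buffer second = Buffers.wrap(mm, "Grizzly!".getBytes(utf8));

        // The two buffers are exposed as one virtual Buffer with a combined
        // position, limit, and capacity.
        Buffer combined = Buffers.appendBuffers(mm, first, second);

        System.out.println(combined.toStringContent(utf8)); // Hello, Grizzly!
    }
}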

1.2 I/O Strategies

IOStrategies

When working with NIO, a natural question arises: how should a particular NIO event occurring on an NIO channel be processed? There are usually two options: process the NIO event in the current (Selector) thread, or pass it to a worker thread for processing.

1. Worker-thread IOStrategy.

The most useful IOStrategy: the Selector thread delegates NIO event processing to a worker thread.

This IOStrategy scales very well and is safe. The Selector and worker thread pools can be resized as needed, and a problem that occurs while processing a specific NIO event will not affect the other Channels registered on the same Selector.

2. Same-thread IOStrategy.

Potentially the most efficient IOStrategy. Unlike the worker-thread IOStrategy, it processes NIO events in the current thread, which avoids the relatively expensive[1] thread context switches. [1] Some OSes do a great job of optimizing thread context switches, but we still regard them as relatively expensive operations.

This strategy is still scalable, because the Selector thread pool can be resized, but it has drawbacks. Care must be taken that the channel NIO event processing never blocks and never executes any long-lasting operations, since that would block the processing of other NIO events occurring on the same Selector.

3. Dynamic IOStrategy.

The two strategies above each have advantages and disadvantages. But what if a strategy could switch between them intelligently at runtime, based on the current conditions (load, gathered statistics, etc.)?

Such a strategy could bring a lot of benefit, for example allowing finer-grained control of resources. However, it is important not to overload the condition-evaluation logic, since its complexity could make this strategy less efficient than the two simpler ones.

4. Leader-follower IOStrategy.

The last IOStrategy included with Grizzly 2.3 is the leader-follower strategy:

This IOStrategy is similar to the worker-thread IOStrategy, but instead of passing the NIO event to a worker thread for processing, it passes control over the Selector to a worker thread, which becomes the new Selector thread, while the original Selector thread processes the NIO event.

Grizzly 2.3 provides the general interface org.glassfish.grizzly.IOStrategy:

public interface IOStrategy extends WorkerThreadPoolConfigProducer {

    /**
     * The {@link org.glassfish.grizzly.nio.SelectorRunner} will invoke this
     * method to allow the strategy implementation to decide how the
     * {@link IOEvent} will be handled.
     *
     * @param connection the {@link Connection} upon which the provided
     *  {@link IOEvent} occurred.
     * @param ioEvent the {@link IOEvent} that triggered execution of this
     *  strategy
     *
     * @return true, if this thread should keep processing IOEvents on
     *  the current and other Connections, or false if this thread
     *  should hand-off the farther IOEvent processing on any Connections,
     *  which means IOStrategy is becoming responsible for continuing IOEvent
     *  processing (possibly starting new thread, which will handle IOEvents).
     *
     * @throws IOException if an error occurs processing the {@link IOEvent}.
     */
    boolean executeIoEvent(final Connection connection, final IOEvent ioEvent)
            throws IOException;

    /**
     * The {@link org.glassfish.grizzly.nio.SelectorRunner} will invoke this
     * method to allow the strategy implementation to decide how the
     * {@link IOEvent} will be handled.
     *
     * @param connection the {@link Connection} upon which the provided
     *  {@link IOEvent} occurred.
     * @param ioEvent the {@link IOEvent} that triggered execution of this
     *  strategy
     * @param isIoEventEnabled true if IOEvent is still enabled on the
     *  {@link Connection}, or false if IOEvent was preliminary disabled
     *  or IOEvent is being simulated.
     *
     * @return true, if this thread should keep processing IOEvents on
     *  the current and other Connections, or false if this thread
     *  should hand-off the farther IOEvent processing on any Connections,
     *  which means IOStrategy is becoming responsible for continuing IOEvent
     *  processing (possibly starting new thread, which will handle IOEvents).
     *
     * @throws IOException if an error occurs processing the {@link IOEvent}.
     */
    boolean executeIoEvent(final Connection connection, final IOEvent ioEvent,
            final boolean isIoEventEnabled) throws IOException;
}

And the IOStrategy implementation may decide what to do with the specific NIO event processing.

Grizzly 2.3 comes with four predefined IOStrategy implementations, matching the strategies described above:

1. org.glassfish.grizzly.strategies.WorkerThreadIOStrategy

2. org.glassfish.grizzly.strategies.SameThreadIOStrategy

3. org.glassfish.grizzly.strategies.SimpleDynamicNIOStrategy

4. org.glassfish.grizzly.strategies.LeaderFollowerNIOStrategy

An IOStrategy is assigned per Transport, so it can be obtained or changed using the Transport's getIOStrategy/setIOStrategy methods. The TCP and UDP transports use the worker-thread IOStrategy by default.
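As a hedged sketch of switching strategies (it assumes the TCPNIOTransportBuilder factory and the getInstance() accessor of the bundled strategy classes), the configuration might look like this:

import org.glassfish.grizzly.nio.transport.TCPNIOTransport;
import org.glassfish.grizzly.nio.transport.TCPNIOTransportBuilder;
import org.glassfish.grizzly.strategies.SameThreadIOStrategy;

public class IOStrategyExample {
    public static void main(String[] args) throws Exception {
        TCPNIOTransport transport = TCPNIOTransportBuilder.newInstance().build();

        // The TCP transport defaults to the worker-thread strategy;
        // switch it to the same-thread strategy.
        transport.setIOStrategy(SameThreadIOStrategy.getInstance());

        transport.start();
        try {
            // ... bind server connections or connect clients here ...
        } finally {
            transport.shutdownNow();
        }
    }
}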

1.3 Transports and Connections

Transport and Connection represent the core network API in Grizzly 2.3.

A Transport defines a network transport such as TCP or UDP, including the associated resources (thread pool, memory manager, etc.) and the default configuration for the Connections nested under it.

In Grizzly 2.3, a Connection represents a single network connection (very similar to a socket); depending on the transport it can be a TCP connection, a UDP connection, or any custom connection type. Transport and Connection have a one-to-many relationship:

* Transports

The full class diagram is shown above. Grizzly 2.3 has two Transport implementations, NIO TCP and NIO UDP, both based on Java NIO. NIO.2-based transports will be supported in the future, and SCTP will be added when running on JDK 7. The components in the diagram are briefly described below:

* MemoryManager implements the memory allocation/release logic (see the Memory Management section);

* ExecutorService represents the Transport-associated thread pool;

* Processor is the Transport's default Processor logic, which is responsible for processing NIO events. Processors are assigned per Connection, so this one will be taken as the default one, but each Connection may have its own customized Processor;

* Strategy implements IOStrategy (see the I/O Strategies section);

* NIOChannelDistributor is responsible for distributing newly created Connections among the Transport NIO Selector threads;

* SelectorRunner implements the single Selector thread logic, which selects ready NIO events and passes them to the Processor, which processes them according to the Strategy logic;

* SelectionKeyHandler is a wrapper over SelectionKey operations; it also:

- maps Java NIO operations (OP_READ, OP_WRITE, etc.) to Grizzly IOEvents

- maps Java NIO SelectionKeys to Grizzly Connections

* SelectorHandler implements thread-safe logic for registering/de-registering SelectableChannels on Selectors, changing SelectionKey interests, and executing custom tasks in Selector threads (to avoid unexpected locks).

* Connections

The diagram above shows how Connections relate to Transports, as well as the basic Connection abstractions. Grizzly 2.3 has TCP and UDP Connection implementations; in addition, there are special server-Connections for both TCP and UDP.

* TCPNIOServerConnection works like a TCP ServerSocket: it binds to a specific TCP host:port and listens for TCP connections coming from clients. When a new TCP connection is ready to be accepted, the TCPNIOServerConnection accepts it and configures the new Connection according to the Transport settings (see the sketch after this list).

* UDPNIOServerConnection represents a UDP connection that is not connected to any specific peer address and can therefore receive all UDP packets sent to its local address. In other words, the only difference between UDPNIOConnection and UDPNIOServerConnection is that the latter is not connected to any peer address.

* Processor: the logic responsible for processing the Connection's NIO events (ACCEPT, CONNECT, READ, WRITE, CLOSE).

* SelectionKey: the Connection's underlying java.nio.channels.SelectionKey, which represents the NIO SelectableChannel <-> Selector registration.
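As referenced above, here is a hedged sketch of creating a server-side connection; it assumes the TCPNIOTransportBuilder factory and that TCPNIOTransport.bind(host, port) returns the listening TCPNIOServerConnection:

import org.glassfish.grizzly.nio.transport.TCPNIOServerConnection;
import org.glassfish.grizzly.nio.transport.TCPNIOTransport;
import org.glassfish.grizzly.nio.transport.TCPNIOTransportBuilder;

public class ServerConnectionExample {
    public static void main(String[] args) throws Exception {
        TCPNIOTransport transport = TCPNIOTransportBuilder.newInstance().build();
        transport.start();

        // Binding creates a TCPNIOServerConnection, which accepts client
        // connections and configures them using the Transport defaults.
        TCPNIOServerConnection serverConnection = transport.bind("0.0.0.0", 8080);

        System.out.println("Listening on " + serverConnection.getLocalAddress());

        System.in.read();       // keep the server alive until Enter is pressed
        transport.shutdownNow();
    }
}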

* Establishing client connections

The ConnectorHandler API is responsible for establishing and initializing client-side connections.

As shown in the diagram above, TCPNIOTransport and UDPNIOTransport implement the SocketConnectorHandler interface, so a Transport instance can be used directly to create a new client-side connection:
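The original code sample is not reproduced in this copy; as a hedged sketch (assuming the SocketConnectorHandler.connect(host, port) operation that TCPNIOTransport implements), it might look like this:

import java.util.concurrent.Future;
import org.glassfish.grizzly.Connection;
import org.glassfish.grizzly.nio.transport.TCPNIOTransport;
import org.glassfish.grizzly.nio.transport.TCPNIOTransportBuilder;

public class ClientConnectExample {
    public static void main(String[] args) throws Exception {
        TCPNIOTransport transport = TCPNIOTransportBuilder.newInstance().build();
        transport.start();

        // TCPNIOTransport implements SocketConnectorHandler, so it can create
        // client connections directly; the result is an asynchronous Future.
        Future<Connection> future = transport.connect("localhost", 8080);
        Connection connection = future.get();   // block until connected

        // ... use the connection ...

        connection.close();
        transport.shutdownNow();
    }
}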

In this example, the newly created client-side connection inherits the Transport's Processor. It is also possible to create a customized ConnectorHandler (the UDP and TCP transports behave similarly):
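A hedged sketch of the customized variant (it assumes the TCPNIOConnectorHandler.builder(transport) factory and its processor(...) option; the FilterChain content is only a placeholder):

import java.util.concurrent.Future;
import org.glassfish.grizzly.Connection;
import org.glassfish.grizzly.SocketConnectorHandler;
import org.glassfish.grizzly.filterchain.FilterChain;
import org.glassfish.grizzly.filterchain.FilterChainBuilder;
import org.glassfish.grizzly.filterchain.TransportFilter;
import org.glassfish.grizzly.nio.transport.TCPNIOConnectorHandler;
import org.glassfish.grizzly.nio.transport.TCPNIOTransport;
import org.glassfish.grizzly.nio.transport.TCPNIOTransportBuilder;

public class CustomConnectorHandlerExample {
    public static void main(String[] args) throws Exception {
        TCPNIOTransport transport = TCPNIOTransportBuilder.newInstance().build();
        transport.start();

        // A FilterChain used only by connections created through this handler.
        FilterChain clientChain = FilterChainBuilder.stateless()
                .add(new TransportFilter())
                .build();

        SocketConnectorHandler connectorHandler =
                TCPNIOConnectorHandler.builder(transport)
                        .processor(clientChain)
                        .build();

        Future<Connection> future = connectorHandler.connect("localhost", 8080);
        Connection connection = future.get();

        // ... the connection runs clientChain instead of the Transport's
        //     default Processor ...

        connection.close();
        transport.shutdownNow();
    }
}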

The two sequence diagrams below illustrate the asynchronous connect operation:

In the first diagram, the asynchronous connect is initiated in the current (custom) thread by calling the ConnectorHandler.connect(...) operation, which returns a Future representing the result.

Initially, the connect operation only adds the connection object to a SelectorRunner queue, to be processed in the selector thread. In the selector thread, the connection is actually registered on the Selector associated with that SelectorRunner. Once the registration is complete, the SelectorRunner notifies the ConnectorHandler, which then proceeds as follows:

* For the UDP transport, the UDPNIOConnection is initialized, the Processor is notified about the completed CONNECT operation, and the Future becomes available in the custom thread.

* For the TCP transport, we have to wait until the underlying OS network layer notifies the framework that the Connection has actually been connected to the target address (UDP does not have this step); only then is the Processor notified about the completed CONNECT operation, and the Future becomes available in the custom thread. The overall logic looks like this:

For the TCP transport, on the server side, the TCPNIOServerConnection accepts new client connections. The processing logic is very similar to the diagram above, except that instead of a "CONNECT" event, the Processor is notified about an "ACCEPT" event occurring on the TCPNIOConnection.

1.4 FilterChain and Filters

FilterChains and Filters

The previous chapter discussed the Processor, whose role is to process I/O events occurring on Grizzly Connections. The most useful Processor in Grizzly is the FilterChain.

A FilterChain, as its name implies, is a chain of Filters. Each Filter represents a unit of processing work to be performed, whose purpose is to examine and/or modify the state of the transaction represented by a FilterChainContext.

To give an idea of what a FilterChain may look like, here is an example of a FilterChain that implements HTTP server logic:

* TransportFilter is responsible for reading data from the network connection into a Buffer, and for writing Buffer data back to the network connection.

* HttpFilter is responsible for the Buffer <-> HttpPacket transformation (in both directions).

* HttpServerFilter is responsible for processing request HttpPackets and generating response HttpPackets, which it sends back along the FilterChain in the opposite direction (HttpServerFilter -> HttpFilter -> TransportFilter).

So, what if we want to implement an HTTPS server? It's simple:

We add an SSLFilter, which is responsible for encoding/decoding the SSL-secured data.
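A hedged sketch of assembling such chains with FilterChainBuilder; the HTTP codec and server-logic filters are passed in as placeholders standing for the "HttpFilter" and "HttpServerFilter" from the diagram:

import org.glassfish.grizzly.filterchain.Filter;
import org.glassfish.grizzly.filterchain.FilterChain;
import org.glassfish.grizzly.filterchain.FilterChainBuilder;
import org.glassfish.grizzly.filterchain.TransportFilter;
import org.glassfish.grizzly.ssl.SSLFilter;

public class FilterChainSketch {
    // Plain HTTP chain from the diagram; httpCodecFilter and httpServerLogicFilter
    // are placeholders for the application's own filter implementations.
    static FilterChain httpChain(Filter httpCodecFilter, Filter httpServerLogicFilter) {
        return FilterChainBuilder.stateless()
                .add(new TransportFilter())      // network <-> Buffer
                .add(httpCodecFilter)            // Buffer <-> HttpPacket
                .add(httpServerLogicFilter)      // request -> response
                .build();
    }

    // The HTTPS variant differs only by the SSLFilter placed right after
    // the TransportFilter.
    static FilterChain httpsChain(SSLFilter sslFilter,
                                  Filter httpCodecFilter,
                                  Filter httpServerLogicFilter) {
        return FilterChainBuilder.stateless()
                .add(new TransportFilter())
                .add(sslFilter)                  // decrypts inbound, encrypts outbound
                .add(httpCodecFilter)
                .add(httpServerLogicFilter)
                .build();
    }
}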

As we can see, during the processing of any I/O event, the Filters in the FilterChain are executed in sequence. Important: most I/O events are processed starting at the first Filter and finishing at the last one (left to right in the diagram above), except for the WRITE event, whose processing starts at the last Filter and finishes at the first one (right to left). To describe this more precisely, let's define some terms:

* Upstream – the direction from a given Filter towards the last Filter in the chain (left to right in the diagram above).

* Downstream – the direction from a given Filter towards the first Filter in the chain (right to left in the diagram above).

Let's take a look at which I/O events a FilterChain can process. For that, we can look at the methods of the Filter interface:
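As a hedged, abridged excerpt (only the event-handling methods discussed here are shown; the real org.glassfish.grizzly.filterchain.Filter interface has a few more lifecycle methods):

public interface Filter {
    // One handle* method per I/O event; each receives the FilterChainContext
    // and returns a NextAction telling the FilterChain how to proceed.
    NextAction handleRead(FilterChainContext ctx) throws IOException;
    NextAction handleWrite(FilterChainContext ctx) throws IOException;
    NextAction handleConnect(FilterChainContext ctx) throws IOException;
    NextAction handleAccept(FilterChainContext ctx) throws IOException;
    NextAction handleClose(FilterChainContext ctx) throws IOException;

    // Custom (non-I/O) event notifications are delivered here.
    NextAction handleEvent(FilterChainContext ctx, FilterChainEvent event) throws IOException;
}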

So the I/O events are:

* READ: data has arrived on the Connection and is ready to be read and processed.

* WRITE: data is about to be written to the Connection, and the Filter may be responsible for transforming the data representation, for example HttpPacket -> Buffer in the diagram above.

* CONNECT: a new client Connection has been established.

* ACCEPT (TCP only): a new client Connection has been accepted by a server Connection (TCPNIOServerConnection).

* CLOSE: the Connection has been closed (either locally or by the peer).

Important: the processing of a given I/O event on a given Connection is serialized. For example, if we are processing a READ I/O event on Connection "A", Grizzly will never start processing another READ I/O event on Connection "A" until the processing of the previous READ event has completed. If the user decides to take over the management of I/O event processing, they still need to follow this rule of serial event processing.

Additionally, the Filters in a FilterChain can initiate and process custom event notifications. The event initiator may choose to emit the event upstream or downstream along the FilterChain, like:
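For instance, a hedged sketch of emitting an event from one of a Filter's handle* methods (the ReconfigureEvent type is a hypothetical example, not part of Grizzly):

import java.io.IOException;
import org.glassfish.grizzly.filterchain.BaseFilter;
import org.glassfish.grizzly.filterchain.FilterChainContext;
import org.glassfish.grizzly.filterchain.FilterChainEvent;
import org.glassfish.grizzly.filterchain.NextAction;

public class EventEmittingFilter extends BaseFilter {

    // A minimal custom event; FilterChainEvent only requires a type() key.
    static final class ReconfigureEvent implements FilterChainEvent {
        static final Object TYPE = ReconfigureEvent.class;
        @Override
        public Object type() {
            return TYPE;
        }
    }

    @Override
    public NextAction handleRead(FilterChainContext ctx) throws IOException {
        FilterChainEvent event = new ReconfigureEvent();
        ctx.notifyUpstream(event);      // deliver to the Filters after this one
        // ctx.notifyDownstream(event); // or to the Filters before this one
        return ctx.getInvokeAction();
    }
}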

The Filters in the FilterChain can intercept and process custom events by implementing the handleEvent method:
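A hedged sketch of a Filter intercepting the hypothetical event from the previous example (it checks the event type and lets every other event continue along the chain):

import java.io.IOException;
import org.glassfish.grizzly.filterchain.BaseFilter;
import org.glassfish.grizzly.filterchain.FilterChainContext;
import org.glassfish.grizzly.filterchain.FilterChainEvent;
import org.glassfish.grizzly.filterchain.NextAction;

public class EventHandlingFilter extends BaseFilter {
    @Override
    public NextAction handleEvent(FilterChainContext ctx, FilterChainEvent event)
            throws IOException {
        if (event.type() == EventEmittingFilter.ReconfigureEvent.TYPE) {
            // react to the custom notification here
            return ctx.getStopAction();   // swallow the event
        }
        return ctx.getInvokeAction();      // pass other events along
    }
}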

As we can see, each Filter "handle" method takes a FilterChainContext parameter and returns a NextAction result.

FilterChainContext

A FilterChainContext represents a context (state) associated with the processing of a particular I/O event on a particular Connection, so its lifecycle is bound to the processing of a single I/O event. The FilterChainContext contains the following state information:

Connection

The Connection on which the I/O event occurred.

Address

The peer address. In most cases it returns the same value as Connection.getPeerAddress(), except when processing a READ event on an unbound UDP Connection; in that case, FilterChainContext.getAddress returns the address of the peer that sent the data.

Message

The message being processed. This is the only value a Filter may change during I/O event processing. It is usually used during incoming/outgoing message parsing/serialization. Each Filter can transform the initial message data into a different representation, set it back on the context, and pass it to the next Filter for processing.

For example, when processing a READ event, the HttpFilter gets the message data from the FilterChainContext as a Grizzly Buffer, transforms it into an HttpPacket, sets the HttpPacket back as the FilterChainContext message, and passes control to the HttpServerFilter, which in turn gets the HttpPacket from the FilterChainContext and processes it.

Besides holding state, the FilterChainContext supports common I/O operations:

Read

ReadResult readResult = ctx.read();

This operation performs a blocking FilterChain read, starting at the first Filter in the chain (inclusive) and proceeding upstream to this Filter (exclusive).

The read operation returns its result once the READ I/O event processing reaches this Filter, i.e. once the FilterChain invokes this Filter's handleRead(...) method.

Write

ctx.write(message);

or

ctx.write(message, completionHandler);

or

ctx.write(address, message, completionHandler); // Unbound UDP only

This operation performs a non-blocking FilterChain write, starting at this Filter (exclusive) and proceeding downstream to the first Filter (inclusive).

In other words, it initiates the processing of a WRITE I/O event on the FilterChain, starting from this Filter (exclusive).

Flush

ctx.flush();

or

ctx.flush(completionHandler);

This operation initializes and notifies downstream Filters about a special TransportFilter.FlushEvent, so that each Filter is able to handle this event and make sure all of its cached data has been written to the Connection.

Event notification

ctx.notifyUpstream(event);

or

ctx.notifyDownstream(event);

This operation notifies all the upstream/downstream Filters in the FilterChain about specific FilterChainEvent.

NextAction

As described above, during the processing of an I/O event the FilterChain executes its Filters sequentially from the first to the last, except for the WRITE event, which is processed from the last Filter to the first. A Filter can change this default I/O event processing order by returning different NextAction types.

StopAction

return ctx.getStopAction();

Instructs the FilterChain to stop processing the current I/O event. A StopAction is usually returned when there is not enough data to continue FilterChain processing, or by the last Filter in the chain. A StopAction may also take a parameter:

return ctx.getStopAction(incompleteChunk);

or

return ctx.getStopAction(incompleteChunk, appender);

An incompleteChunk in the StopAction means there was not enough data to continue FilterChain processing. As more data becomes available, but before the FilterChain calls the Filter, it will check whether the Filter has any data stored from the last invocation. If an incompleteChunk is present, the new data is appended to the stored chunk and the result is passed to the Filter as the FilterChainContext message.

Note: the incompleteChunk should be "appendable", so the FilterChain knows how a new data chunk should be appended to the stored one. The incompleteChunk should therefore either implement org.glassfish.grizzly.Appendable, or an org.glassfish.grizzly.Appender should be passed as an additional parameter.
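A hedged sketch of this pattern for a length-prefixed message protocol (the 4-byte length header and the MessageParserFilter name are illustrative assumptions, not part of Grizzly):

import java.io.IOException;
import org.glassfish.grizzly.Buffer;
import org.glassfish.grizzly.filterchain.BaseFilter;
import org.glassfish.grizzly.filterchain.FilterChainContext;
import org.glassfish.grizzly.filterchain.NextAction;

public class MessageParserFilter extends BaseFilter {
    private static final int HEADER_SIZE = 4; // 4-byte length prefix (assumed protocol)

    @Override
    public NextAction handleRead(FilterChainContext ctx) throws IOException {
        final Buffer input = ctx.getMessage();

        // Not even the length header has arrived yet: remember the partial
        // chunk and stop this FilterChain pass.
        if (input.remaining() < HEADER_SIZE) {
            return ctx.getStopAction(input);
        }

        final int messageSize = HEADER_SIZE + input.getInt(input.position());

        // Header arrived, body still incomplete: same treatment.
        if (input.remaining() < messageSize) {
            return ctx.getStopAction(input);
        }

        // A whole message is available. If extra bytes follow it, split them off
        // and return them as an unparsedChunk (see InvokeAction below).
        final Buffer remainder = input.remaining() > messageSize
                ? input.split(input.position() + messageSize)
                : null;

        ctx.setMessage(input); // the next Filter receives exactly one message
        return remainder == null
                ? ctx.getInvokeAction()
                : ctx.getInvokeAction(remainder);
    }
}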

InvokeAction

return ctx.getInvokeAction();

Instructs the FilterChain to execute the next Filter, following the natural execution order.

An InvokeAction can also be created with an incompleteChunk parameter:

return ctx.getInvokeAction(incompleteChunk, appender);

This instructs the FilterChain to store the incompleteChunk and continue FilterChain execution as with the non-parameterized version.

This feature is mainly useful in scenarios where one or more messages have been parsed out of the source Buffer, but a remainder is left that does not contain enough data to be converted into an application message. The developer can continue FilterChain processing with the parsed messages while storing the incompleteChunk remainder. As more data becomes available, but before the FilterChain calls the Filter again, it will check whether the Filter has any data stored from the last invocation. If an incompleteChunk is present, the new data is appended to the stored one and the result is passed to the Filter as the FilterChainContext message.

Note: the incompleteChunk should be "appendable", so the FilterChain knows how a new data chunk should be appended to the stored one. The incompleteChunk should therefore either implement org.glassfish.grizzly.Appendable, or an org.glassfish.grizzly.Appender should be passed as an additional parameter.

Another option is to create an InvokeAction with an unparsedChunk parameter:

return ctx.getInvokeAction(unparsedChunk);

This instructs the FilterChain to store the unparsedChunk and continue FilterChain execution as with the non-parameterized version. Unlike the incompleteChunk case, here we do not know whether the unparsedChunk contains enough data to be converted into an application message. Once the FilterChain execution completes, the unparsedChunk stored by the most recent Filter in the chain is restored, and FilterChain processing is re-initiated immediately, starting from the Filter that stored the unparsedChunk.

This feature is mainly useful in scenarios where a message has been parsed out of the source Buffer, and the Buffer is left with a remainder that may or may not contain more messages. The developer can extract the first message, store the remainder, and process it once the current message has been handled.

RerunFilterAction

return ctx.getRerunFilterAction();

Instructs the FilterChain to run this Filter one more time. This is useful for simplifying I/O event processing code and avoiding recursion.

SuspendAction

return ctx.getSuspendAction();

Instructs the FilterChain to suspend (leave) processing of the current I/O event in the current thread. I/O event processing can be resumed by calling one of the following methods:

* ctx.resume(): resumes processing at the suspended Filter.

* ctx.resume(NextAction): resumes processing at the suspended Filter, but instead of handing control back to that Filter, it simulates the completion of that Filter's processing as if it had returned the given NextAction as its result.

* ctx.resumeNext(): resumes processing at the Filter following the suspended one. Same as ctx.resume(ctx.getInvokeAction()).

Note: after a SuspendAction has been returned and until the I/O event processing is resumed, Grizzly will not start processing the same I/O event on the same Connection. For example, if a SuspendAction is returned during READ event processing, Grizzly will not notify the FilterChain about any new data arriving on that Connection until the suspended READ event processing has been resumed and completed.
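A hedged sketch of suspending READ processing and resuming it from another thread (the executor and the long-running task are illustrative assumptions):

import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.glassfish.grizzly.filterchain.BaseFilter;
import org.glassfish.grizzly.filterchain.FilterChainContext;
import org.glassfish.grizzly.filterchain.NextAction;

public class SuspendingFilter extends BaseFilter {
    private final ExecutorService executor = Executors.newCachedThreadPool();

    @Override
    public NextAction handleRead(final FilterChainContext ctx) throws IOException {
        // Mark the context as suspended before leaving the Grizzly thread.
        ctx.suspend();

        executor.execute(new Runnable() {
            @Override
            public void run() {
                // ... long-running work that must not block the selector ...

                // Continue FilterChain processing with the next Filter,
                // as if this Filter had returned an InvokeAction.
                ctx.resumeNext();
            }
        });

        // Tell the FilterChain to leave I/O event processing in this thread.
        return ctx.getSuspendAction();
    }
}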

ForkAction (was SuspendStopAction)

return ctx.getForkAction();

Very similar to the SuspendAction, with one difference: after getting a ForkAction, Grizzly keeps listening for the same I/O events on the Connection and notifies the FilterChain when they occur. Note: in this case, care must be taken to ensure that two or more threads do not process the same I/O operation simultaneously.

1.5 Core Configuration

1.6 Port Unification

1.7 Monitoring

1.8 Best Practices

1.9 Quick Start

1.10 Samples

2. HTTP Components

2.1 Core HTTP Framework

2.2 HTTP Server Framework

2.3 HTTP Server Framework Extras

2.4 Comet

2.5 JAXWS

2.6 WebSockets

2.7 AJP

2.8 SPDY
