Grizzly 2.0 introduces a new subsystem to improve memory management within the runtime. Its primary purpose is to speed up memory allocation and, when possible, provide for memory re-use. The subsystem is composed of three main artifacts:
Buffers
Thread-local memory pools
A MemoryManager that acts as a factory, using the buffers and thread-local pools
The following sections will describe these concepts in detail.
The MemoryManager is the main interface for allocating/deallocating Buffer instances:
public interface MemoryManager<E extends Buffer>
        extends JmxMonitoringAware<MemoryProbe> {

    /**
     * Allocates a {@link Buffer} of the required size.
     *
     * @param size {@link Buffer} size to be allocated.
     * @return allocated {@link Buffer}.
     */
    public E allocate(int size);

    /**
     * Allocates a {@link Buffer} of at least the provided size.
     * This could be useful for use cases like Socket.read(...), where
     * we're not sure how many bytes are available, but want to read as
     * much as possible.
     *
     * @param size the minimum {@link Buffer} size to be allocated.
     * @return allocated {@link Buffer}.
     */
    public E allocateAtLeast(int size);

    /**
     * Reallocates a {@link Buffer} to the required size.
     * The implementation may choose how the reallocation is done: either
     * by allocating a new {@link Buffer} of the required size and copying the
     * old {@link Buffer} content there, or by performing more complex logic
     * related to memory pooling, etc.
     *
     * @param oldBuffer old {@link Buffer} to be reallocated.
     * @param newSize new {@link Buffer} required size.
     * @return reallocated {@link Buffer}.
     */
    public E reallocate(E oldBuffer, int newSize);

    /**
     * Releases a {@link Buffer}.
     * The implementation may ignore the release and let the JVM garbage
     * collector take care of the {@link Buffer}, or return the {@link Buffer}
     * to a pool, in the case of a more complex <tt>MemoryManager</tt>
     * implementation.
     *
     * @param buffer {@link Buffer} to be released.
     */
    public void release(E buffer);
}
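For illustration only, here is a minimal sketch (not part of the Grizzly sources) of how application code might exercise this contract; the class and helper method names are hypothetical, and the imports assume the standard Grizzly 2.x package layout.

import org.glassfish.grizzly.Buffer;
import org.glassfish.grizzly.memory.MemoryManager;

public class MemoryManagerUsageSketch {

    // Hypothetical helper: allocate at least 1KB, grow if necessary, and fill it.
    static <E extends Buffer> E fillBuffer(final MemoryManager<E> mm, final byte[] source) {
        // The manager may hand back a Buffer larger than requested.
        E buffer = mm.allocateAtLeast(1024);

        // If the data will not fit, ask the manager to reallocate; it may copy
        // the content into a larger Buffer or apply its own pooling logic.
        if (buffer.remaining() < source.length) {
            buffer = mm.reallocate(buffer, source.length);
        }

        buffer.put(source);
        return buffer;
    }

    // Hypothetical helper: hand the Buffer back so a pooling manager can re-use it.
    static <E extends Buffer> void done(final MemoryManager<E> mm, final E buffer) {
        mm.release(buffer);
    }
}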
There is typically a single MemoryManager servicing all transports defined within the Grizzly runtime. This MemoryManager can be obtained by calling:
TransportFactory.getInstance().getDefaultMemoryManager();
Alternatively, custom MemoryManager implementations may be made available to the application by calling:
TransportFactory.getInstance().setDefaultMemoryManager(MemoryManager defaultMemoryManager);
The MemoryManager instance exposed by the TransportFactory can also be obtained without calling the TransportFactory directly, as it is available from the NIOTransport being used by the application.
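As a rough sketch, assuming the transport exposes its MemoryManager through a getMemoryManager() accessor (check the Transport javadocs for the exact method), a component holding an NIOTransport reference could allocate memory like this; the class and method names below are illustrative only.

import org.glassfish.grizzly.Buffer;
import org.glassfish.grizzly.memory.MemoryManager;
import org.glassfish.grizzly.nio.NIOTransport;

public class TransportMemorySketch {

    static void allocateFromTransport(final NIOTransport transport) {
        // The transport exposes the MemoryManager it was configured with, so
        // there is no need to go through the TransportFactory here.
        final MemoryManager mm = transport.getMemoryManager();

        final Buffer buffer = mm.allocate(4096);
        try {
            // ... use the buffer ...
        } finally {
            // Hand the memory back when finished with it.
            buffer.dispose();
        }
    }
}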
Grizzly 2.2.10 includes two MemoryManager implementations: HeapMemoryManager and ByteBufferManager. By default, the Grizzly runtime uses the HeapMemoryManager; however, if a Grizzly application requires direct ByteBuffer access, the ByteBufferManager can be used.
The ByteBufferManager implementation vends Buffers that wrap JDK ByteBuffer instances.
Developers may wonder why Grizzly has a HeapMemoryManager when the ByteBufferManager can be used with both direct and heap ByteBuffers. The main reasons are cheaper allocation and the ability to make operations such as trim() and shrink() (covered later in this section) equally cheap.
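As a hedged sketch of switching the runtime over to ByteBuffer-backed buffers, the following installs a ByteBufferManager as the default using the TransportFactory API shown earlier; it assumes the no-argument ByteBufferManager constructor, and the constructors controlling direct allocation should be confirmed in the javadocs.

import org.glassfish.grizzly.TransportFactory;
import org.glassfish.grizzly.memory.ByteBufferManager;

public class DefaultMemoryManagerConfigSketch {

    public static void main(String[] args) {
        // Replace the default HeapMemoryManager with a ByteBufferManager so that
        // vended Buffers wrap JDK ByteBuffer instances (see the ByteBufferManager
        // javadocs for enabling direct ByteBuffers).
        TransportFactory.getInstance().setDefaultMemoryManager(new ByteBufferManager());
    }
}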
ThreadLocal memory pools provide the ability to allocate memory without any synchronization cost. Both the ByteBufferManager and HeapMemoryManager use such pools. Note that a custom MemoryManager is not required to use such pools; however, if it implements the ThreadLocalPoolProvider interface, it must provide a ThreadLocalPool implementation. The ThreadLocalPool implementation will be created and passed to each of the threads managed by Grizzly.
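For illustration, the following sketch (assuming ThreadLocalPoolProvider resides in the org.glassfish.grizzly.memory package) checks whether the default MemoryManager participates in thread-local pooling:

import org.glassfish.grizzly.TransportFactory;
import org.glassfish.grizzly.memory.MemoryManager;
import org.glassfish.grizzly.memory.ThreadLocalPoolProvider;

public class ThreadLocalPoolCheckSketch {

    public static void main(String[] args) {
        final MemoryManager mm = TransportFactory.getInstance().getDefaultMemoryManager();

        // Both HeapMemoryManager and ByteBufferManager implement
        // ThreadLocalPoolProvider, so allocations made on Grizzly-managed
        // threads can avoid synchronization.
        if (mm instanceof ThreadLocalPoolProvider) {
            System.out.println("Default MemoryManager provides thread-local pools");
        }
    }
}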
Grizzly 2.2.10 provides several buffers for developers to leverage when creating applications. These Buffer implementations offer features not available when using the JDK's ByteBuffer.
The Buffer is essentially the analogue to the JDK's ByteBuffer. It offers the same set of methods for:
Pushing/pulling data to/from the Buffer.
Accessing or manipulating the Buffer's position, limit, and capacity.
In addition to offering familiar ByteBuffer semantics, the following features are available:
Splitting, trimming, and shrinking.
Prepending another Buffer's content to the current Buffer.
Converting the Buffer to a ByteBuffer or ByteBuffer[].
Converting Buffer content to a String.
Please see the javadocs for further details on Buffer.
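As an illustrative sketch (not taken from the Grizzly sources), the following exercises a few of the conveniences listed above; it assumes the toStringContent() and toByteBuffer() methods described in the Buffer javadocs.

import java.nio.ByteBuffer;
import java.nio.charset.Charset;

import org.glassfish.grizzly.Buffer;
import org.glassfish.grizzly.TransportFactory;
import org.glassfish.grizzly.memory.MemoryManager;

public class BufferFeaturesSketch {

    public static void main(String[] args) {
        final MemoryManager mm = TransportFactory.getInstance().getDefaultMemoryManager();

        final Buffer buffer = mm.allocate(64);
        buffer.put("Hello, Grizzly!".getBytes(Charset.forName("UTF-8")));
        buffer.flip(); // position/limit behave just like the JDK's ByteBuffer

        // Features beyond ByteBuffer:
        final String content = buffer.toStringContent();  // decode the content as a String
        final ByteBuffer nioView = buffer.toByteBuffer(); // expose the content as a JDK ByteBuffer

        System.out.println(content);

        buffer.dispose();
    }
}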
The CompositeBuffer is another Buffer implementation that allows Buffer instances to be appended to it. The CompositeBuffer maintains a virtual position, limit, and capacity based on the Buffers that have been appended, and can be treated as a simple Buffer instance.
Please see the javadocs for further details on CompositeBuffer.
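As a closing sketch, the following shows how a CompositeBuffer might be assembled from two Buffers; it assumes the CompositeBuffer.newBuffer(MemoryManager) factory and the append(Buffer) method, so check the javadocs for the exact signatures.

import org.glassfish.grizzly.Buffer;
import org.glassfish.grizzly.TransportFactory;
import org.glassfish.grizzly.memory.CompositeBuffer;
import org.glassfish.grizzly.memory.MemoryManager;

public class CompositeBufferSketch {

    public static void main(String[] args) {
        final MemoryManager mm = TransportFactory.getInstance().getDefaultMemoryManager();

        final Buffer first = mm.allocate(16);
        final Buffer second = mm.allocate(16);
        // ... fill both buffers ...

        // Appended Buffers are exposed through a single virtual position, limit,
        // and capacity, so the composite reads like one contiguous Buffer.
        final CompositeBuffer composite = CompositeBuffer.newBuffer(mm);
        composite.append(first);
        composite.append(second);

        System.out.println("composite capacity = " + composite.capacity());
    }
}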