If you have memory leaks, have thoroughly investigated them with tools like jconsole, yourkit, jprofiler, jvisualvm or any of the other profiling and analysis tools, and can eliminate your own code as the source of the problem, read the following sections about how to prevent memory leaks in your application.
This feature is available for Jetty 7.6.6 and later.
Code that keeps references to a webapp classloader can cause memory leaks. These leaks fall generally into two categories: static fields and daemon threads.
In the first category, a static field is initialized with the value of a classloader, which happens to be a webapp classloader; as Jetty undeploys and redeploys the webapp, the static reference lives on, so the webapp classloader can never be garbage collected.
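As a minimal illustration, the pattern looks like this when it occurs in a class loaded by a long-lived classloader; the class and field names here are invented for the example:

// Hypothetical illustration of the static-field leak; the class name and field
// are invented. Assume this class is loaded by a long-lived classloader (the
// JRE or the container classloader).
public class LeakyHolder
{
    // If the first code to trigger initialization of this class runs on a webapp
    // thread, this field captures the webapp classloader. The reference lives as
    // long as LeakyHolder's own classloader does, so the webapp classloader can
    // never be garbage collected, even after the webapp is undeployed.
    static final ClassLoader PINNED = Thread.currentThread().getContextClassLoader();
}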
In the second category, daemon threads that start outside the lifecycle of the webapp keep a reference to the context classloader of the thread that created them, leading to a memory leak if that classloader belongs to a webapp. For a good discussion of the issue see Anatomy of a PermGen Memory Leak.
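A minimal sketch of the daemon-thread variant; the thread and its work are invented for the example:

// Hypothetical illustration: a daemon thread started from webapp code.
// A new Thread inherits the context classloader of the thread that created it,
// so a thread created on a webapp thread holds a reference to the webapp
// classloader until the thread exits, which a daemon thread may never do.
Thread poller = new Thread(new Runnable()
{
    public void run()
    {
        while (!Thread.currentThread().isInterrupted())
        {
            // ...periodic background work...
            try { Thread.sleep(60000); } catch (InterruptedException e) { return; }
        }
    }
});
poller.setDaemon(true); // outlives webapp undeploy, pinning the webapp classloader
poller.start();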
We provide a number of workaround classes that preemptively invoke the problematic code with the Jetty classloader, thereby ensuring the webapp classloader is not pinned. Be aware that since some of the problematic code creates threads, you should be selective about which preventers you enable, and use only those that are specific to your application.
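The general idiom is simple. The following is a minimal sketch, not Jetty's exact implementation, and the helper name is invented: temporarily install the container's classloader as the thread context classloader, run the problematic initialization once at server startup, then restore the original classloader. Jetty's preventer classes encapsulate this pattern for specific known offenders, so you normally just add them as beans, as shown further below.

// Minimal sketch of the preventer idiom; the method name is hypothetical and
// this is not Jetty's exact implementation.
static void invokeWithContainerClassLoader(Runnable problematicInit)
{
    ClassLoader saved = Thread.currentThread().getContextClassLoader();
    try
    {
        // Use the classloader that loaded the Jetty server classes, so any
        // classloader captured by the problematic code is the container's,
        // not a webapp's.
        Thread.currentThread().setContextClassLoader(org.eclipse.jetty.server.Server.class.getClassLoader());
        problematicInit.run();
    }
    finally
    {
        Thread.currentThread().setContextClassLoader(saved);
    }
}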
Jetty includes the following preventers.
| Preventer Name | Problem Addressed |
| --- | --- |
| AppContextLeakPreventer | The call to AppContext.getAppContext() keeps a static reference to the context classloader. The JRE can invoke AppContext in many different places. |
| AWTLeakPreventer | The java.awt.Toolkit class has a static field that is the default toolkit. Creating the default toolkit causes the creation of an EventQueue, which has a classloader field initialized with the thread context classloader. See JBoss bug AS7-3733. |
| DOMLeakPreventer | DOM parsing can cause the webapp classloader to be pinned, due to the static field RuntimeException of com.sun.org.apache.xerces.internal.parsers.AbstractDOMParser. Oracle bug 6916498 specifically mentions that a heap dump might not identify the GCRoot as the uncollected loader, making it difficult to identify the cause of the leak. |
| DriverManagerLeakPreventer | The java.sql.DriverManager class holds a static registry of JDBC drivers, so a driver registered by (or first loaded from) a webapp pins that webapp's classloader. |
| GCThreadLeakPreventer | Calls to sun.misc.GC.requestLatency create a daemon thread that keeps a reference to the context classloader. A known caller of this method is the RMI impl. See Stackoverflow: Does java garbage collection log entry 'Full GC system' mean some class called System.gc()? |
| Java2DLeakPreventer | sun.java2d.Disposer keeps a reference to the classloader. See ASF bug 51687. |
| LDAPLeakPreventer | If the com.sun.jndi.LdapPoolManager class is loaded and the system property com.sun.jndi.ldap.connect.pool.timeout is set to a nonzero value, a daemon thread starts and keeps a reference to the context classloader. |
| LoginConfigurationLeakPreventer | The javax.security.auth.login.Configuration class keeps a static reference to the thread context classloader. |
| SecurityProviderLeakPreventer | Some security providers, such as sun.security.pkcs11.SunPKCS11, start a daemon thread that traps the thread context classloader. |
You can individually enable each preventer by adding an instance to a Server with the addBean(Object) call. Here's an example of how to do it in code with the org.eclipse.jetty.util.preventers.AppContextLeakPreventer:
Server server = new Server();
server.addBean(new AppContextLeakPreventer());
You can add the equivalent in xml to the $JETTY_HOME/etc/jetty.xml file or to any jetty xml file that configures a Server instance. Be aware that if you have more than one Server instance in your JVM, you should configure these preventers on just one of them. Here's the example above expressed in xml:
<Configure id="Server" class="org.eclipse.jetty.server.Server"> <Call name="addBean"> <Arg> <New class="org.eclipse.jetty.util.preventers.AppContextLeakPreventer"/> </Arg> </Call> </Configure>
The JSP engine in Jetty is Jasper. Jasper was originally developed under the Apache Tomcat project, but over time many different projects have forked it. All Jetty versions up to 6 used Apache-based Jasper exclusively, with Jetty 6 using Apache Jasper only for JSP 2.0. With the advent of JSP 2.1, Jetty 6 switched to using Jasper from Sun's Glassfish project, which is now the reference implementation.
All forks of Jasper suffer from a problem whereby using JSP tag files puts the permgen space under pressure. This is because of the classloading architecture of the JSP implementation. Each JSP file is effectively compiled and its class loaded in its own classloader to allow for hot replacement. Each JSP that contains references to a tag file compiles the tag if necessary and then loads it using its own classloader. If you have many JSPs that refer to the same tag file, the tag's class is loaded over and over again into permgen space, once for each JSP. See Glassfish bug 3963 and Apache bug 43878. The Apache Tomcat project has closed this bug with status WON'T FIX; however, the Glassfish folks still have the bug open and have scheduled it to be fixed. When the fix becomes available, the Jetty project will pick it up and incorporate it into our release program.
This section describes garbage collection and direct ByteBuffer problems.
One symptom of a cluster of JVM-related memory issues is an OOM exception accompanied by a message such as java.lang.OutOfMemoryError: requested xxxx bytes for xxx. Out of swap space?
Oracle bug 4697804 describes how this can happen in the scenario where the garbage collector needs to allocate a bit more space during its run and tries to resize the heap, but fails because the machine is out of swap space. One suggested workaround is to ensure that the JVM never tries to resize the heap, by setting the minimum heap size equal to the maximum heap size:
java -Xmx1024m -Xms1024m
Another workaround is to ensure you have configured sufficient swap space on your device to accommodate all programs you are running concurrently.
Exhausting native memory is another issue related to JVM bugs. The symptoms to look out for are the process size growing while heap use remains relatively constant. Both the JIT compiler and nio ByteBuffers can consume native memory. Oracle bug 6210541 discusses a still-unsolved problem whereby the JVM itself allocates a direct ByteBuffer in some circumstances while the system never garbage collects, effectively eating native memory. Guy Korland's blog discusses this problem here and here.
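If you suspect direct or mapped buffers, the JDK's BufferPoolMXBean (available since Java 7) lets you compare buffer-pool usage against heap usage. The following is a minimal sketch, not specific to Jetty:

import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.util.List;

public class BufferPoolStats
{
    public static void main(String[] args)
    {
        // Heap usage for comparison: in a native-memory leak the heap stays
        // roughly constant while the process size (and the buffer pools) grows.
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        System.out.println("heap used: " + memory.getHeapMemoryUsage().getUsed() + " bytes");

        // The "direct" pool tracks memory allocated via ByteBuffer.allocateDirect;
        // the "mapped" pool tracks MappedByteBuffers such as memory-mapped files.
        List<BufferPoolMXBean> pools = ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class);
        for (BufferPoolMXBean pool : pools)
        {
            System.out.println(pool.getName() + ": count=" + pool.getCount()
                + ", used=" + pool.getMemoryUsed() + " bytes");
        }
    }
}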
As the JIT compiler consumes native memory, the lack of available memory may manifest itself in the JIT as OutOfMemory exceptions such as Exception in thread "CompilerThread0" java.lang.OutOfMemoryError: requested xxx bytes for ChunkPool::allocate. Out of swap space?
By default, Jetty allocates and manages its own pool of direct ByteBuffers for I/O if you configure the nio SelectChannelConnector. It also allocates MappedByteBuffers to memory-map static files via the DefaultServlet settings. However, you could be vulnerable to this JVM ByteBuffer allocation problem if you have disabled either of these options. For example, if you're on Windows, you may have disabled memory-mapped buffers for the static file cache on the DefaultServlet to avoid the file-locking problem.
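For reference, this is roughly how that disabling is typically done in embedded Jetty; a minimal sketch assuming the DefaultServlet useFileMappedBuffer init parameter, the org.eclipse.jetty.servlet classes, and placeholder paths:

// Sketch (assumes the org.eclipse.jetty.servlet classes and the 'server'
// instance from the earlier example); paths are placeholders.
ServletContextHandler context = new ServletContextHandler(ServletContextHandler.SESSIONS);
context.setContextPath("/");
context.setResourceBase("/path/to/static/files");

// The useFileMappedBuffer init parameter controls memory-mapping of static files;
// "false" is the common Windows workaround for the file-locking problem.
ServletHolder defaultHolder = new ServletHolder(DefaultServlet.class);
defaultHolder.setInitParameter("useFileMappedBuffer", "false");
context.addServlet(defaultHolder, "/");
server.setHandler(context);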