Optimizing Asynchronous Communications for Scalability
The traditional servlet Request/Response I/O model, supported by the majority of Java EE application servers, consumes one thread per connection for the entire duration of the Request-Response lifecycle. When using ICEfaces in asynchronous update mode in this scenario, a thread is effectively consumed on the server to maintain the asynchronous connection for each client browser connected to the server. In addition, each user interaction triggers another transient request-response cycle, which temporarily consumes an additional thread on the server, so up to two server threads may be required to support each ICEfaces user. In practice, the number falls somewhere between one and two threads per user, depending on how frequently the user interacts with the application; for example, 1,000 concurrent users could require on the order of 1,000 to 2,000 server threads. While this does not typically pose a problem when the number of concurrent users is relatively low (<100), it can become an issue when larger numbers of concurrent users must be supported. There are two established solutions to this limitation:
1. Increase the size of the server thread-pool used for servicing servlet requests. For example, Tomcat 5.5 uses a default thread-pool size of 150 threads. Testing has shown that this is enough to support more than 100 concurrent ICEfaces users in asynchronous update mode. However, by increasing the Tomcat thread pool to a larger size (for example, 500 or 1,000), a much larger number of concurrent users can be supported (see the connector sketch following this list). The limiting factor in this scenario is the number of threads that the hardware and software platform can effectively support. Depending on the capabilities of your server, increasing the thread-pool size is often all that is required to meet the scalability requirements of many applications.
2. Leverage servers that support Asynchronous Request Processing (ARP). Another approach that uses the available resources much more efficiently is to leverage the ARP techniques for asynchronous communications supported by an ever-growing number of application servers. ARP implementations leverage non-blocking I/O (NIO) techniques to provide support for highly scalable asynchronous communications. In the ARP scenario, a thread pool is used to service the asynchronous requests. When a request is received, a thread is temporarily allocated to service it and is then released back to the pool while the application server processes the request and creates a response. When a response is ready, a thread is again allocated from the pool to send the response to the browser. ICEfaces asynchronous connections are often idle for relatively long periods between the request and the response while the connection waits for an asynchronous update on the server. For this reason, ARP is particularly well-suited for use with asynchronous ICEfaces applications and can provide scalability, in terms of concurrent asynchronous users, that is far greater than the traditional servlet I/O model on the same hardware.
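The following sketch, referenced from option 1 above, illustrates a Tomcat server.xml HTTP Connector with an enlarged request-processing thread pool. The maxThreads value of 500 is an assumed example, not a recommendation; tune it to what your hardware and software platform can actually sustain.

<!-- Illustrative sketch only: raise the request-processing thread pool via
     the maxThreads attribute (Tomcat 5.5 ships with 150, as noted above);
     500 is an example value. -->
<Connector port="8080" protocol="HTTP/1.1"
           maxThreads="500"
           connectionTimeout="60000"
           redirectPort="8443" />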
ICEfaces can optionally support ARP configurations for the following servers: GlassFish (Grizzly), Jetty (Continuations), and Tomcat 6.0 (NIO).
GlassFish
GlassFish provides an asynchronous request processing (ARP) facility called "Grizzly" that ICEfaces can leverage to provide more efficient asynchronous communications to applications utilizing asynchronous update mode.
To configure ICEfaces to use Grizzly:
1. For GlassFish V2, add the cometSupport property to the http-listener in (for example) domains/domain1/config/domain.xml:
<http-listener acceptor-threads="1" address="0.0.0.0" blocking-enabled="false"
    default-virtual-server="server" enabled="true" family="inet"
    id="http-listener-1" port="8080" security-enabled="false"
    server-name="" xpowered-by="true">
    <property name="cometSupport" value="true"/>
</http-listener>

For GlassFish V3, add the cometSupport property to the http-listener in (for example) glassfish/domains/domain1/config/domain.xml:
<http-listener default-virtual-server="server" server-name="" address="0.0.0.0"
    port="8080" id="http-listener-1">
    <property name="cometSupport" value="true"/>
</http-listener>

ICEfaces will now auto-detect GlassFish plus Grizzly and will attempt to use the ARP capabilities of Grizzly if they are enabled. To disable use of the GlassFish Grizzly ARP facilities, the com.icesoft.faces.useARP context parameter can be set to false in the web.xml file.
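For example, a minimal web.xml sketch of this parameter (the same context parameter is shown again in the Jetty section below):

<!-- Disables ICEfaces' use of ARP (GlassFish Grizzly, in this case). -->
<context-param>
    <param-name>com.icesoft.faces.useARP</param-name>
    <param-value>false</param-value>
</context-param>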
If this auto-configuration fails, ICEfaces will revert to using the traditional Thread Blocking async IO mechanism. You can verify that ICEfaces is using the Grizzly ARP mechanism by reviewing the ICEfaces log file (at INFO level). The following log messages may be present.
GlassFish ARP available: true
Adapting to GlassFish ARP environment
Failed to add Comet handler... (if ARP configuration fails only)
Falling back to Thread Blocking environment (if ARP configuration fails only)

Jetty
The Jetty servlet container provides an asynchronous request processing (ARP) facility called "Continuations" that ICEfaces can leverage to provide more efficient asynchronous communications to applications utilizing asynchronous update mode.
When running in a Jetty container, ICEfaces automatically detects the presence of the Continuations API via reflection and self-configures to use it by default. To disable the use of Continuations when using Jetty, the following ICEfaces configuration parameter can be specified in the web.xml file:
<context-param>
    <param-name>com.icesoft.faces.useARP</param-name>
    <param-value>false</param-value>
</context-param>

Note: The previous Jetty-specific com.icesoft.faces.useJettyContinuations configuration parameter can still be used, but it has been deprecated.
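For reference, a sketch of a web.xml entry using the deprecated Jetty-specific parameter; this assumes that, like com.icesoft.faces.useARP, a value of false disables the use of Continuations:

<!-- Deprecated parameter; prefer com.icesoft.faces.useARP. A value of
     false is assumed to disable Continuations, mirroring useARP. -->
<context-param>
    <param-name>com.icesoft.faces.useJettyContinuations</param-name>
    <param-value>false</param-value>
</context-param>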
Tomcat 6.0
Beginning with Tomcat 6.0, the Tomcat servlet container supports an optional non-blocking I/O (NIO) facility that ICEfaces can leverage to provide more efficient asynchronous communications to applications utilizing asynchronous update mode. JBoss 4.2 installations that utilize the default Tomcat 6.0 servlet container can also benefit from this configuration.
To configure Tomcat 6 to use the NIO connector:
1. Modify the Connector entry in the Tomcat conf/server.xml file to use the NIO protocol:

<Connector port="8080"
    protocol="org.apache.coyote.http11.Http11NioProtocol"
    connectionTimeout="60000"
    redirectPort="8443" />

2. Register the ICEfaces Tomcat Push Servlet in the application's web.xml file:

<servlet>
    <servlet-name>Tomcat Push Servlet</servlet-name>
    <servlet-class>
        com.icesoft.faces.webapp.http.servlet.TomcatPushServlet
    </servlet-class>
    <load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
    <servlet-name>Tomcat Push Servlet</servlet-name>
    <url-pattern>/block/receive-updated-views/*</url-pattern>
</servlet-mapping>

There is a known issue with JBoss 4.2 where the default JBoss ReplyHeaderFilter will fail with the above configuration. A work-around is to disable the ReplyHeaderFilter using the following configuration change in the ../server/default/deploy/jboss-web.deployer/conf/web.xml file.
Change the existing filter-mapping:

<filter-mapping>
    <filter-name>CommonHeadersFilter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>

to:

<filter-mapping>
    <filter-name>CommonHeadersFilter</filter-name>
    <url-pattern>/</url-pattern>
</filter-mapping>

This effectively disables the filter; however, since it only adds JBoss branding to the HTTP headers, this is likely acceptable.