Load-balancing at the web level with mod_jk
Configure HTTP session replication with Tomcat
CMI Configuration (JNDI & EJB load balancing)
The exact layout depends on the distribution being used. Create the files tomcat_jk.conf
and workers.properties
in "$APACHE_HOME/conf/jk/".
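The tomcat_jk.conf file then has to be pulled into the main Apache configuration. A minimal sketch of the httpd.conf additions, assuming mod_jk is installed as modules/mod_jk.so (the module path is an assumption and varies per distribution):

```apache
# Load the mod_jk module (path is distribution-dependent)
LoadModule jk_module modules/mod_jk.so
# Include the worker/mount configuration created above
Include conf/jk/tomcat_jk.conf
```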
tomcat_jk.conf:
# Location of the worker file
JkWorkersFile "conf/jk/workers.properties"
# Location of the log file
JkLogFile "conf/jk/mod_jk.log"
# Log level : debug, info, error or emerg
JkLogLevel emerg
# Shared Memory Filename ( Only for Unix platform )
# required by loadbalancer
JkShmFile conf/jk/jk.shm
# Assign specific URL to Tomcat workers
# A mount point from a context to a Tomcat worker
JkMount /sampleCluster2 loadbalancer
JkMount /sampleCluster2/* loadbalancer
# A mount point to the status worker
JkMount /jkmanager jkstatus
JkMount /jkmanager/* jkstatus
# Enable the Jk manager access only from localhost
<Location /jkmanager/>
JkMount jkstatus
Order deny,allow
Deny from all
Allow from 127.0.0.1
</Location>
The following is an example of a workers.properties file:
# List the workers' names
worker.list=loadbalancer,jkstatus
# ----------------
# First worker
# ----------------
worker.worker1.port=9010
worker.worker1.host=server1
worker.worker1.type=ajp13
# Load balance factor
worker.worker1.lbfactor=1
# Define preferred failover node for worker 1
#worker.worker1.redirect=worker2
# ----------------
# Second worker
# ----------------
worker.worker2.port=9011
worker.worker2.host=server2
worker.worker2.type=ajp13
worker.worker2.lbfactor=1
# Disable worker2 for all requests except failover
#worker.worker2.disabled=True
# ----------------------
# Load Balancer worker
# ----------------------
worker.loadbalancer.type=lb
worker.loadbalancer.balanced_workers=worker1,worker2
#Specifies whether requests with session's id should be routed to the same worker
#worker.loadbalancer.sticky_session=false
# ----------------------
# jkstatus worker
# ----------------------
worker.jkstatus.type=status
Explanations:
port: Port number of the remote Tomcat instance listening for requests of the defined protocol.
host: Host name or IP address of the backend JOnAS/Tomcat node. Can be set to localhost when the cluster members are collocated on a single machine.
type: Type of the worker (one of ajp13, ajp14, jni, status or lb). The status worker makes it possible to manage load-balancing parameters and status through a web interface; in the above example, use the URL http://localhost/jkmanager.
lbfactor: An integer indicating how much work a worker should receive relative to the others. For example, with worker1.lbfactor=2 and worker2.lbfactor=1, worker1 receives twice as many requests as worker2.
sticky_session: Specifies whether requests carrying a session ID should be routed to the same worker (session affinity). When disabled, mod_jk distributes requests round-robin; if session replication is activated in JOnAS, the session will not be lost.
redirect: Name of the worker to use when the current worker is in an error state.
disabled: True/False, the default status of the current worker.
The redirect/disabled parameters make it possible to define a failover configuration between two workers. In the above example, the load balancer redirects requests to worker2 if worker1 is in an error state. Otherwise, worker2 receives no requests and thus acts as a hot standby.
Note: Refer to the workers.properties and workers howto documentation.
Other possible mod_jk configurations (including mod_jk2, mod_jk, and migration from mod_jk2 to mod_jk)
Attention: mod_jk2 is deprecated.
<!-- Define an AJP 1.3 Connector on port 9010 -->
<Connector port="9010"
           minProcessors="5"
           maxProcessors="75"
           acceptCount="10"
           debug="20"
           protocol="AJP/1.3"/>
Explanations:
port: The TCP port number on which this Connector creates a server socket and awaits incoming connections.
minProcessors: The minimum number of processors to start at initialization time. If not specified, this attribute is set to 5.
maxProcessors: The maximum number of processors allowed.
acceptCount: The maximum queue length for incoming connection requests when all possible request processing threads are in use. Any request received when the queue is full is refused.
debug: The debugging detail level of log messages generated by this component; higher numbers produce more detailed output.
protocol: This attribute must be AJP/1.3 to use the AJP handler.
Note: Refer to the AJP Connector documentation.
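The Engine element explained below is declared in Tomcat's server.xml. A sketch of what it might look like, where the jvmRoute value worker1 is an assumption matching the first worker defined in workers.properties above:

```xml
<!-- jvmRoute must match the associated worker name in workers.properties -->
<Engine name="Catalina" defaultHost="localhost" jvmRoute="worker1">
  <!-- Host and Cluster elements go here -->
</Engine>
```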
Explanations:
name: Logical name of this Engine, used in log and error messages.
jvmRoute: Uniquely identifies this Tomcat server to the Apache server; the name must match a worker defined in workers.properties.
defaultHost: Identifies the Host that processes requests directed to host names on this server.
debug: The level of debugging detail logged by this Engine.
Note: The jvmRoute name must be the same as the name of the associated worker defined in workers.properties. This ensures session affinity.
Refer to the Engine Container documentation.
<Cluster className="org.apache.catalina.cluster.tcp.SimpleTcpCluster"
         clusterName="myTomcatCluster"
         managerClassName="org.apache.catalina.cluster.session.DeltaManager"
         expireSessionsOnShutdown="false"
         useDirtyFlag="true">
  <Membership className="org.apache.catalina.cluster.mcast.McastService"
              mcastAddr="228.0.0.4"
              mcastPort="45564"
              mcastFrequency="500"
              mcastDropTime="3000"
              debug="9"/>
  <Receiver className="org.apache.catalina.cluster.tcp.ReplicationListener"
            tcpListenAddress="auto"
            tcpListenPort="4001"
            tcpSelectorTimeout="100"
            tcpThreadCount="6"/>
  <Sender className="org.apache.catalina.cluster.tcp.ReplicationTransmitter"
          replicationMode="pooled"/>
  <Valve className="org.apache.catalina.cluster.tcp.ReplicationValve"
         filter=".*\.gif;.*\.js;.*\.jpg;.*\.htm;.*\.html;.*\.txt;"/>
  <Deployer className="org.apache.catalina.cluster.deploy.FarmWarDeployer"
            tempDir="/tmp/war-temp/"
            deployDir="/tmp/war-deploy/"
            watchDir="/tmp/war-listen/"
            watchEnabled="false"/>
</Cluster>
Note1: The multicast address and multicast port of the Membership element must be identically configured for all JOnAS/Tomcat instances.
Note2: When the nodes are collocated to the same machine, the tcpListenPort of the Receiver element must be unique per JOnAS/Tomcat instance.
Note: More detailed information is available in the CMI guide.
In the case of EJB level clustering (CMI), the client may be either a fat Java client (e.g., a Swing application), or a Web application (i.e., Servlets/JSPs running within JOnAS). In the second case, the JOnAS server running the Web client should be configured in the same way as the other nodes of the cluster.
In the build.properties file of the application, set the protocol name to cmi before compilation.
In the $JONAS_BASE/conf/carol.properties file of each server and of a fat Java client, set the protocol to cmi.
In the carol.properties file of each server of the cluster, configure the url, the JGroups stack, the JGroups group name, the round-robin weight factor, etc.
The JGroups stack is defined in the $JONAS_BASE/conf/jgroups-cmi.xml file. It describes the list of protocols and their parameters; for example, the UDP protocol contains the multicast address parameter. The UDP protocol is set by default and can be changed dynamically.
Note 1: The multicast address and port defined in the $JONAS_BASE/conf/jgroups-cmi.xml file
and the group name defined in the $JONAS_BASE/conf/carol.properties
file must be the same for all
JOnAS nodes in the cluster.
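A carol.properties fragment for CMI might look like the following sketch; the exact property names, the port, and the IP address are assumptions to be checked against the CMI guide:

```properties
# Use the cmi protocol for JNDI lookups and EJB calls (assumed property names)
carol.protocols=cmi
# URL of this node; use the real IP address, not the loopback address
carol.cmi.url=cmi://192.168.0.1:2002
```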
Note 2: If Tomcat session replication is used together with cmi, the multicast addresses of the two configurations must be different. The same requirement applies to the EJB high-availability configuration.
Note 3: If the cluster is spread across several machines, the CMI url must be set with the real IP address or an IP alias, not the loopback address. Otherwise, the EJBs will not be reachable from the remote machines.
This section describes how to enable the EJB replication framework (SFSB).
# Set the ha service
jonas.services registry,jmx,jtm,db,security,resource,ejb,ws,web,ear,ha,discovery
# Set the name of the implementation class of the HA service.
jonas.service.ha.class org.objectweb.jonas.ha.HaServiceImpl
# Set the group communication framework to use
jonas.service.ha.gcl jgroups
# Set the JGroups configuration file name
jonas.service.ha.jgroups.conf jgroups-ha.xml
# Set the JGroups group name
jonas.service.ha.jgroups.groupname jonas-rep
# Set the SFSB backup info timeout. The info stored in the backup node is
# removed when the timer expires.
jonas.service.ha.timeout 600
# Set the datasource for the tx table
jonas.service.ha.datasource jdbc_1
CREATE TABLE ha_transactions (txid varchar(60));
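The jdbc_1 datasource referenced by jonas.service.ha.datasource is defined in its own properties file (e.g., $JONAS_BASE/conf/jdbc_1.properties). The following is a sketch only; the HSQL URL, driver class, and credentials are assumptions:

```properties
datasource.name        jdbc_1
datasource.url         jdbc:hsqldb:hsql://localhost:9001/db_jonas
datasource.classname   org.hsqldb.jdbcDriver
datasource.username    jonas
datasource.password    jonas
```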
<jonas-session>
<ejb-name>DummySFSB</ejb-name>
<jndi-name>DummySFSB</jndi-name>
...
<cluster-replicated>true</cluster-replicated>
<cluster-home-distributor>Dummy_HomeDistributor.vm</cluster-home-distributor>
<cluster-remote-distributor>Dummy_RemoteDistributor.vm</cluster-remote-distributor>
</jonas-session>