The JMS API provides a separate domain for each messaging approach, point-to-point or publish/subscribe. The point-to-point domain is built around the concepts of queues, senders, and receivers. The publish/subscribe domain is built around the concepts of topics, publishers, and subscribers. Additionally, the API provides a unified domain with common interfaces that enable the use of both queues and topics; this domain defines the concepts of producers and consumers. The classic sample uses a very simple centralized configuration made of one server hosting a queue and a topic. The server is administratively configured to accept connection requests from the anonymous user.
JMS clustering aims to provide both scalability and high availability for JMS access. This document gives an overview of the JORAM capabilities for clustering a JMS application in the J2EE context. The load-balancing and failover mechanisms are described, and a user guide explains how to build such a configuration. Further information is available in the JORAM documentation.
The following information will be presented:
Getting started:
General Information:
The following scenario and general settings are proposed:
Why use a clustered Topic?
A non-hierarchical topic may also be distributed among several servers. Such a topic, seen as a single logical topic, is made of topic representatives, one per server. This architecture allows a publisher to publish messages on any representative of the topic. In the example, the publisher works with the representative on server 0. If a subscriber subscribes to any other representative (on server 1 in the example), it will still get the messages produced by the publisher.
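The forwarding behaviour between representatives can be sketched as a small simulation. This is a toy Python model of the logical topic described above, not JORAM code; the class and method names are illustrative assumptions.

```python
# Toy model of a clustered topic: one representative per server, and every
# publication on any representative is forwarded to the others, so a
# subscriber attached to a different server still receives the message.

class TopicRepresentative:
    def __init__(self, server_id):
        self.server_id = server_id
        self.peers = []        # other representatives of the same logical topic
        self.subscribers = []  # callbacks registered on this server

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, message):
        self._deliver(message)
        for peer in self.peers:          # forward to the other servers
            peer._deliver(message)

    def _deliver(self, message):
        for callback in self.subscribers:
            callback(message)

def cluster(*reps):
    """Link representatives so they form one logical topic."""
    for rep in reps:
        rep.peers = [r for r in reps if r is not rep]

# Publisher on server 0, subscriber on server 1, as in the example above.
s0, s1 = TopicRepresentative(0), TopicRepresentative(1)
cluster(s0, s1)
received = []
s1.subscribe(received.append)
s0.publish("hello")
print(received)  # ['hello']
```

The subscriber attached to server 1 receives the message published on server 0, which is exactly the single-logical-topic behaviour the cluster provides.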
Load balancing of topics is very useful because it allows topic subscriptions to be distributed across the cluster.
How to configure everything is now explained in greater detail.
<?xml version="1.0"?>
<config>
  <domain name="D1"/>
  <server id="0" name="S0" hostname="localhost">
    <network domain="D1" port="16301"/>
    <service class="org.objectweb.joram.mom.proxies.ConnectionManager" args="root root"/>
    <service class="org.objectweb.joram.mom.proxies.tcp.TcpProxyService" args="16010"/>
  </server>
  <server id="1" name="S1" hostname="localhost">
    <network domain="D1" port="16302"/>
    <service class="org.objectweb.joram.mom.proxies.ConnectionManager" args="root root"/>
    <service class="org.objectweb.joram.mom.proxies.tcp.TcpProxyService" args="16020"/>
  </server>
</config>
<Cluster>
  <Topic name="mdbTopic" serverId="0">
    <freeReader/>
    <freeWriter/>
    <jndi name="mdbTopic"/>
  </Topic>
  <Topic name="mdbTopic" serverId="1">
    <freeReader/>
    <freeWriter/>
    <jndi name="mdbTopic"/>
  </Topic>
  <freeReader/>
  <freeWriter/>
  <reader user="anonymous"/>
  <writer user="anonymous"/>
</Cluster>
#!/bin/ksh
export JONAS_BASE=$PWD/jb1
cp $JONAS_ROOT/examples/output/ejbjars/newsamplemdb.jar $JONAS_BASE/ejbjars/
jonas admin -a newsamplemdb.jar -n node1

export JONAS_BASE=$PWD/jb2
cp $JONAS_ROOT/examples/output/ejbjars/newsamplemdb.jar $JONAS_BASE/ejbjars/
jonas admin -a newsamplemdb.jar -n node2
jclient newsamplemdb.MdbClient
ClientContainer.info : Starting client...
JMS client: tcf = TCF:localhost-16010
JMS client: topic = topic#1.1.1026
JMS client: tc = Cnx:#0.0.1026:5
MDBsample is Ok

In addition, the following should appear on the other JOnAS:
Message received: Message6
MdbBean onMessage
Message received: Message10
MdbBean onMessage
Message received: Message9
The fact that messages appear on the two different JOnAS instances shows the load balancing. It is more visible for a Queue, because one part of the whole set of messages is delivered to the first JOnAS base and the other part to the second.
At first glance, load balancing queues may seem meaningless compared to load balancing topics; it would be a bit like load balancing stateful session bean instances (which only require failover). But the JORAM distributed architecture makes it possible to distribute the load of queue access across several JORAM server nodes.
Here is a diagram of what happens for the Queue and the messages:

A load-balancing message queue may be needed for a high rate of messages. A clustered queue is a cluster of queues exchanging messages depending on their load. The example has a cluster of two queues. A heavy producer accesses its local queue and sends messages. The local queue quickly becomes loaded and decides to forward messages to the other queue of its cluster, which is not under heavy load.
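The threshold-based forwarding described above can be sketched as a small simulation. This is a toy Python illustration, not JORAM code; only the property name producThreshold is taken from the configuration below, everything else is an assumption.

```python
# Toy sketch of clustered-queue load balancing: a producer sends to its local
# queue; once the local backlog reaches a threshold (JORAM's "producThreshold"),
# new messages are forwarded to the other, less loaded queue of the cluster.

class ClusterQueue:
    def __init__(self, name, produc_threshold=50):
        self.name = name
        self.produc_threshold = produc_threshold
        self.backlog = []
        self.peer = None  # the other queue of the cluster

    def send(self, message):
        # Forward to the peer when this queue is overloaded and the peer is not.
        if (self.peer is not None
                and len(self.backlog) >= self.produc_threshold
                and len(self.peer.backlog) < self.peer.produc_threshold):
            self.peer.backlog.append(message)
        else:
            self.backlog.append(message)

q0 = ClusterQueue("mdbQueue@0", produc_threshold=50)
q1 = ClusterQueue("mdbQueue@1", produc_threshold=50)
q0.peer, q1.peer = q1, q0

for i in range(80):  # a heavy producer on server 0
    q0.send(f"Message{i}")

print(len(q0.backlog), len(q1.backlog))  # 50 30
```

The first 50 messages stay on the local queue; once the threshold is reached, the remaining 30 are forwarded to the idle queue of the cluster.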
For this case some parameters must be set:
For further information, see the JORAM documentation.
The scenario for the Queue is similar to the previous one. In fact, steps 1 and 2 are the same. Make the following changes for step 3:
<Cluster>
  <Queue name="mdbQueue" serverId="0" className="org.objectweb.joram.mom.dest.ClusterQueue">
    <freeReader/>
    <freeWriter/>
    <property name="period" value="10000"/>
    <property name="producThreshold" value="50"/>
    <property name="consumThreshold" value="2"/>
    <property name="autoEvalThreshold" value="false"/>
    <property name="waitAfterClusterReq" value="1000"/>
    <jndi name="mdbQueue"/>
  </Queue>
  <Queue name="mdbQueue" serverId="1" className="org.objectweb.joram.mom.dest.ClusterQueue">
    <freeReader/>
    <freeWriter/>
    <property name="period" value="10000"/>
    <property name="producThreshold" value="50"/>
    <property name="consumThreshold" value="2"/>
    <property name="autoEvalThreshold" value="false"/>
    <property name="waitAfterClusterReq" value="1000"/>
    <jndi name="mdbQueue"/>
  </Queue>
  <freeReader/>
  <freeWriter/>
  <reader user="anonymous"/>
  <writer user="anonymous"/>
</Cluster>
An HA server is actually a group of servers, one of which is the master server that coordinates the other slave servers. An external server that communicates with the HA server is actually connected to the master server.
Each replicated JORAM server executes the same code as a standard server except for the communication with the clients.
In the example, the collocated clients use a client module (newsamplemdb). If the server replica is the master, the connection is active, enabling the client to use the HA JORAM server. If the replica is a slave, the connection opening blocks until the replica becomes the master.
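The blocking behaviour described above can be illustrated with a small simulation. This is a toy Python sketch under stated assumptions, not JORAM code: an event stands in for the master election, and a timer stands in for the failover that promotes a slave.

```python
# Minimal sketch of the HA connection behaviour: opening a connection succeeds
# only on the master replica; on a slave it blocks until that replica is
# promoted, which is the behaviour collocated clients see with
# HALocalConnection.

import threading

class Replica:
    def __init__(self, name, master=False):
        self.name = name
        self._promoted = threading.Event()
        if master:
            self._promoted.set()

    def promote(self):
        self._promoted.set()

    def open_connection(self, timeout=None):
        # Blocks until this replica is the master.
        if not self._promoted.wait(timeout):
            raise TimeoutError("replica is still a slave")
        return f"connected to {self.name}"

master = Replica("replica-0", master=True)
slave = Replica("replica-1")

first = master.open_connection(timeout=1)   # connects immediately
threading.Timer(0.1, slave.promote).start() # simulate master failure + promotion
second = slave.open_connection(timeout=5)   # blocks until promotion
print(first)   # connected to replica-0
print(second)  # connected to replica-1
```

The client attached to the slave makes no progress until the promotion happens, then its pending connection completes transparently.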
Several files must be changed to create a JORAM HA configuration:
A clustered server is defined by the element "cluster". A cluster owns an identifier and a name defined by the attributes "id" and "name" (exactly like a standard server). Two properties must be defined:
<?xml version="1.0"?>
<config>
  <domain name="D1"/>
  <property name="Transaction" value="fr.dyade.aaa.util.NullTransaction"/>
  <cluster id="0" name="s0">
    <property name="Engine" value="fr.dyade.aaa.agent.HAEngine" />
    <property name="nbClusterExpected" value="1" />
For each replica, an element "server" must be added. The attribute "id" defines the identifier of the replica inside the cluster. The attribute "hostname" gives the address of the host where the replica is running. The network is used by the replica to communicate with external agent servers, i.e., servers located outside of the cluster that are not replicas.
This is the entire configuration for the a3servers.xml file of the first JOnAS instance jb1:
<?xml version="1.0"?>
<config>
  <domain name="D1"/>
  <property name="Transaction" value="fr.dyade.aaa.util.NullTransaction"/>
  <cluster id="0" name="s0">
    <property name="Engine" value="fr.dyade.aaa.agent.HAEngine" />
    <property name="nbClusterExpected" value="1" />
    <server id="0" hostname="localhost">
      <network domain="D1" port="16300"/>
      <service class="org.objectweb.joram.mom.proxies.ConnectionManager" args="root root"/>
      <service class="org.objectweb.joram.mom.proxies.tcp.TcpProxyService" args="16010"/>
      <service class="org.objectweb.joram.client.jms.ha.local.HALocalConnection"/>
    </server>
    <server id="1" hostname="localhost">
      <network domain="D1" port="16301"/>
      <service class="org.objectweb.joram.mom.proxies.ConnectionManager" args="root root"/>
      <service class="org.objectweb.joram.mom.proxies.tcp.TcpProxyService" args="16020"/>
      <service class="org.objectweb.joram.client.jms.ha.local.HALocalConnection"/>
    </server>
  </cluster>
</config>

The cluster has id 0 and name s0. The file is exactly the same for the second instance of JOnAS.
<?xml version="1.0"?>
<JoramAdmin>
  <AdminModule>
    <collocatedConnect name="root" password="root"/>
  </AdminModule>
  <ConnectionFactory className="org.objectweb.joram.client.jms.ha.tcp.HATcpConnectionFactory">
    <hatcp url="hajoram://localhost:16010,localhost:16020" reliableClass="org.objectweb.joram.client.jms.tcp.ReliableTcpClient"/>
    <jndi name="JCF"/>
  </ConnectionFactory>
  <ConnectionFactory className="org.objectweb.joram.client.jms.ha.tcp.QueueHATcpConnectionFactory">
    <hatcp url="hajoram://localhost:16010,localhost:16020" reliableClass="org.objectweb.joram.client.jms.tcp.ReliableTcpClient"/>
    <jndi name="JQCF"/>
  </ConnectionFactory>
  <ConnectionFactory className="org.objectweb.joram.client.jms.ha.tcp.TopicHATcpConnectionFactory">
    <hatcp url="hajoram://localhost:16010,localhost:16020" reliableClass="org.objectweb.joram.client.jms.tcp.ReliableTcpClient"/>
    <jndi name="JTCF"/>
  </ConnectionFactory>
Each connection factory has its own specification: one for Queues, one for Topics, and one generic factory with no specified destination type. In each case the hatcp url must list the URLs of the two instances; in the example, localhost:16010 and localhost:16020. This allows the client to switch to the other instance when the first one dies.
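What the comma-separated hajoram URL gives the client can be sketched as follows. This is a hedged Python illustration of the failover idea, not the actual JORAM client code; the helper names and the probe callback are assumptions.

```python
# Sketch of failover on a "hajoram://host1:port1,host2:port2" URL: the client
# holds an ordered list of replicas and tries each one until a connection
# attempt succeeds, so it can switch instances when the first one is dead.

def parse_ha_url(url):
    """Split the hajoram URL into an ordered list of (host, port) pairs."""
    prefix = "hajoram://"
    assert url.startswith(prefix)
    pairs = []
    for addr in url[len(prefix):].split(","):
        host, port = addr.split(":")
        pairs.append((host, int(port)))
    return pairs

def connect(url, is_alive):
    # is_alive is a probe standing in for a real TCP connection attempt.
    for host, port in parse_ha_url(url):
        if is_alive(host, port):
            return (host, port)
    raise ConnectionError("no replica reachable")

url = "hajoram://localhost:16010,localhost:16020"
# Pretend the first instance (port 16010) is dead:
chosen = connect(url, lambda host, port: port == 16020)
print(chosen)  # ('localhost', 16020)
```

With the first address unreachable, the client transparently falls over to the second instance listed in the URL.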
After this definition, the user, the queue, and the topic can be created.
First, in order to recognize the cluster, a new parameter must be declared in these files.
<config-property>
  <config-property-name>ClusterId</config-property-name>
  <config-property-type>java.lang.Short</config-property-type>
  <config-property-value>0</config-property-value>
</config-property>
The name ClusterId is not really appropriate, but it was kept for coherence. It actually identifies a replica, so replicaId would have been a better name.
Consequently, for the first JOnAS instance, copy the code above as is. For the second instance, change the value to 1 (to indicate that this is another replica).
First launch the two JOnAS bases. Create a runHa.sh file containing the following code:
export JONAS_BASE=$PWD/jb1
export CATALINA_BASE=$JONAS_BASE
rm -f $JONAS_BASE/logs/*
jonas start -win -n node1 -Ddomain.name=HA
Then do the same for the second JOnAS base. After that launch the script.
One of the two JOnAS bases (the slower one) will be in a waiting state when reading joramAdmin.xml:
JoramAdapter.start : - Collocated JORAM server has successfully started.
JoramAdapter.start : - Reading the provided admin file: joramAdmin.xml
whereas the other one is launched successfully.
Then launch (through a script or not) the newsamplemdb example:
jclient -cp /JONAS_BASE/jb1/ejbjars/newsamplemdb.jar:/JONAS_ROOT/examples/classes -carolFile clientConfig/carol.properties newsamplemdb.MdbClient
Messages are sent to the JOnAS base that was launched first. Launch the client again and kill the current JOnAS: the second JOnAS automatically wakes up and takes care of the remaining messages.
This is a proposal for building an MDB clustering based application.
This is the case of competing receivers on a Queue, i.e., more than one receiver, on different machines, receiving from the same queue. This balances the work done by the queue receivers, not the load on the queue itself.
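The distinction can be illustrated with a toy simulation: several consumers draining one queue, so the processing work is shared even though the queue itself is a single destination. This is an illustrative Python sketch, not JORAM code; the round-robin draw is an assumption standing in for two MDB instances pulling from the same queue.

```python
# Toy illustration of competing receivers on a single queue: two consumers on
# different nodes take messages from the same queue, balancing the processing
# load without clustering the queue itself.

from collections import deque

queue = deque(f"Message{i}" for i in range(10))
processed = {"node1": [], "node2": []}

# Round-robin stands in for two MDB instances pulling from the same queue.
consumers = ["node1", "node2"]
i = 0
while queue:
    node = consumers[i % len(consumers)]
    processed[node].append(queue.popleft())
    i += 1

print(len(processed["node1"]), len(processed["node2"]))  # 5 5
```

Each message is consumed exactly once, and the work is split between the two nodes; the queue remains a single, unclustered destination.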
The HA mechanism can be mixed with the load balancing policy based on clustered destinations. The load is balanced between several HA servers. Each element of a clustered destination is deployed on a separate HA server.
Here is the supposed configuration (supposed because it has not been verified).
The configuration must now be tested, as follows: