Howto: JOnAS and JORAM: Distributed Message Beans

A How-To Document for JOnAS version 3.3

By Rob Jellinghaus, robj at nimblefish dot com
16 February 2004

We are developing an enterprise application which uses messaging to provide scalable data processing.  Our application includes the following components:
We wanted to arrange the application as follows:
We wanted to use JOnAS and JORAM as the platform for developing this system.  We encountered a number of configuration challenges in developing the prototype.  This document describes those challenges and provides solutions.

We constructed our system using JOnAS 3.3.1 -- many of these issues will be addressed and simplified in future JOnAS and JORAM releases.  In the meantime we hope this document is helpful.

Thanks to Frederic Maistre (frederic.maistre at objectweb dot org) without whom we would never have figured it out!

JOnAS and JORAM: Configuration Basics

The JORAM runtime by default is launched collocated with the JOnAS server (see
http://jonas.objectweb.org/current/doc/PG_JmsGuide.html#Running).  However, in this configuration the JORAM lifetime is bound to the JOnAS lifetime.  If the local JOnAS process terminates, so will the local JORAM.  For reliability it is preferable to separate the JOnAS and JORAM processes, all the more so because a collocated JORAM server is non-persistent by default.

The simplest way to separate JOnAS and JORAM, once they are non-collocated, is to create one JORAM instance on one machine in the system, and to couple all JOnAS instances to that one JORAM.  However, this is also failure-prone: if that one JORAM instance quits, all the JOnAS instances lose their connection -- and will not reconnect afterwards!

Hence, the preferred solution is to have one JOnAS instance and one JORAM instance on each participating server.  The JORAM instances must then be configured to communicate with each other, and each JOnAS instance must be configured to connect to its local JORAM instance.  This provides the greatest degree of recoverability, provided that the JORAM instances are run in persistent mode (which provides message persistence and thus guarantees delivery even in case of a server crash).

JORAM Topics and JOnAS Administration

The default configuration done by JOnAS is to create all queues and topics specified in jonas.properties when the JOnAS server starts up.  In a multi-server configuration, this is not desired.  JORAM topics and queues are hosted on one specific JORAM server.  Other JORAM servers wishing to use those topics and queues must use JNDI lookups to retrieve remote instances of those topics and queues, and must bind them locally.
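
The lookup-and-bind step that each non-hosting server must perform looks roughly like this (a minimal sketch with exception handling omitted; the host and queue names are just our examples, and the complete version appears in the JoramDistributionService source below):

// Look up a queue on the JORAM/JOnAS server which hosts it (server1 in our setup),
// then bind it into the local RMI registry so local beans find it under the same name.
java.util.Hashtable env = new java.util.Hashtable();
env.put(javax.naming.Context.INITIAL_CONTEXT_FACTORY,
        "com.sun.jndi.rmi.registry.RegistryContextFactory");
env.put(javax.naming.Context.PROVIDER_URL, "rmi://server1:1099");
javax.naming.Context remoteCtx = new javax.naming.InitialContext(env);
Object queue = remoteCtx.lookup("WorkManagerQueue");      // remote lookup

env.put(javax.naming.Context.PROVIDER_URL, "rmi://localhost:1099");
javax.naming.Context localCtx = new javax.naming.InitialContext(env);
localCtx.bind("WorkManagerQueue", queue);                 // local bind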

Moreover, each JORAM server must be launched with knowledge of its identity in the system, and  each JOnAS instance must take different configuration actions depending on its role in the system.  Hence, the configuration of each machine must be customized.

Finally, the default permissions for running a distributed JORAM environment are not compatible with JOnAS: the "anonymous" user through which JOnAS makes its JMS connections must be created explicitly on the additional JORAM servers, and the queues and topics must be opened for reading and writing so that servers other than the one hosting them can use them.
None of this configuration is part of JOnAS's or JORAM's default administration logic, so it must be performed by application-specific code, which must complete this lookup, permission setup, and binding before any of the application's JMS operations can succeed.
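
The corresponding permission setup, performed against the local JORAM server through its admin API, is roughly the following (again a sketch only; the port and server id shown are our server 2 values, and the full version is the JoramDistributionService below):

// Connect to the local JORAM server as the "root" administrator declared in a3servers.xml.
fr.dyade.aaa.joram.admin.AdminItf admin = new fr.dyade.aaa.joram.admin.AdminImpl();
admin.connect("localhost", 16011, "root", "root", 60);

// Create the "anonymous" user that JOnAS's JMS connections use; the last argument
// is this JORAM server's id (1 for server2 in a3servers.xml).
admin.createUser("anonymous", "anonymous", 1);

// Open each destination to all users so that servers other than the one hosting it
// can read from and write to it.  "queue" is a destination obtained by a JNDI lookup
// as in the previous sketch.
admin.setFreeReading((javax.jms.Destination) queue);
admin.setFreeWriting((javax.jms.Destination) queue);

admin.disconnect();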

The Solution

All these challenges can be addressed with the following set of configurations and supporting mechanisms.

Many variations are possible; we provide just the configuration that we have proved to work for us.  It is possible to rearrange the configuration significantly (to have some queues hosted on some machines, and other queues on other machines; to use a distributed JNDI lookup rather than a centralized one; etc.), but we have not as yet done so.

Throughout we use our server1, server2, and server3 names as concrete examples of the configuration.
  1. JORAM must be configured for distributed operation, roughly as described in section 3.2 of the JORAM administration guide (http://joram.objectweb.org/current/doc/joram3_7_ADMIN.pdf).
  2. Each separate server machine must have its own instance of JORAM and its own instance of JOnAS.
  3. Each JOnAS instance must be configured (via jonas.properties) to connect to its local JORAM.
  4. The "server 0" JORAM instance must be launched first, followed by its associated JOnAS.  This JOnAS instance, and this JOnAS instance only, is  configured to create the queues and topics used in the system.
  5. The second and third servers must then launch their JORAM and JOnAS (first JORAM, then JOnAS, then on to the next server) instances.
  6. Each JOnAS server must implement a custom service (see http://jonas.objectweb.org/current/doc/Services.html) which, on startup, performs the appropriate configuration for that specific server.  We name this service the JoramDistributionService and provide its source code below.  It performs all the customized configuration described in the permissions discussion above.
  7. Since the configuration varies from server to server, the JoramDistributionService must read configuration information from a local configuration file.  We place this file in the $JONAS_BASE/conf directory, from where it is loadable as a classloader resource.  (This is a little-known JOnAS technique and it is not clear that it is guaranteed to work! -- if you know otherwise, please let me know: robj at nimblefish dot com.)
Summing up, the total configuration elements involved are:
  1. $JONAS_BASE/conf/a3servers.xml -- the JORAM configuration file which specifies the distributed JORAM configuration.  This file is identical on all participating servers.
  2. $JONAS_ROOT/bin/<platform>/JmsServer -- the JORAM launch script which starts up JORAM.  This varies on each server: the startup arguments (e.g. "0 ./s0", "1 ./s1", etc.) initialize the local JORAM instance with knowledge of its role in the JORAM configuration.
  3. $JONAS_BASE/conf/jonas.properties -- the JOnAS configuration file.  On all servers, this is extended to include the initialization of the JoramDistributionService, which must happen after the initialization of the "jms" service, but before the initialization of all deployment services (since application deployment involves subscribing message beans to queues and topics, which must be bound before the deployment can succeed).  On the server which is to host the application's topics and queues, the jonas.properties file also specifies those topics and queues; on all other servers, no topics or queues are created.  Finally, the jms service is configured as non-collocated on all servers, though customized to use the local JORAM instance's URL.
  4. $JONAS_BASE/conf/joramdist.properties -- the configuration file for the JoramDistributionService.  This contains properties specifying the local JORAM's port number and server id, which server is hosting the application's topics and queues, and which topics and queues should be bound locally.
Note that the JoramDistributionService must be built and installed in $JONAS_BASE before JOnAS itself can be launched!

The Full Configuration

Here we give the relevant portions of the configuration for our system, in completely specific detail.  Our application uses only queues (at the moment).

a3servers.xml:
<?xml version="1.0"?>
<config>
  <domain name="D1"/>

  <server id="0" name="S0" hostname="server1">
    <network domain="D1" port="16301"/>
    <service class="fr.dyade.aaa.ns.NameService"/>
    <service class="fr.dyade.aaa.mom.dest.AdminTopic"/>
    <service class="fr.dyade.aaa.mom.proxies.tcp.ConnectionFactory"
             args="16010 root root"/>
  </server>

  <server id="1" name="S1" hostname="server2">
    <network domain="D1" port="16302"/>
    <service class="fr.dyade.aaa.mom.dest.AdminTopic"/>
    <service class="fr.dyade.aaa.mom.proxies.tcp.ConnectionFactory"
             args="16011 root root"/>
  </server>

  <server id="2" name="S2" hostname="server3">
    <network domain="D1" port="16303"/>
    <service class="fr.dyade.aaa.mom.dest.AdminTopic"/>
    <service class="fr.dyade.aaa.mom.proxies.tcp.ConnectionFactory"
             args="16012 root root"/>
  </server>
</config>
JmsServer: (the "export" is required for the script to work with the bash shell, which we use on our Linux machines)

server 1:
export JAVA_OPTS="$JAVA_OPTS -DTransaction=fr.dyade.aaa.util.ATransaction -Dfr.dyade.aaa.agent.A3CONF_DIR=$JONAS_BASE/conf"
jclient -cp "$JONAS_ROOT/lib/jonas.jar:$JONAS_ROOT/lib/common/xml/xerces.jar" fr.dyade.aaa.agent.AgentServer 0 ./s0 "$@"
server 2:
export JAVA_OPTS="$JAVA_OPTS -DTransaction=fr.dyade.aaa.util.ATransaction -Dfr.dyade.aaa.agent.A3CONF_DIR=$JONAS_BASE/conf"
jclient -cp "$JONAS_ROOT/lib/jonas.jar:$JONAS_ROOT/lib/common/xml/xerces.jar" fr.dyade.aaa.agent.AgentServer 1 ./s1 "$@"
server 3:
export JAVA_OPTS="$JAVA_OPTS -DTransaction=fr.dyade.aaa.util.ATransaction -Dfr.dyade.aaa.agent.A3CONF_DIR=$JONAS_BASE/conf"
jclient -cp "$JONAS_ROOT/lib/jonas.jar:$JONAS_ROOT/lib/common/xml/xerces.jar" fr.dyade.aaa.agent.AgentServer 2 ./s2 "$@"
The Transaction argument specifies the persistence mode of the JORAM server being started.  The fr.dyade.aaa.util.ATransaction mode provides persistence: when a server (server "s1" for example) is started, a persistence root (./s1) is created.  If s1 is restarted after a crash, the information contained in this directory is used to recover the pre-crash state.  To start a completely fresh platform, all servers' persistence roots should be removed.

To start non-persistent servers (which give better performance), set the mode to fr.dyade.aaa.util.NullTransaction.

jonas.properties: (we show only the portions which vary from the default)

server 1:
jonas.services	registry,jmx,jtm,dbm,security,jms,resource,joramdist,ejb,web,ear
jonas.service.joramdist.class com.nimblefish.sdk.jonas.JoramDistributionService
jonas.service.jms.collocated false
jonas.service.jms.url joram://localhost:16010
jonas.service.jms.topics
jonas.service.jms.queues WorkManagerQueue,StatusManagerQueue,InternalWorkQueue,ExternalWorkQueue
server 2:
jonas.services	registry,jmx,jtm,dbm,security,jms,resource,joramdist,ejb,web,ear
jonas.service.joramdist.class com.nimblefish.sdk.jonas.JoramDistributionService
jonas.service.jms.collocated false
jonas.service.jms.url joram://localhost:16011
#jonas.service.jms.topics
#jonas.service.jms.queues
server 3:
jonas.services	registry,jmx,jtm,dbm,security,jms,resource,joramdist,ejb,web,ear
jonas.service.joramdist.class com.nimblefish.sdk.jonas.JoramDistributionService
jonas.service.jms.collocated false
jonas.service.jms.url joram://localhost:16012
#jonas.service.jms.topics
#jonas.service.jms.queues
joramdist.properties:

server 1:
joram.createanonuser=false
joram.port=16010
joram.instance=0
joram.bindremotehost=localhost
joram.bindremotequeues=WorkManagerQueue,StatusManagerQueue,InternalWorkQueue,ExternalWorkQueue
server 2:
joram.createanonuser=true
joram.port=16011
joram.instance=1
joram.bindremotehost=server1
joram.bindremotequeues=WorkManagerQueue,StatusManagerQueue,InternalWorkQueue,ExternalWorkQueue
server 3:
joram.createanonuser=true
joram.port=16012
joram.instance=2
joram.bindremotehost=server1
joram.bindremotequeues=WorkManagerQueue,StatusManagerQueue,InternalWorkQueue,ExternalWorkQueue
It is a bit odd that server 1, which hosts the queues locally, has a "bindremotequeues" property.  In practice, the code that reads "bindremotequeues" also sets permissions, and only binds the queues locally if bindremotehost is something other than "localhost".  In other words, the code was originally written before the permissions issue came to light, so the names are a bit stale :-)

The JoramDistributionService

The only remaining piece to describe is the JoramDistributionService itself.  Here it is.  As mentioned, we do not use topics in our system; adding code to handle topic permission and binding would be completely straightforward.

package com.nimblefish.sdk.jonas;

import org.objectweb.jonas.service.Service;
import org.objectweb.jonas.service.ServiceException;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

import java.util.Enumeration;
import java.util.Properties;
import java.io.InputStream;
import java.io.IOException;

import javax.naming.Context;
import javax.jms.Destination;

import fr.dyade.aaa.joram.admin.AdminItf;

/**
* This class implements the JOnAS service interface and performs
* JOnAS-startup-time configuration actions relating to distributed JORAM servers.
*
* It uses a properties file named "joramdist.properties" to configure its activity;
* this configuration file must be in $JONAS_BASE/conf (which is part of the JOnAS
* classpath, and hence is reachable as a classloader resource from this class.)
* This of course can be changed at your discretion.
*
* See http://jonas.objectweb.org/current/doc/Services.html
*
* Written by Rob Jellinghaus (robj at nimblefish dot com) on 11 February 2004.
* Thanks very much to Frederic Maistre (frederic.maistre at objectweb dot org)
* for his indispensable and voluminous help.
* This file is hereby placed into the public domain for use by JOnAS users at their
* sole discretion; please include this comment text in your uses of this code.
*/
public class JoramDistributionService implements Service {
    private static Log log = LogFactory.getLog(JoramDistributionService.class);

    private boolean createAnonUser = false;
    private int joramPort = -1;
    private int joramInstance = -1;
    private String joramBindHost = null;
    private String[] joramBindQueues = null;

    private boolean started = false;
    private String name;

    public void init(Context context) throws ServiceException {
        log.info("JoramDistributionService initializing");
        try {
            InputStream propStream = JoramDistributionService.class.getClassLoader()
                    .getResourceAsStream("joramdist.properties");
            if (propStream == null) {
                throw new ServiceException(
                        "Could not find joramdist.properties on the classpath ($JONAS_BASE/conf)");
            }
            Properties joramProperties = new Properties();
            joramProperties.load(propStream);
            Enumeration props = joramProperties.propertyNames();
            while (props.hasMoreElements()) {
                String s = (String) props.nextElement();
                log.info("joramdist.properties property: " + s + ": " + joramProperties.getProperty(s));
            }

            if (joramProperties.containsKey("joram.createanonuser")
                    && joramProperties.getProperty("joram.createanonuser").equals("true")) {
                createAnonUser = true;
            }

            if (joramProperties.containsKey("joram.port")) {
                joramPort = Integer.parseInt(joramProperties.getProperty("joram.port"));
            }

            if (joramProperties.containsKey("joram.instance")) {
                joramInstance = Integer.parseInt(joramProperties.getProperty("joram.instance"));
            }

            if (joramProperties.containsKey("joram.bindremotehost")) {
                joramBindHost = joramProperties.getProperty("joram.bindremotehost");
            }

            if (joramProperties.containsKey("joram.bindremotequeues")) {
                joramBindQueues = joramProperties.getProperty("joram.bindremotequeues").split(",");
            }

        } catch (IOException e) {
            throw new ServiceException("Could not initialize JoramDistributionService", e);
        }
    }

    public void start() throws ServiceException {
        started = true;

        if (joramPort == -1 && joramInstance == -1) {
            log.info("No joram.port or joram.instance defined; performing no JORAM configuration.");
            return;
        }

        try {
            if (joramPort != -1) {
                // Connect to the local JORAM server as the administrator declared in a3servers.xml.
                AdminItf admin = new fr.dyade.aaa.joram.admin.AdminImpl();
                admin.connect("localhost", joramPort, "root", "root", 60);

                if (createAnonUser) {
                    log.info("Creating JORAM anonymous user on localhost:" + joramPort +
                            " for instance " + joramInstance + "...");
                    admin.createUser("anonymous", "anonymous", joramInstance);
                    log.info("Created JORAM anonymous user.");
                }

                if (joramBindHost != null && joramBindQueues != null) {
                    log.info("Looking up JNDI queues from rmi://" + joramBindHost + ":1099");
                    javax.naming.Context jndiCtx;

                    java.util.Hashtable env = new java.util.Hashtable();
                    env.put(Context.INITIAL_CONTEXT_FACTORY,
                            "com.sun.jndi.rmi.registry.RegistryContextFactory");
                    env.put(Context.PROVIDER_URL, "rmi://" + joramBindHost + ":1099");
                    jndiCtx = new javax.naming.InitialContext(env);

                    Object[] remoteQueues = new Object[joramBindQueues.length];
                    for (int i = 0; i < joramBindQueues.length; i++) {
                        String joramBindQueue = joramBindQueues[i];
                        remoteQueues[i] = jndiCtx.lookup(joramBindQueue);
                        log.debug("Got queue " + joramBindQueue + ": " + remoteQueues[i]);

                        // open up all queues to everyone
                        admin.setFreeReading((Destination) remoteQueues[i]);
                        admin.setFreeWriting((Destination) remoteQueues[i]);
                    }

                    // if we are on the local host, don't rebind
                    if (!joramBindHost.equals("localhost")) {
                        env.put(Context.PROVIDER_URL, "rmi://localhost:1099");
                        jndiCtx = new javax.naming.InitialContext(env);

                        for (int i = 0; i < joramBindQueues.length; i++) {
                            jndiCtx.bind(joramBindQueues[i], remoteQueues[i]);
                            log.debug("Bound " + joramBindQueues[i] + " in localhost context");
                        }
                    }
                }

                // Disconnecting the administrator.
                admin.disconnect();
            }

            log.info("Completed JoramDistributionService startup successfully.");
        } catch (Exception e) {
            throw new ServiceException("Could not start JoramDistributionService", e);
        }
    }

    public void stop() throws ServiceException {
        started = false;
        log.info("JoramDistributionService stopped");
    }

    public boolean isStarted() {
        return started;
    }

    public void setName(String s) {
        name = s;
    }

    public String getName() {
        return name;
    }
}

This needs to be built with a simple build task that just compiles the class and packages it into a JAR file; the JAR must then be placed in the $JONAS_ROOT/lib/ext or $JONAS_BASE/lib/ext directory on each server before launching JOnAS.
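
For example, a minimal Ant target along these lines would do it (a sketch only: the src and build directories, and the jonas.root/jonas.base properties, are assumed names you would define for your own layout; the classpath simply sweeps up the jars shipped with JOnAS):

<!-- Compile the service, jar it, and drop it into the local JOnAS lib/ext.
     Adjust the source directory and the classpath fileset to your installation. -->
<target name="joramdist-service">
    <mkdir dir="build/classes"/>
    <javac srcdir="src" destdir="build/classes">
        <classpath>
            <fileset dir="${jonas.root}/lib" includes="**/*.jar"/>
        </classpath>
    </javac>
    <jar destfile="build/joramdist.jar" basedir="build/classes"/>
    <copy file="build/joramdist.jar" todir="${jonas.base}/lib/ext"/>
</target>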

Maintaining the configuration

This is clearly a fairly large number of small configuration files on each server.  We have automated the process of deploying the servers and their configuration via Ant.  Ant 1.6 includes native support for scp and ssh operations, as Ant tasks.  We have used these to build Ant tasks which can literally:
  1. install JOnAS on all our servers,
  2. create JONAS_BASE directories on each server,
  3. copy the server-specific configuration over to each server,
  4. build the JoramDistributionService and deploy it to each server,
  5. launch JORAM and JOnAS on each server in the proper order,
  6. build a customized version of our application for each type of server (i.e. a "frontend" version containing no message beans, and a "backend" version containing only message beans),
  7. deploy the appropriate application version to each of the three servers,
  8. and test the entire system using our system integration test suite.
In fact, we can do all of the above with a single Ant command.

Doing this with Ant is actually quite straightforward.  Without support for automating this deployment process, we would be quite concerned with the complexity of the configuration.  With automation, it is easy to place the whole configuration process under source code control, and it is easy to make controlled changes to the configuration of multiple machines.
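
To give a flavour of what these tasks look like, here is a trimmed-down deployment target for one server (a sketch only: the host name, user, key file, paths, and the restart script are illustrative, not our exact build; scp and sshexec are Ant 1.6 optional tasks and need jsch.jar on Ant's classpath):

<!-- Push server2's configuration and the service jar, then restart its JORAM and JOnAS. -->
<target name="deploy-server2">
    <!-- copy the server-specific configuration files -->
    <scp todir="jonas@server2:/opt/jonas_base/conf"
         keyfile="${user.home}/.ssh/id_dsa" passphrase="" trust="true">
        <fileset dir="conf/server2"/>
    </scp>
    <!-- copy the JoramDistributionService jar -->
    <scp file="build/joramdist.jar"
         todir="jonas@server2:/opt/jonas_base/lib/ext"
         keyfile="${user.home}/.ssh/id_dsa" passphrase="" trust="true"/>
    <!-- the (hypothetical) remote script starts JORAM first, then JOnAS, as described above -->
    <sshexec host="server2" username="jonas"
             keyfile="${user.home}/.ssh/id_dsa" passphrase="" trust="true"
             command="/opt/jonas_base/bin/restart-joram-and-jonas.sh"/>
</target>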

Conclusion

Sorting out all these details was a long job (all the longer as we are located in San Francisco and Frederic, our lifeline, is in France, time-shifted by half a day!).  However, the whole system does work now, and works well.  JOnAS and JORAM are very impressive pieces of work, and the 4.0 releases of both promise to be even more so.

We look forward to continued use of the ObjectWeb platform, and we hope to continue to contribute constructively to the ObjectWeb community.  You may also be interested in our description of using JOnAS with Hibernate, at http://www.hibernate.org/166.html.

Sincerely,
Rob Jellinghaus (robj at nimblefish dot com)