
Chapter 1. Introduction and Quick Start

1.1. Quick Start Guide
1.1.1. Initial Preparation
1.1.2. Launching a JBoss AS Cluster
1.1.3. Web Application Clustering Quick Start
1.1.4. EJB Session Bean Clustering Quick Start
1.1.5. Entity Clustering Quick Start

Clustering allows you to run an application on several parallel servers (a.k.a. cluster nodes) while providing a single view to application clients. Load is distributed across the different servers, and even if one or more of the servers fail, the application remains accessible via the surviving cluster nodes. Clustering is crucial for scalable enterprise applications, since you can improve performance simply by adding more nodes to the cluster. It is equally crucial for highly available enterprise applications, since it is the clustering infrastructure that provides the redundancy needed for high availability.

The JBoss Application Server (AS) comes with clustering support out of the box, as part of the all configuration. The all configuration includes support for the following:

  • A fault-tolerant, cluster-wide JNDI service (HA-JNDI).

  • Replication of HTTP session state for clustered web applications.

  • Clustering of EJB session beans.

  • A distributed, cluster-aware second level cache for JPA/Hibernate entities.

  • HA singleton services (services deployed on multiple nodes but active on only one at a time) and cluster-wide ("farm") deployment.

In this Clustering Guide we aim to provide you with an in-depth understanding of how to use JBoss AS's clustering features. In this first part of the guide, the goal is to provide some basic "Quick Start" steps to encourage you to start experimenting with JBoss AS Clustering, and then to provide some background information that will allow you to understand how JBoss AS Clustering works. The next part of the guide explains in detail how to use these features to cluster your Java EE services. Finally, we provide more details about advanced configuration of JGroups and JBoss Cache, the core technologies that underlie JBoss AS Clustering.

The goal of this section is to give you the minimum information needed to let you get started experimenting with JBoss AS Clustering. Most of the areas touched on in this section are covered in much greater detail later in this guide.

Preparing a set of servers to act as a JBoss AS cluster involves a few simple steps:

  • Install JBoss AS on each machine that will host a cluster node. (The scenarios below assume it is installed in /var/jboss.)

  • Determine the IP address each node should bind its sockets to; this is passed to the server with the -b switch.

  • Determine a unique integer "ServerPeerID" for each node. JBoss Messaging requires that every node in the cluster have a distinct ServerPeerID, passed via the jboss.messaging.ServerPeerID system property.

  • If more than one node will run on the same machine, copy the server/all configuration once for each node (for example, copy $JBOSS_HOME/server/all to $JBOSS_HOME/server/node1 and $JBOSS_HOME/server/node2) so that each instance has its own work area.

Beyond the above required steps, the following two optional steps are recommended to help ensure that your cluster is properly isolated from other JBoss AS clusters that may be running on your network:

  • Pick a unique name for your cluster; this is passed to the server with the -g switch.

  • Pick a unique multicast address for intra-cluster communication; this is passed to the server with the -u switch.

See Section 10.2.2, “Isolating JGroups Channels” for more on isolating clusters.

The simplest way to start a JBoss server cluster is to start several JBoss instances on the same local network, using the -c all command line option for each instance. Those server instances will detect each other and automatically form a cluster.

Let's look at a few different scenarios for doing this. In each scenario we'll be creating a two-node cluster, where the ServerPeerID for the first node is 1 and for the second node is 2. We've decided to call our cluster "DocsPartition" and to use 239.255.100.100 as our multicast address. These scenarios are meant to be illustrative; the use of a two-node cluster shouldn't be taken to mean that two nodes is the ideal cluster size. It's simply the easiest way to present the examples.

  • Scenario 1: Nodes on Separate Machines

    This is the most common production scenario. Assume the machines are named "node1" and "node2", with node1 having an IP address of 192.168.0.101 and node2 an address of 192.168.0.102. Assume the "ServerPeerID" for node1 is 1 and for node2 it's 2. Assume JBoss is installed in /var/jboss on each machine.

    On node1, to launch JBoss:

    $ cd /var/jboss/bin
    $ ./run.sh -c all -g DocsPartition -u 239.255.100.100 \
        -b 192.168.0.101 -Djboss.messaging.ServerPeerID=1

    On node2, it's the same except for a different -b value and ServerPeerID:

    $ cd /var/jboss/bin
    $ ./run.sh -c all -g DocsPartition -u 239.255.100.100 \
        -b 192.168.0.102 -Djboss.messaging.ServerPeerID=2

    The -c switch says to use the all config, which includes clustering support. The -g switch sets the cluster name. The -u switch sets the multicast address that will be used for intra-cluster communication. The -b switch sets the address on which sockets will be bound. The -D switch sets the jboss.messaging.ServerPeerID system property, from which JBoss Messaging gets its unique ID.

  • Scenario 2: Two Nodes on a Single, Multihomed Server

    Running multiple nodes on the same machine is a common scenario in a development environment, and is also used in production in combination with Scenario 1. (Running all the nodes of a production cluster on a single machine is generally not recommended, since the machine itself becomes a single point of failure.) In this version of the scenario, the machine is multihomed, i.e. it has more than one IP address. This allows each JBoss instance to be bound to a different address, preventing port conflicts when the nodes open sockets.

    Assume the single machine has the 192.168.0.101 and 192.168.0.102 addresses assigned, and that the two JBoss instances use the same addresses and ServerPeerIDs as in Scenario 1. The difference from Scenario 1 is that we need to ensure each AS instance has its own work area. So, instead of using the all config, we use the node1 and node2 configs we copied from all in the previous section.

    To launch the first instance, open a console window and:

    $ cd /var/jboss/bin
    $ ./run.sh -c node1 -g DocsPartition -u 239.255.100.100 \
        -b 192.168.0.101 -Djboss.messaging.ServerPeerID=1

    For the second instance, it's the same except for different -b and -c values and a different ServerPeerID:

    $ cd /var/jboss/bin
    $ ./run.sh -c node2 -g DocsPartition -u 239.255.100.100 \
        -b 192.168.0.102 -Djboss.messaging.ServerPeerID=2
  • Scenario 3: Two Nodes on a Single, Non-Multihomed Server

    This is similar to Scenario 2, but here the machine only has one IP address available. Two processes can't bind sockets to the same address and port, so we'll have to tell JBoss to use different ports for the two instances. This is done via the ServiceBindingManager service, by setting the jboss.service.binding.set system property.

    To launch the first instance, open a console window and:

    $ cd /var/jboss/bin
    $ ./run.sh -c node1 -g DocsPartition -u 239.255.100.100 \
        -b 192.168.0.101 -Djboss.messaging.ServerPeerID=1 \
        -Djboss.service.binding.set=ports-default

    For the second instance:

    $ cd /var/jboss/bin
    $ ./run.sh -c node2 -g DocsPartition -u 239.255.100.100 \
        -b 192.168.0.101 -Djboss.messaging.ServerPeerID=2 \
        -Djboss.service.binding.set=ports-01

    This tells the ServiceBindingManager on the first node to use the standard set of ports (e.g. JNDI on 1099). The second node uses the "ports-01" binding set, which by default for each port has an offset of 100 from the standard port number (e.g. JNDI on 1199). See the conf/bootstrap/bindings.xml file for the full ServiceBindingManager configuration.

    Note that this setup is not advised for production use, due to the increased management complexity that comes with using different ports. But it is a fairly common scenario in development environments where developers want to use clustering but cannot multihome their workstations.

    Note

    Including -Djboss.service.binding.set=ports-default on the command line for node1 isn't technically necessary, since ports-default is the ... default. But using a consistent set of command line arguments across all servers is helpful to people less familiar with all the details.

That's it; that's all it takes to get a cluster of JBoss AS servers up and running.

JBoss AS supports clustered web sessions, where a backup copy of each user's HttpSession state is stored on one or more nodes in the cluster. In case the primary node handling the session fails or is shut down, any other node in the cluster can handle subsequent requests for the session by accessing the backup copy. Web tier clustering is discussed in detail in Chapter 8, HTTP Services.

There are two aspects to setting up web tier clustering:

  • Configuring an External Load Balancer. Web applications require an external load balancer to balance HTTP requests across the cluster of JBoss AS instances (see Section 2.2.2, “External Load Balancer Architecture” for more on why that is). JBoss AS itself doesn't act as an HTTP load balancer. So, you will need to set up a hardware or software load balancer. There are many possible load balancer choices, so how to configure one is really beyond the scope of a Quick Start. But see Section 8.1, “Configuring load balancing using Apache and mod_jk” for details on how to set up the popular mod_jk software load balancer.

  • Configuring Your Web Application for Clustering. This aspect involves telling JBoss you want clustering behavior for a particular web app, and it couldn't be simpler. Just add an empty distributable element to your application's web.xml file:

    <?xml version="1.0"?> 
    <web-app  xmlns="http://java.sun.com/xml/ns/javaee"
              xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
              xsi:schemaLocation="http://java.sun.com/xml/ns/javaee 
                                  http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd" 
              version="2.5">
              
        <distributable/>
        
    </web-app>

    Simply doing that is enough to get the default JBoss AS web session clustering behavior, which is appropriate for most applications. See Section 8.2, “Configuring HTTP session state replication” for more advanced configuration options.
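
    If the defaults don't suit your application, session replication can be tuned further in a WEB-INF/jboss-web.xml descriptor. The snippet below is only an illustrative sketch (the values shown match the usual defaults); Section 8.2 describes the full set of options:

    <?xml version="1.0"?>
    <!-- WEB-INF/jboss-web.xml: illustrative sketch of session replication tuning -->
    <jboss-web>
        <replication-config>
            <!-- When to replicate: on attribute writes and on reads of non-primitive (mutable) attributes -->
            <replication-trigger>SET_AND_NON_PRIMITIVE_GET</replication-trigger>
            <!-- What to replicate: the whole session rather than individual attributes -->
            <replication-granularity>SESSION</replication-granularity>
        </replication-config>
    </jboss-web>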

One of the big improvements in the clustering area in JBoss AS 5 is the use of the new Hibernate/JBoss Cache integration for second level entity caching that was introduced in Hibernate 3.3. In the JPA/Hibernate context, a second level cache refers to a cache whose contents are retained beyond the scope of a transaction. A second level cache may improve performance by reducing the number of database reads. You should always load test your application with second level caching enabled and disabled to see whether it has a beneficial impact on your particular application.

If you use more than one JBoss AS instance to run your JPA/Hibernate application and you use second level caching, you must use a cluster-aware cache. Otherwise a cache on server A will still hold out-of-date data after activity on server B updates some entities.

JBoss AS provides a cluster-aware second level cache based on JBoss Cache. To tell JBoss AS's standard Hibernate-based JPA provider to enable second level caching with JBoss Cache, configure your persistence.xml as follows:

<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="http://java.sun.com/xml/ns/persistence"
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation="http://java.sun.com/xml/ns/persistence
   http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd"
   version="1.0"> 
   <persistence-unit name="somename" transaction-type="JTA">
      <jta-data-source>java:/SomeDS</jta-data-source>
      <properties>
         <property name="hibernate.cache.use_second_level_cache" value="true"/>
         <property name="hibernate.cache.region.factory_class" 
                   value="org.hibernate.cache.jbc2.JndiMultiplexedJBossCacheRegionFactory"/>
         <property name="hibernate.cache.region.jbc2.cachefactory" value="java:CacheManager"/>
         <!-- Other configuration options ... -->
      </properties>
   </persistence-unit>
</persistence>
           

That tells Hibernate to use the JBoss Cache-based second level cache, but it doesn't tell it what entities to cache. That can be done by adding the org.hibernate.annotations.Cache annotation to your entity class:

package org.example.entities;
 
import java.io.Serializable;
import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;
 
@Entity
@Cache(usage = CacheConcurrencyStrategy.TRANSACTIONAL)
public class Account implements Serializable
{
   @Id
   private Integer id;

   // ... other persistent fields, getters and setters ...
}

See Chapter 7, Clustered Entity EJBs for more advanced configuration options and details on how to configure the same thing for a non-JPA Hibernate application.
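
For a rough idea of what the non-JPA case looks like, the same Hibernate properties shown in the persistence.xml above can also be set in a plain Hibernate application's hibernate.cfg.xml. The snippet below is only a minimal sketch, assuming the rest of the SessionFactory configuration (connection settings, mappings) is already in place; Chapter 7 has the authoritative details:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE hibernate-configuration PUBLIC
   "-//Hibernate/Hibernate Configuration DTD 3.0//EN"
   "http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd">
<hibernate-configuration>
   <session-factory>
      <!-- ... connection, dialect and mapping settings ... -->

      <!-- Enable the JBoss Cache-based second level cache, as in the persistence.xml example above -->
      <property name="hibernate.cache.use_second_level_cache">true</property>
      <property name="hibernate.cache.region.factory_class">org.hibernate.cache.jbc2.JndiMultiplexedJBossCacheRegionFactory</property>
      <property name="hibernate.cache.region.jbc2.cachefactory">java:CacheManager</property>
   </session-factory>
</hibernate-configuration>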

Note

Clustering can add significant overhead to a JPA/Hibernate second level cache, so don't assume that because second level caching benefits a non-clustered application it will also benefit a clustered one. Even if clustered second level caching is beneficial overall, caching of more frequently modified entity types may be beneficial in a non-clustered scenario but not in a clustered one. Always load test your application.