Example — Setting Up a Five-node Cluster

This section describes, step by step, how to create a cluster of two LVS routers and three Web/FTP servers. First, collect the necessary information and set up the five systems as explained in the next section. Then implement the example, either by editing the lvs.cf file with a text editor or by using the Piranha Web Interface.

Figure 8-7 shows the network that will exist after you've set up the LVS routers and real servers. All network addresses shown are for purposes of illustration only; you'll need to use addresses allocated to you by your network administrator.

Figure 8-7. Layout of the Example Network

Preliminary Setup

  1. If you've not done so already, obtain a virtual server IP address. In our example this will be 1.2.3.1. Requests for service at the LVS cluster will be addressed to a fully-qualified domain name associated with this address.

  2. Locate five servers and designate their roles:

    • One primary LVS router

    • One backup LVS router

    • Three real servers

    (Note that you'll also need a client system with a web browser, for testing purposes.)

    The LVS routers must be Linux boxes running Red Hat High Availability Server 1.0 or later. The real servers may be any platform running any operating system and Web server.

    Steps 3 through 14 set up the LVS routers.

  3. On each LVS router, install two Ethernet adapter cards, eth0 and eth1.

  4. Install the product, following the on-screen displays and the instructions in the installation section of this document.

  5. Log into both nodes as root and perform the following basic operations:

    • Execute the /usr/sbin/piranha-passwd script to set an access password for the Piranha Web Interface.

    • Edit the /etc/hosts files and add entries for each of the cluster nodes (sample entries appear after this list).

    • Edit /etc/hosts.allow, and enable access for the appropriate service(s) for all cluster nodes. Use the commented examples in the file as a guideline.

    • Edit the ~root/.rhosts files and add entries for each of the cluster nodes so that the root account can be used with rsh and rcp.

    • You may also want to set up ssh and scp in addition to (or instead of) rsh and rcp. Follow the appropriate instructions for that software.
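
    As an illustration of these entries, the files on the primary router might look like the following. The hostnames node1 and node2 are assumptions for this example (node2 also appears in the Step 6 example); substitute your own names and addresses.

      # /etc/hosts on node1, the primary LVS router
      127.0.0.1      localhost
      1.2.3.2        node1          # primary router, public interface
      1.2.3.3        node2          # backup router, public interface
      192.168.1.1    node1-priv     # primary router, private interface
      192.168.1.2    node2-priv     # backup router, private interface

      # ~root/.rhosts on node1 (the copy on node2 lists node1 instead)
      node2 root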

  6. Make sure that each system can ping the other by IP address and name. At this point, copying files between the systems using rcp (or scp if set up) when logged in as root should also work. As an example, the following command should work (assuming you will be using rcp):
    rcp myfile node2:/tmp/myfile
                  

  7. Configure Apache on both nodes by editing /etc/httpd/conf/httpd.conf, and setting the ServerName parameter appropriately. Start Apache by using the /etc/rc.d/init.d/httpd script, and passing the start or restart parameter as appropriate.
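
    For example, to start Apache for the first time, or to restart it after changing httpd.conf:

      /etc/rc.d/init.d/httpd start      # if httpd is not yet running
      /etc/rc.d/init.d/httpd restart    # if httpd is already running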

    Although this should have been done for you when you installed Red Hat High Availability Server, the following configuration sections must be present in order for the Piranha Web Interface to work properly. First, this entry should be in the /etc/httpd/conf/httpd.conf file:

    <Directory /home/httpd/html/piranha>
      Options All
      AllowOverride All
    </Directory>
                

    You should also find the following entries in the same file:

    LoadModule php3_module        modules/libphp3.so
    AddModule mod_php3.c
    …
    <IfModule mod_php3.c>
      AddType application/x-httpd-php3 .php3
      AddType application/x-httpd-php3-source .phps
    </IfModule>
                
  8. Log into the client system and start up the web browser. Access the URL http://xxxxx/piranha/, where xxxxx is the hostname or IP address of the PRIMARY node in the cluster. You should see the configuration page for the Piranha software.

    Configure the software as needed for your setup, following the information detailed in other sections of this document. Your changes should be present in the file /etc/lvs.cf on that cluster node.

  9. Make sure the configuration files on all nodes are identical by copying the new file from the primary node to the other node with rcp or scp (for example):
    rcp /etc/lvs.cf node:/etc/lvs.cf
                  
    If this does not work, you will have to investigate the rsh or ssh configuration changes you made earlier.

    Changes to the configuration file will require that the file be re-copied to all the nodes, and that Piranha be stopped and restarted. (Note: A future release of Piranha may automate this process.)
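
    For a later change, the sequence might look like the following sketch (node2 stands for the other router's hostname, as in Step 6; pulse, the Piranha daemon, is not started until Step 23):

      rcp /etc/lvs.cf node2:/etc/lvs.cf    # re-copy the edited file
      /etc/rc.d/init.d/pulse stop          # then restart Piranha on
      /etc/rc.d/init.d/pulse start         # each router where it runs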

  10. Create a public IP interface on eth0 and a private IP interface on eth1. The public interface device (eth0) is the heartbeat device. The virtual server address is aliased to this device.

    Interface   Primary node   Backup node
    eth0        1.2.3.2        1.2.3.3
    eth1        192.168.1.1    192.168.1.2
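
    One way to do this by hand on the primary node is with ifconfig (the 255.255.255.0 netmasks are an assumption for this example; on Red Hat systems these settings are normally made persistent in /etc/sysconfig/network-scripts/ifcfg-eth0 and ifcfg-eth1):

      ifconfig eth0 1.2.3.2 netmask 255.255.255.0 up
      ifconfig eth1 192.168.1.1 netmask 255.255.255.0 up

    On the backup node, use 1.2.3.3 and 192.168.1.2 instead.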

  11. Designate an IP address (192.168.1.254) for the router device (eth1) connecting the active LVS router to the private network. This floating IP address will be aliased to the router device as eth1:1, and will be the gateway to the private network and the default route used by each real server to communicate with the active router.

  12. On each LVS router:

    1. Enable packet forwarding. To do this at system boot, make sure the file /etc/sysctl.conf contains the line net.ipv4.ip_forward = 1. To enable packet forwarding without rebooting, as root issue this command:
      echo 1 > /proc/sys/net/ipv4/ip_forward
                        

    2. Enable packet defragmenting. To do this at system boot, make sure the file /etc/sysctl.conf contains the line net.ipv4.ip_always_defrag = 1. To enable packet defragmenting without rebooting, as root issue this command:
      echo 1 > /proc/sys/net/ipv4/ip_always_defrag
                        

    3. Masquerade the private network. Issue this command and put it in /etc/rc.d/rc.local:
      ipchains -A forward -j MASQ -s 192.168.1.0/24 -d 0.0.0.0/0
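
      To confirm that the masquerading rule is in place, you can list the
      forward chain:

        ipchains -L forward -n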
                        

  13. Decide whether to use rsh or ssh for synchronizing LVS cluster files. Verify that your choice is installed such that the LVS routers can log in to one another as root without administrator intervention (see Table 8-2). In this example, we will choose rsh.
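
    A quick test of this passwordless access is to run a harmless command on the peer router; it should complete without a password prompt (node2 is the assumed peer hostname, as in Step 6; substitute ssh if that is your choice):

      rsh node2 uptime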

  14. On each LVS router, make sure that Web service is configured and running, so the Piranha Web Interface can be used.

    Note: It is recommended that the http service on the routers be configured to use a different port than that used by http on the real servers. This will help prevent accidental conflicts or use of the wrong http service by client browsers.

    Steps 15 through 18 set up the real servers.

  15. On each real server, install an Ethernet network card, eth0, and create an IP address on the same private subnet as in Step 10. Next, assign a weight to each server indicating its processing capacity relative to that of the others. In this example, rs1 has twice the capacity (two processors) of rs2 and rs3.

                rs1            rs2            rs3
    eth0        192.168.1.3    192.168.1.4    192.168.1.5
    weight      2000           1000           1000

  16. On each real server, verify that the address named in Step 11 (192.168.1.254) is the real server's default route for communicating with the active LVS router.
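
    On a Linux real server, for example, you can inspect the routing table and add the default route if it is missing (a sketch; non-Linux real servers have their own equivalents):

      /sbin/route -n                       # the destination 0.0.0.0 line
                                           # should list gateway 192.168.1.254
      route add default gw 192.168.1.254   # add the route if it is absent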

  17. Decide which program (uptime, ruptime, or rup) the active router will use to monitor the workload on the real servers. If you choose uptime, each LVS router must be able to connect to each real server without administrator intervention, using the program you selected in Step 13; see Table 8-2 for general instructions. If the selected program cannot be enabled (for example, because one of the real servers is an NT box), the scheduling algorithms that use dynamic load information will still work, but the user-assigned weights will be applied statically rather than adjusted dynamically based on load.

  18. Verify that an httpd server is installed, configured, and running on each real server. Note that the real servers must listen on the same port (80 in this example) as the corresponding virtual server.
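
    One simple check, run from a router or another host on the private network, is to connect to each real server's listening port by hand (rs1 is shown; repeat for rs2 and rs3):

      telnet 192.168.1.3 80

    Typing GET / HTTP/1.0 and pressing Enter twice should return a response beginning with HTTP, which is exactly the exchange the send and expect strings in the Step 21 configuration file use to test the servers.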

  19. Verify (for example, using telnet or ping) that each real server can reach hosts on the public LAN. If a real server on the private network cannot reach a host on your LAN, this probably indicates a communication failure between the server and the active router. See Steps 15 and 16.

  20. Determine the runtime parameters. For some of these, you may need to experiment over time to obtain optimal values. In this example, we will use the values listed.

    Parameter        Value   Description
    heartbeat_port   539     Number of the heartbeat listening port on the primary and backup routers.
    keepalive        6       Number of seconds between heartbeats.
    deadtime         18      Number of seconds to wait for a non-responding router to respond before initiating failover. Should be a multiple of keepalive.
    timeout          10      Number of seconds to wait for a non-responding real server to respond before removing it from the routing table.
    reentry          180     When a real server that has been removed from the routing table starts responding again, wait this number of seconds before re-adding it to the routing table.
    scheduler        wlc     Use the weighted least-connections load-balancing algorithm (assign more jobs to servers that are least busy relative to their load-adjusted weight). See Table 8-1 for a description of the choices.
    port             80      Virtual server port number. The listening port selected for the virtual server is used on the real servers as well.

  21. Now we are ready to implement the example. You can do this by creating the configuration file with a text editor, or you can use the Piranha Web Interface configuration tool as explained in Chapter 9. Using the Piranha Web Interface is highly recommended.

    Here is the resulting file. The number to the right of most lines refers to the step in the section called Preliminary Setup that discusses the setting.

    primary = 1.2.3.2                    10
    service = lvs
    rsh_command = rsh                    13
    backup_active = 1
    backup =  1.2.3.3                    10
    heartbeat = 1
    heartbeat_port = 539                 20
    keepalive = 6                        20
    deadtime = 18                        20
    network = nat
    nat_router = 192.168.1.254 eth1:1    11
    virtual vs1 {                        15      
         active = 1
         address = 1.2.3.1 eth0:1        1
         port = 80                       20
         send = "GET / HTTP/1.0\r\n\r\n"
         expect = "HTTP"
         load_monitor = ruptime          17
         scheduler = wlc                 20
         timeout = 10                    20
         reentry = 180                   20
         server rs1 {                    15
             address = 192.168.1.3       15
             active = 1
             weight = 2000               15
         }
         server rs2 {                    15
             address = 192.168.1.4       15
             active = 1
             weight = 1000               15
         }
         server rs3 {                    15
             address = 192.168.1.5       15
             active = 1
             weight = 1000               15
         }
    }
                
  22. Copy the edited configuration file to the backup router:

    rcp /etc/lvs.cf bkuprtr:/etc/lvs.cf
                
  23. On the primary router, start the pulse daemon with this command:

    /etc/rc.d/init.d/pulse start
                
  24. Start pulse on the backup router, using the same command.

  25. Use the ps ax command (or run the Piranha Web Interface) to monitor the running system.
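
    For example, the following should show pulse on both routers; on the active router you should normally also see the lvs daemon and one nanny monitoring process per real server:

      ps ax | grep -E 'pulse|lvs|nanny'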