Chapter 7. Failover Services (FOS)

An Overview of FOS

The Piranha technology that is part of Red Hat High Availability Server consists of two clustering technologies: Linux Virtual Server (LVS), which is described in Chapter 8, and Failover Services (FOS), which is the subject of this chapter.

With FOS, you can set up a two-node Linux cluster consisting of an active system and a standby system. The nodes monitor each other and, if any of the specified IP services (ftp, http, etc.) fails on the active system, the services are switched over to and provided by the standby system. No additional systems or devices are required, and except for the temporary loss of the service(s) during the failure, the failover process is transparent to end users.

A Piranha FOS cluster consists of two nodes: one functions as the active provider of IP services to the public client network, while the other (the inactive node) monitors those services and operates as a standby system. When any service on the active node becomes unresponsive, the inactive node becomes active and provides those services instead. Services are automatically started and stopped as needed during the transition. If the failed system comes back online, it becomes the new inactive node and monitors for failures on the now-active system.
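
At its core, this monitoring is a periodic availability test of each configured service. The short Python fragment below is a conceptual sketch only; it is not FOS code, and the host name and port used are placeholders. It illustrates the kind of direct TCP connection check on which the failover decision is based.

    # Conceptual sketch only; FOS performs this style of check internally.
    # The host name and port below are placeholders, not FOS defaults.
    import socket

    def service_alive(host, port, timeout=10):
        """Return True if a direct TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # Run on the standby node: if the active node's web service stops
    # answering, this is the point at which failover would be initiated
    # (take over the service address and start the local daemon).
    if not service_alive("active-node.example.com", 80):
        print("active node unresponsive -- initiating failover")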

Features

FOS provides the following features:

  • Any IP service that supports a direct socket connection can be monitored and can be migrated to the inactive node during failover. This includes user-created services. Failover occurs when the service becomes unresponsive to FOS monitoring; this is independent of the number of users actually using the service. Possible services include:

    • Web (http)

    • ftp, inetd

    • telnet

    • lpd

    • smtp/sendmail

    • ssh

    • LDAP

    • Firewall services

    • Other IP or user services

  • The system administrator can specify special send/expect script strings as part of service monitoring for increased assurance that the service is functioning properly (a conceptual sketch of such a check appears after this list).

  • FOS automatically starts and stops the monitored service as part of failover. System administrators can specify the start and stop commands or scripts (with arguments) for each monitored service. Custom scripts are also permitted.

  • Although the nodes need to be identical in terms of FOS operation and configuration, the clustered systems do not need to be dedicated entirely to FOS. They can be used for additional purposes beyond the services being monitored.

  • Although currently limited to clusters of two nodes, multiple independent clusters are possible [1].

  • Each service can be defined as having a unique IP address, independent of the cluster's node addresses. This makes it easier to migrate an existing environment, in which IP services are provided by several separate servers, to one in which a single, more fault-tolerant cluster provides those services, while keeping the change transparent to end users.
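
To make the send/expect style of monitoring more concrete, the following Python fragment sketches what such a check amounts to. It is illustrative only and is not the FOS implementation; the host, port, and send/expect strings are hypothetical examples of the values an administrator might configure for a web service.

    # Illustrative sketch of a send/expect service check; not FOS code.
    import socket

    def send_expect_check(host, port, send_str, expect_str, timeout=10):
        """Connect directly to the service, send the configured string,
        and verify that the reply contains the expected text."""
        try:
            with socket.create_connection((host, port), timeout=timeout) as sock:
                sock.sendall(send_str.encode("ascii"))
                reply = sock.recv(4096).decode("ascii", errors="replace")
                return expect_str in reply
        except OSError:
            return False

    # Hypothetical values: confirm the web server actually answers HTTP
    # rather than merely accepting the connection.
    ok = send_expect_check("www.example.com", 80, "GET / HTTP/1.0\r\n\r\n", "HTTP")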

Current Restrictions

FOS has the following restrictions:

  • Specified services are monitored and fail over as a group. Services do not fail over individually, nor are they load-balanced between the systems.

  • Only services supporting direct socket connections can be monitored. Services requiring connections to secondary ports apart from the listening port cannot be monitored. Services not currently supported include:

    • nfs

    • ntp/daytime

  • Shared data services (pop, imap, smtp) must have their data NFS-mounted from a common exported source in order to maintain data delivery.

  • Send/expect strings are limited to printable text characters only (plus \r and \n). Binary data cannot be explicitly sent or tested.

  • FOS must currently start and stop the monitored services as part of the failover process to ensure reliability. The services cannot already be running on the inactive node. This may reduce the usefulness of the inactive node while it is operating as a standby system.

  • Only two-node clusters are supported. Both nodes must be Linux systems.

  • Because several IP services are handled by the inetd daemon rather than by individual daemons, there are situations where a non-FOS-configured service can be affected by FOS if inetd is involved. For example, if you choose to monitor ftp, and ftp is started and stopped by inetd, then when FOS shuts down inetd, other services inetd provides (such as rsh) also become unavailable on that system.

  • The Piranha Web Interface does not, at present, provide a means to copy the changed configuration file to the other node in the cluster, nor an option for restarting FOS so it can use the updated file. These operations must be done manually. This restriction is expected to be removed in a future release.

Software Location

The FOS software is contained in the Piranha RPM files. The main RPM (called piranha) contains the FOS binaries, the piranha-gui RPM contains the Piranha Web Interface for configuring the system, and the ipvsadm RPM contains the ipvsadm program, which is used to administer the virtual-server-specific aspects of Red Hat High Availability Server. FOS documentation and source code are also in the piranha-docs RPM and the piranha source RPM, respectively.

To obtain possible updates of these (or other) packages, visit http://www.redhat.com/apps/support/updates.html on the Red Hat website.

Base Requirements

FOS requires two identical (or near-identical) Linux systems, both accessible from the public client network. In addition, all services being handled by FOS must be configured identically on both systems. This is because both systems must operate from identical FOS configuration files.

The source configuration file is created and maintained using the Piranha Web Interface. This interface requires that Apache and PHP be installed and configured on both cluster nodes. Details concerning the Piranha Web Interface can be found in Chapter 9.

Notes

[1] If clusters of more than two nodes are required, LVS must be used instead of FOS. For more information on LVS, please turn to Chapter 8.