Use the clusvcadm utility to relocate, migrate, or stop each HA service running on the node that is being deleted from the cluster. For information about using clusvcadm, refer to Section 6.3, “Managing High-Availability Services”.
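For instance, to relocate the example_apache service (shown in the clustat output later in this section) to node-01.example.com, or to stop a service outright, commands along the following lines can be used; the service and node names here are simply the ones used elsewhere in this chapter:

[root@example-01 ~]# clusvcadm -r example_apache -m node-01.example.com
[root@example-01 ~]# clusvcadm -d example_apache2

The -r option relocates a running service (optionally to the member given with -m), and -d disables (stops) a service.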
At the node that is being deleted from the cluster, stop the cluster software. For example:

[root@example-01 ~]# service rgmanager stop
Stopping Cluster Service Manager:                          [  OK  ]
[root@example-01 ~]# service gfs2 stop
Unmounting GFS2 filesystem (/mnt/gfsA):                    [  OK  ]
Unmounting GFS2 filesystem (/mnt/gfsB):                    [  OK  ]
[root@example-01 ~]# service clvmd stop
Signaling clvmd to exit                                    [  OK  ]
clvmd terminated                                           [  OK  ]
[root@example-01 ~]# service cman stop
Stopping cluster:
   Leaving fence domain...                                 [  OK  ]
   Stopping gfs_controld...                                [  OK  ]
   Stopping dlm_controld...                                [  OK  ]
   Stopping fenced...                                      [  OK  ]
   Stopping cman...                                        [  OK  ]
   Waiting for corosync to shutdown:                       [  OK  ]
   Unloading kernel modules...                             [  OK  ]
   Unmounting configfs...                                  [  OK  ]
[root@example-01 ~]#
At any node in the cluster, edit /etc/cluster/cluster.conf to remove the clusternode section of the node that is to be deleted. For example, in Example 6.1, “Three-node Cluster Configuration”, if node-03.example.com is supposed to be removed, then delete the clusternode section for that node. If removing a node (or nodes) causes the cluster to become a two-node cluster, you can add the following line to the configuration file to allow a single node to maintain quorum (for example, if one node fails):
<cman two_node="1" expected_votes="1"/>
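For orientation, a minimal sketch of the relevant portion of the resulting two-node configuration is shown below; the node names and IDs follow the examples used in this chapter, and the per-node fencing configuration is omitted:

  <cman two_node="1" expected_votes="1"/>
  <clusternodes>
    <clusternode name="node-01.example.com" nodeid="1">
      <fence>
        <!-- fencing methods for this node (omitted) -->
      </fence>
    </clusternode>
    <clusternode name="node-02.example.com" nodeid="2">
      <fence>
        <!-- fencing methods for this node (omitted) -->
      </fence>
    </clusternode>
  </clusternodes>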
Update the config_version attribute by incrementing its value (for example, changing from config_version="2" to config_version="3").
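For example, the opening cluster element of the file would change from the first line below to the second; the cluster name mycluster is taken from the clustat output shown later in this section:

<cluster name="mycluster" config_version="2">
<cluster name="mycluster" config_version="3">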
Save /etc/cluster/cluster.conf.
Optionally, validate the updated file against the cluster schema (cluster.rng) by running the ccs_config_validate command. For example:
[root@example-01 ~]# ccs_config_validate
Configuration validates
Run the cman_tool version -r command to propagate the configuration to the rest of the cluster nodes.
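For example, run the command from the node where the file was edited; afterwards, cman_tool version (without -r) should report the new configuration version. The version numbers shown here are only illustrative:

[root@example-01 ~]# cman_tool version -r
[root@example-01 ~]# cman_tool version
6.2.0 config 3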
If removing the node has caused the cluster to transition from more than two nodes to two nodes, restart the cluster software on the remaining nodes. First, at each node, stop the cluster software. For example:

[root@example-01 ~]# service rgmanager stop
Stopping Cluster Service Manager:                          [  OK  ]
[root@example-01 ~]# service gfs2 stop
Unmounting GFS2 filesystem (/mnt/gfsA):                    [  OK  ]
Unmounting GFS2 filesystem (/mnt/gfsB):                    [  OK  ]
[root@example-01 ~]# service clvmd stop
Signaling clvmd to exit                                    [  OK  ]
clvmd terminated                                           [  OK  ]
[root@example-01 ~]# service cman stop
Stopping cluster:
   Leaving fence domain...                                 [  OK  ]
   Stopping gfs_controld...                                [  OK  ]
   Stopping dlm_controld...                                [  OK  ]
   Stopping fenced...                                      [  OK  ]
   Stopping cman...                                        [  OK  ]
   Waiting for corosync to shutdown:                       [  OK  ]
   Unloading kernel modules...                             [  OK  ]
   Unmounting configfs...                                  [  OK  ]
[root@example-01 ~]#
Then, at each node, start the cluster software. For example:

[root@example-01 ~]# service cman start
Starting cluster:
   Checking Network Manager...                             [  OK  ]
   Global setup...                                         [  OK  ]
   Loading kernel modules...                               [  OK  ]
   Mounting configfs...                                    [  OK  ]
   Starting cman...                                        [  OK  ]
   Waiting for quorum...                                   [  OK  ]
   Starting fenced...                                      [  OK  ]
   Starting dlm_controld...                                [  OK  ]
   Starting gfs_controld...                                [  OK  ]
   Unfencing self...                                       [  OK  ]
   Joining fence domain...                                 [  OK  ]
[root@example-01 ~]# service clvmd start
Starting clvmd:                                            [  OK  ]
Activating VG(s):   2 logical volume(s) in volume group "vg_example" now active
                                                           [  OK  ]
[root@example-01 ~]# service gfs2 start
Mounting GFS2 filesystem (/mnt/gfsA):                      [  OK  ]
Mounting GFS2 filesystem (/mnt/gfsB):                      [  OK  ]
[root@example-01 ~]# service rgmanager start
Starting Cluster Service Manager:                          [  OK  ]
[root@example-01 ~]#
At any cluster node, run cman_tool nodes to verify that the nodes are functioning as members in the cluster (signified as "M" in the status column, "Sts"). For example:
[root@example-01 ~]# cman_tool nodes
Node  Sts   Inc   Joined               Name
   1   M    548   2010-09-28 10:52:21  node-01.example.com
   2   M    548   2010-09-28 10:52:21  node-02.example.com
At any node, using the clustat utility, verify that the HA services are running as expected. In addition, clustat displays the status of the cluster nodes. For example:
[root@example-01 ~]# clustat
Cluster Status for mycluster @ Wed Nov 17 05:40:00 2010
Member Status: Quorate

 Member Name                        ID   Status
 ------ ----                        ---- ------
 node-02.example.com                   2 Online, rgmanager
 node-01.example.com                   1 Online, Local, rgmanager

 Service Name               Owner (Last)                 State
 ------- ----               ----- ------                 -----
 service:example_apache     node-01.example.com          started
 service:example_apache2    (none)                       disabled