For information on adding master or node hosts to a cluster, see the Adding hosts to an existing cluster section in the Install and configuration guide.
Deprecating a master host removes it from the OpenShift Origin environment.
The reasons to deprecate or scale down master hosts include hardware re-sizing or replacing the underlying infrastructure.
Highly available OpenShift Origin environments require at least three master hosts and three etcd nodes. Usually, the master hosts are collocated with the etcd services. This topic describes the deprecation process for master hosts with or without collocated etcd.
You should create a backup of the configuration and data files prior to any important task such as deprecating a master host. See the Creating a master host backup and etcd tasks sections for more information.
Ensure that the master and etcd services are always deployed in odd numbers due to the voting mechanisms that take place among those services.
Master hosts run important services, such as the OpenShift Origin API and controllers services (if multiple masters are present). In order to deprecate a master host, these services must be stopped.
The OpenShift Origin API service is an active/active service, so stopping the service does not affect the environment as long as the requests are sent to a separate master server. However, the OpenShift Origin controllers service is an active/passive service, where the services leverage etcd to decide the active master.
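Before stopping anything, it can help to confirm which of these services are currently active on the master to be deprecated; a minimal check, assuming the default atomic-openshift service names used throughout this topic:

$ sudo systemctl is-active atomic-openshift-master-api
$ sudo systemctl is-active atomic-openshift-master-controllers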
Deprecating a master host in a multi-master architecture includes removing the
master from the load balancer pool to avoid new connections attempting to use
that master. This process depends heavily on the load balancer used. The steps
below show the details of removing the master from haproxy
. If OpenShift Origin is running on a cloud provider or uses an F5 appliance, see the specific product documentation to remove the master from rotation.
Remove the master from the backend section in the /etc/haproxy/haproxy.cfg configuration file. For example, if deprecating a master named master-0.example.com using haproxy, ensure its host name is removed from the following:
backend mgmt8443
    balance source
    mode tcp
    # MASTERS 8443
    server master-1.example.com 192.168.55.12:8443 check
    server master-2.example.com 192.168.55.13:8443 check
Then, restart the haproxy
service.
$ sudo systemctl restart haproxy
Once the master is removed from the load balancer, disable the API and controller services:
$ sudo systemctl disable --now atomic-openshift-master-api
$ sudo systemctl disable --now atomic-openshift-master-controllers
Because the master host is an unschedulable OpenShift Origin node, follow the steps in the Deprecating a node host section.
Remove the master host from the [masters]
and [nodes]
groups in the
/etc/ansible/hosts
Ansible inventory file to avoid issues if running any
Ansible tasks using that inventory file:
...[OUTPUT OMITTED]...

# host group for masters
[masters]
master-0.example.com
master-1.example.com
master-2.example.com

# host group for nodes, includes region info
[nodes]
master-0.example.com openshift_node_labels="{'role': 'master'}" openshift_hostname=master-0.example.com openshift_schedulable=false

...[OUTPUT OMITTED]...
Deprecating the first master host listed in the Ansible inventory file requires extra precautions.
The kubernetes
service includes the master host IPs as endpoints. To
verify that the master has been properly deprecated, review the kubernetes
service output and see if the deprecated master has been removed:
$ oc describe svc kubernetes -n default
Name:              kubernetes
Namespace:         default
Labels:            component=apiserver
                   provider=kubernetes
Annotations:       <none>
Selector:          <none>
Type:              ClusterIP
IP:                10.111.0.1
Port:              https 443/TCP
Endpoints:         192.168.55.12:8443,192.168.55.13:8443
Port:              dns 53/UDP
Endpoints:         192.168.55.12:8053,192.168.55.13:8053
Port:              dns-tcp 53/TCP
Endpoints:         192.168.55.12:8053,192.168.55.13:8053
Session Affinity:  ClientIP
Events:            <none>
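To list only the endpoint IP addresses, so the deprecated master's IP can be spotted at a glance, a jsonpath query can be used; a sketch only, as the output format may vary between oc versions:

$ oc get endpoints kubernetes -n default -o jsonpath='{.subsets[*].addresses[*].ip}'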
Once the master has been successfully deprecated, the host where the master was previously running can be safely deleted.
To deprecate a master host running an etcd service, execute the previous steps in Deprecating a master host without collocated etcd, as well as the steps in Removing an etcd host.
In the event of replacing a broken master host, follow the process in Deprecating a master host without collocated etcd, then scale up the master hosts using the scale up Ansible playbook following the steps in Adding hosts to an existing cluster.
If the master host has a collocated etcd, use the Deprecating master host with collocated etcd steps, then the Adding hosts to an existing cluster as well as Scaling etcd.
The backup process is to be performed before any change to the infrastructure, such as a system update, upgrade, or any other significant modification. Backups should be performed on a regular basis to ensure the most recent data is available if a failure occurs.
OpenShift Origin files
The master instances run important services, such as the API and controllers. The /etc/origin/master
directory stores many important files:
The configuration of the API, controllers, services, and more
Certificates generated by the installation
All cloud provider-related configuration
Keys and other authentication files, such as htpasswd
if using htpasswd
And more
The OpenShift Origin services can be customized to increase the log level, use
proxies, and so on. The configuration files are stored in the /etc/sysconfig
directory.
Because the masters are also unschedulable nodes, back up the entire
/etc/origin
directory.
Create a backup of the master host configuration files:
$ MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
$ sudo mkdir -p ${MYBACKUPDIR}/etc/sysconfig
$ sudo cp -aR /etc/origin ${MYBACKUPDIR}/etc
$ sudo cp -aR /etc/sysconfig/atomic-* ${MYBACKUPDIR}/etc/sysconfig/
On a single master cluster installation, the configuration file is stored in the /etc/sysconfig/atomic-openshift-master file, which is already covered by the /etc/sysconfig/atomic-* copy above.
Other important files that need to be considered when planning a backup include:
File | Description
/etc/cni/* | Container Network Interface configuration (if used)
/etc/sysconfig/iptables | Where the iptables rules are stored
/etc/sysconfig/docker-storage-setup | The input file for container-storage-setup
/etc/sysconfig/docker | The docker configuration file
/etc/sysconfig/docker-network | docker networking configuration (such as the MTU)
/etc/sysconfig/docker-storage | docker storage configuration (generated by container-storage-setup)
/etc/dnsmasq.conf | Main configuration file for dnsmasq
/etc/dnsmasq.d/* | Different dnsmasq configuration files
/etc/sysconfig/flanneld | flannel configuration file (if used)
/etc/pki/ca-trust/source/anchors/ | Certificates added to the system (i.e. for external registries)
Create a backup of those files:
$ MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
$ sudo mkdir -p ${MYBACKUPDIR}/etc/sysconfig
$ sudo mkdir -p ${MYBACKUPDIR}/etc/pki/ca-trust/source/anchors
$ sudo cp -aR /etc/sysconfig/{iptables,docker-*,flanneld} \
    ${MYBACKUPDIR}/etc/sysconfig/
$ sudo cp -aR /etc/dnsmasq* /etc/cni ${MYBACKUPDIR}/etc/
$ sudo cp -aR /etc/pki/ca-trust/source/anchors/* \
    ${MYBACKUPDIR}/etc/pki/ca-trust/source/anchors/
If a package is accidentally removed, or a file included in an rpm
package should be restored, having a list of rhel
packages installed on the
system can be useful.
Red Hat Satellite features, such as content views or the facts store, can provide a mechanism to reinstall missing packages, as well as a historical record of the packages installed on the systems.
To create a list of the current rhel
packages installed in the system:
$ MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
$ sudo mkdir -p ${MYBACKUPDIR}
$ rpm -qa | sort | sudo tee $MYBACKUPDIR/packages.txt
If using the previous steps, the following files should now be present in the backup directory:
$ MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d) $ sudo find ${MYBACKUPDIR} -mindepth 1 -type f -printf '%P\n' etc/sysconfig/atomic-openshift-master etc/sysconfig/atomic-openshift-master-api etc/sysconfig/atomic-openshift-master-controllers etc/sysconfig/atomic-openshift-node etc/sysconfig/flanneld etc/sysconfig/iptables etc/sysconfig/docker-network etc/sysconfig/docker-storage etc/sysconfig/docker-storage-setup etc/sysconfig/docker-storage-setup.rpmnew etc/origin/master/ca.crt etc/origin/master/ca.key etc/origin/master/ca.serial.txt etc/origin/master/ca-bundle.crt etc/origin/master/master.proxy-client.crt etc/origin/master/master.proxy-client.key etc/origin/master/service-signer.crt etc/origin/master/service-signer.key etc/origin/master/serviceaccounts.private.key etc/origin/master/serviceaccounts.public.key etc/origin/master/openshift-master.crt etc/origin/master/openshift-master.key etc/origin/master/openshift-master.kubeconfig etc/origin/master/master.server.crt etc/origin/master/master.server.key etc/origin/master/master.kubelet-client.crt etc/origin/master/master.kubelet-client.key etc/origin/master/admin.crt etc/origin/master/admin.key etc/origin/master/admin.kubeconfig etc/origin/master/etcd.server.crt etc/origin/master/etcd.server.key etc/origin/master/master.etcd-client.key etc/origin/master/master.etcd-client.csr etc/origin/master/master.etcd-client.crt etc/origin/master/master.etcd-ca.crt etc/origin/master/policy.json etc/origin/master/scheduler.json etc/origin/master/htpasswd etc/origin/master/session-secrets.yaml etc/origin/master/openshift-router.crt etc/origin/master/openshift-router.key etc/origin/master/registry.crt etc/origin/master/registry.key etc/origin/master/master-config.yaml etc/origin/generated-configs/master-master-1.example.com/master.server.crt ...[OUTPUT OMITTED]... etc/origin/cloudprovider/openstack.conf etc/origin/node/system:node:master-0.example.com.crt etc/origin/node/system:node:master-0.example.com.key etc/origin/node/ca.crt etc/origin/node/system:node:master-0.example.com.kubeconfig etc/origin/node/server.crt etc/origin/node/server.key etc/origin/node/node-dnsmasq.conf etc/origin/node/resolv.conf etc/origin/node/node-config.yaml etc/origin/node/flannel.etcd-client.key etc/origin/node/flannel.etcd-client.csr etc/origin/node/flannel.etcd-client.crt etc/origin/node/flannel.etcd-ca.crt etc/pki/ca-trust/source/anchors/openshift-ca.crt etc/pki/ca-trust/source/anchors/registry-ca.crt etc/dnsmasq.conf etc/dnsmasq.d/origin-dns.conf etc/dnsmasq.d/origin-upstream-dns.conf etc/dnsmasq.d/node-dnsmasq.conf packages.txt
If needed, the files can be compressed to save space:
$ MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
$ sudo tar -zcvf /backup/$(hostname)-$(date +%Y%m%d).tar.gz $MYBACKUPDIR
$ sudo rm -Rf ${MYBACKUPDIR}
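Because backups should be performed on a regular basis, the steps above can also be scheduled. A minimal sketch, assuming the commands are collected into a hypothetical /usr/local/bin/backup-master-files.sh wrapper run nightly from a cron.d entry:

# cat /etc/cron.d/backup-master-files
0 2 * * * root /usr/local/bin/backup-master-files.sh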
Rather than running the previous steps manually, the openshift-ansible-contrib repository contains the backup_master_node.sh script, which performs them. The script creates a directory on the host running the script and copies all the files previously mentioned.
The script can be executed on every master host with:
$ mkdir ~/git
$ cd ~/git
$ git clone https://github.com/openshift/openshift-ansible-contrib.git
$ cd openshift-ansible-contrib/reference-architecture/day2ops/scripts
$ ./backup_master_node.sh -h
After creating a backup of the important master host files, if a file becomes corrupted or is accidentally removed, you can restore it by copying the backed-up file back into place, verifying that it contains the proper content, and restarting the affected services.
Restore the /etc/origin/master/master-config.yaml
file:
# MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
# cp /etc/origin/master/master-config.yaml /etc/origin/master/master-config.yaml.old
# cp ${MYBACKUPDIR}/etc/origin/master/master-config.yaml /etc/origin/master/master-config.yaml
# systemctl restart atomic-openshift-master-api
# systemctl restart atomic-openshift-master-controllers
Restarting the master services can lead to downtime. However, you can remove the master host from the highly available load balancer pool, then perform the restore operation. Once the service has been properly restored, you can add the master host back to the load balancer pool.
Perform a full reboot of the affected instance to restore the iptables configuration.
If the issue is an accidental package and its dependencies are removed, reinstall the package.
Get the list of the current installed packages:
$ rpm -qa | sort > /tmp/current_packages.txt
Get the differences:
$ diff /tmp/current_packages.txt ${MYBACKUPDIR}/packages.txt
1a2
> ansible-2.4.0.0-5.el7.noarch
Reinstall the missing packages:
# yum reinstall -y ansible-2.4.0.0-5.el7.noarch
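To reinstall every package reported as missing in a single pass, the diff output can be fed to yum; a sketch, which assumes the lines prefixed with > are the packages present only in the backup list:

$ diff /tmp/current_packages.txt ${MYBACKUPDIR}/packages.txt | awk '/^>/ {print $2}' | xargs -r sudo yum reinstall -y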
Restore a system certificate by copying the certificate to the /etc/pki/ca-trust/source/anchors/ directory and executing update-ca-trust:
$ MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
$ sudo cp ${MYBACKUPDIR}/etc/pki/ca-trust/source/anchors/my_company.crt /etc/pki/ca-trust/source/anchors/
$ sudo update-ca-trust
Always ensure the proper user ID and group ID are restored when the files are copied back, as well as the SELinux context.
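For example, ownership and SELinux labels can be checked and, if needed, reset after copying files back; a minimal sketch for the /etc/origin tree:

$ ls -lZ /etc/origin/master/master-config.yaml
$ sudo restorecon -Rv /etc/origin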
The procedure is the same whether deprecating an infrastructure node or an application node.
Ensure enough capacity is available to migrate the existing pods from the node that is set to be removed. Removing an infrastructure node is advised only when at least two more nodes will stay online after the infrastructure node is removed.
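A rough way to gauge the remaining capacity is to compare the resources already requested on the nodes that will stay online against their allocatable resources, for example:

$ oc describe node ocp-infra-node-p5zj ocp-infra-node-rghb | grep -A 4 "Allocated resources"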
List all available nodes to find the node to deprecate:
$ oc get nodes NAME STATUS AGE VERSION ocp-infra-node-b7pl Ready 23h v1.6.1+5115d708d7 ocp-infra-node-p5zj Ready 23h v1.6.1+5115d708d7 ocp-infra-node-rghb Ready 23h v1.6.1+5115d708d7 ocp-master-dgf8 Ready,SchedulingDisabled 23h v1.6.1+5115d708d7 ocp-master-q1v2 Ready,SchedulingDisabled 23h v1.6.1+5115d708d7 ocp-master-vq70 Ready,SchedulingDisabled 23h v1.6.1+5115d708d7 ocp-node-020m Ready 23h v1.6.1+5115d708d7 ocp-node-7t5p Ready 23h v1.6.1+5115d708d7 ocp-node-n0dd Ready 23h v1.6.1+5115d708d7
As an example, this topic deprecates the ocp-infra-node-b7pl
infrastructure
node.
Describe the node and its running services:
$ oc describe node ocp-infra-node-b7pl Name: ocp-infra-node-b7pl Role: Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/instance-type=n1-standard-2 beta.kubernetes.io/os=linux failure-domain.beta.kubernetes.io/region=europe-west3 failure-domain.beta.kubernetes.io/zone=europe-west3-c kubernetes.io/hostname=ocp-infra-node-b7pl role=infra Annotations: volumes.kubernetes.io/controller-managed-attach-detach=true Taints: <none> CreationTimestamp: Wed, 22 Nov 2017 09:36:36 -0500 Phase: Conditions: ... Addresses: 10.156.0.11,ocp-infra-node-b7pl Capacity: cpu: 2 memory: 7494480Ki pods: 20 Allocatable: cpu: 2 memory: 7392080Ki pods: 20 System Info: Machine ID: bc95ccf67d047f2ae42c67862c202e44 System UUID: 9762CC3D-E23C-AB13-B8C5-FA16F0BCCE4C Boot ID: ca8bf088-905d-4ec0-beec-8f89f4527ce4 Kernel Version: 3.10.0-693.5.2.el7.x86_64 OS Image: Employee SKU Operating System: linux Architecture: amd64 Container Runtime Version: docker://1.12.6 Kubelet Version: v1.6.1+5115d708d7 Kube-Proxy Version: v1.6.1+5115d708d7 ExternalID: 437740049672994824 Non-terminated Pods: (2 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits --------- ---- ------------ ---------- --------------- ------------- default docker-registry-1-5szjs 100m (5%) 0 (0%) 256Mi (3%)0 (0%) default router-1-vzlzq 100m (5%) 0 (0%) 256Mi (3%)0 (0%) Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) CPU Requests CPU Limits Memory Requests Memory Limits ------------ ---------- --------------- ------------- 200m (10%) 0 (0%) 512Mi (7%) 0 (0%) Events: <none>
The output above shows that the node is running two pods: router-1-vzlzq
and
docker-registry-1-5szjs
. Two more infrastructure nodes are available to migrate these two pods.
The cluster described above is a highly available cluster, which means both the docker-registry and router services are deployed with multiple replicas across the infrastructure nodes.
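The current replica counts can be confirmed before draining the node, assuming the default deployment configuration names in the default project:

$ oc get dc docker-registry router -n default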
Mark a node as unschedulable and evacuate all of its pods:
$ oc adm drain ocp-infra-node-b7pl --delete-local-data node "ocp-infra-node-b7pl" cordoned WARNING: Deleting pods with local storage: docker-registry-1-5szjs pod "docker-registry-1-5szjs" evicted pod "router-1-vzlzq" evicted node "ocp-infra-node-b7pl" drained
If the pod has attached local storage (for example, EmptyDir
), the
--delete-local-data
option must be provided. Generally, pods running in
production should use the local storage only for temporary or cache files, but
not for anything important or persistent. For regular storage, applications
should use object storage or persistent volumes. In this case, the
docker-registry
pod’s local storage is empty, because the object storage is
used instead to store the container images.
The above operation deletes existing pods running on the node. Then, new pods are created according to the replication controller. In general, every application should be deployed with a deployment configuration, which creates pods using the replication controller.
The example below shows the output of the replication controller of the registry:
$ oc describe rc/docker-registry-1 Name: docker-registry-1 Namespace: default Selector: deployment=docker-registry-1,deploymentconfig=docker-registry,docker-registry=default Labels: docker-registry=default openshift.io/deployment-config.name=docker-registry Annotations: ... Replicas: 3 current / 3 desired Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: deployment=docker-registry-1 deploymentconfig=docker-registry docker-registry=default Annotations: openshift.io/deployment-config.latest-version=1 openshift.io/deployment-config.name=docker-registry openshift.io/deployment.name=docker-registry-1 Service Account: registry Containers: registry: Image: openshift3/ose-docker-registry:v3.6.173.0.49 Port: 5000/TCP Requests: cpu: 100m memory: 256Mi Liveness: http-get https://:5000/healthz delay=10s timeout=5s period=10s #success=1 #failure=3 Readiness: http-get https://:5000/healthz delay=0s timeout=5s period=10s #success=1 #failure=3 Environment: REGISTRY_HTTP_ADDR: :5000 REGISTRY_HTTP_NET: tcp REGISTRY_HTTP_SECRET: tyGEnDZmc8dQfioP3WkNd5z+Xbdfy/JVXf/NLo3s/zE= REGISTRY_MIDDLEWARE_REPOSITORY_OPENSHIFT_ENFORCEQUOTA: false REGISTRY_HTTP_TLS_KEY: /etc/secrets/registry.key OPENSHIFT_DEFAULT_REGISTRY: docker-registry.default.svc:5000 REGISTRY_CONFIGURATION_PATH: /etc/registry/config.yml REGISTRY_HTTP_TLS_CERTIFICATE: /etc/secrets/registry.crt Mounts: /etc/registry from docker-config (rw) /etc/secrets from registry-certificates (rw) /registry from registry-storage (rw) Volumes: registry-storage: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: registry-certificates: Type: Secret (a volume populated by a Secret) SecretName: registry-certificates Optional: false docker-config: Type: Secret (a volume populated by a Secret) SecretName: registry-config Optional: false Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 49m 49m 1 replication-controller Normal SuccessfulCreate Created pod: docker-registry-1-dprp5
The event at the bottom of the output displays information about new pod creation. So, when listing all pods:
$ oc get pods NAME READY STATUS RESTARTS AGE docker-registry-1-dprp5 1/1 Running 0 52m docker-registry-1-kr8jq 1/1 Running 0 1d docker-registry-1-ncpl2 1/1 Running 0 1d registry-console-1-g4nqg 1/1 Running 0 1d router-1-2gshr 0/1 Pending 0 52m router-1-85qm4 1/1 Running 0 1d router-1-q5sr8 1/1 Running 0 1d
The docker-registry-1-5szjs
and router-1-vzlzq
pods that were running on
the now deprecated node are no longer available. Instead, two new pods have been
created: docker-registry-1-dprp5
and router-1-2gshr
. As shown above, the new
router pod is router-1-2gshr
, but is in the Pending
state. This is because
only a single router pod can run on each node, since it binds to ports 80 and 443 of the host.
When observing the newly created registry pod, the example below shows that
the pod has been created on the ocp-infra-node-rghb
node, which is different
from the deprecating node:
$ oc describe pod docker-registry-1-dprp5 Name: docker-registry-1-dprp5 Namespace: default Security Policy: hostnetwork Node: ocp-infra-node-rghb/10.156.0.10 ...
The only difference between deprecating the infrastructure and the application node is that once the infrastructure node is evacuated, and if there is no plan to replace that node, the services running on infrastructure nodes can be scaled down:
$ oc scale dc/router --replicas 2
deploymentconfig "router" scaled
$ oc scale dc/docker-registry --replicas 2
deploymentconfig "docker-registry" scaled
Now, every infrastructure node is running only one kind of each pod:
$ oc get pods NAME READY STATUS RESTARTS AGE docker-registry-1-kr8jq 1/1 Running 0 1d docker-registry-1-ncpl2 1/1 Running 0 1d registry-console-1-g4nqg 1/1 Running 0 1d router-1-85qm4 1/1 Running 0 1d router-1-q5sr8 1/1 Running 0 1d $ oc describe po/docker-registry-1-kr8jq | grep Node: Node: ocp-infra-node-p5zj/10.156.0.9 $ oc describe po/docker-registry-1-ncpl2 | grep Node: Node: ocp-infra-node-rghb/10.156.0.10
To provide a full highly available cluster, at least three infrastructure nodes should always be available.
To verify that the scheduling on the node is disabled:
$ oc get nodes NAME STATUS AGE VERSION ocp-infra-node-b7pl Ready,SchedulingDisabled 1d v1.6.1+5115d708d7 ocp-infra-node-p5zj Ready 1d v1.6.1+5115d708d7 ocp-infra-node-rghb Ready 1d v1.6.1+5115d708d7 ocp-master-dgf8 Ready,SchedulingDisabled 1d v1.6.1+5115d708d7 ocp-master-q1v2 Ready,SchedulingDisabled 1d v1.6.1+5115d708d7 ocp-master-vq70 Ready,SchedulingDisabled 1d v1.6.1+5115d708d7 ocp-node-020m Ready 1d v1.6.1+5115d708d7 ocp-node-7t5p Ready 1d v1.6.1+5115d708d7 ocp-node-n0dd Ready 1d v1.6.1+5115d708d7
And that the node does not contain any pods:
$ oc describe node ocp-infra-node-b7pl Name: ocp-infra-node-b7pl Role: Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/instance-type=n1-standard-2 beta.kubernetes.io/os=linux failure-domain.beta.kubernetes.io/region=europe-west3 failure-domain.beta.kubernetes.io/zone=europe-west3-c kubernetes.io/hostname=ocp-infra-node-b7pl role=infra Annotations: volumes.kubernetes.io/controller-managed-attach-detach=true Taints: <none> CreationTimestamp: Wed, 22 Nov 2017 09:36:36 -0500 Phase: Conditions: ... Addresses: 10.156.0.11,ocp-infra-node-b7pl Capacity: cpu: 2 memory: 7494480Ki pods: 20 Allocatable: cpu: 2 memory: 7392080Ki pods: 20 System Info: Machine ID: bc95ccf67d047f2ae42c67862c202e44 System UUID: 9762CC3D-E23C-AB13-B8C5-FA16F0BCCE4C Boot ID: ca8bf088-905d-4ec0-beec-8f89f4527ce4 Kernel Version: 3.10.0-693.5.2.el7.x86_64 OS Image: Employee SKU Operating System: linux Architecture: amd64 Container Runtime Version: docker://1.12.6 Kubelet Version: v1.6.1+5115d708d7 Kube-Proxy Version: v1.6.1+5115d708d7 ExternalID: 437740049672994824 Non-terminated Pods: (0 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits --------- ---- ------------ ---------- --------------- ------------- Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) CPU Requests CPU Limits Memory Requests Memory Limits ------------ ---------- --------------- ------------- 0 (0%) 0 (0%) 0 (0%) 0 (0%) Events: <none>
Remove the infrastructure instance from the backend
section in the /etc/haproxy/haproxy.cfg
configuration file:
backend router80
    balance source
    mode tcp
    server infra-1.example.com 192.168.55.12:80 check
    server infra-2.example.com 192.168.55.13:80 check

backend router443
    balance source
    mode tcp
    server infra-1.example.com 192.168.55.12:443 check
    server infra-2.example.com 192.168.55.13:443 check
Then, restart the haproxy
service.
$ sudo systemctl restart haproxy
Remove the node from the cluster after all pods are evicted, with the following command:
$ oc delete node ocp-infra-node-b7pl node "ocp-infra-node-b7pl" deleted
$ oc get nodes NAME STATUS AGE VERSION ocp-infra-node-p5zj Ready 1d v1.6.1+5115d708d7 ocp-infra-node-rghb Ready 1d v1.6.1+5115d708d7 ocp-master-dgf8 Ready,SchedulingDisabled 1d v1.6.1+5115d708d7 ocp-master-q1v2 Ready,SchedulingDisabled 1d v1.6.1+5115d708d7 ocp-master-vq70 Ready,SchedulingDisabled 1d v1.6.1+5115d708d7 ocp-node-020m Ready 1d v1.6.1+5115d708d7 ocp-node-7t5p Ready 1d v1.6.1+5115d708d7 ocp-node-n0dd Ready 1d v1.6.1+5115d708d7
For more information on evacuating and draining pods or nodes, see the Node maintenance section.
In the event that a node would need to be added in place of the deprecated node, follow the Adding hosts to an existing cluster section.
The backup process is to be performed before any change to the infrastructure, such as a system update, upgrade, or any other significant modification. Backups should be performed on a regular basis to ensure the most recent data is available if a failure occurs.
OpenShift Origin files
Node instances run applications in the form of pods, which are based on
containers. The /etc/origin/
and /etc/origin/node
directories house
important files, such as:
The configuration of the node services
Certificates generated by the installation
Cloud provider-related configuration
Keys and other authentication files, such as the dnsmasq
configuration
The OpenShift Origin services can be customized to increase the log level, use
proxies, and more, and the configuration files are stored in the
/etc/sysconfig
directory.
Create a backup of the node configuration files:
$ MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
$ sudo mkdir -p ${MYBACKUPDIR}/etc/sysconfig
$ sudo cp -aR /etc/origin ${MYBACKUPDIR}/etc
$ sudo cp -aR /etc/sysconfig/atomic-openshift-node ${MYBACKUPDIR}/etc/sysconfig/
OpenShift Origin uses specific files that must be taken into account when planning the backup policy, including:
File | Description
/etc/cni/* | Container Network Interface configuration (if used)
/etc/sysconfig/iptables | Where the iptables rules are stored
/etc/sysconfig/docker-storage-setup | The input file for container-storage-setup
/etc/sysconfig/docker | The docker configuration file
/etc/sysconfig/docker-network | docker networking configuration (such as the MTU)
/etc/sysconfig/docker-storage | docker storage configuration (generated by container-storage-setup)
/etc/dnsmasq.conf | Main configuration file for dnsmasq
/etc/dnsmasq.d/* | Different dnsmasq configuration files
/etc/sysconfig/flanneld | flannel configuration file (if used)
/etc/pki/ca-trust/source/anchors/ | Certificates added to the system (i.e. for external registries)
To create a backup of those files:
$ MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
$ sudo mkdir -p ${MYBACKUPDIR}/etc/sysconfig
$ sudo mkdir -p ${MYBACKUPDIR}/etc/pki/ca-trust/source/anchors
$ sudo cp -aR /etc/sysconfig/{iptables,docker-*,flanneld} \
    ${MYBACKUPDIR}/etc/sysconfig/
$ sudo cp -aR /etc/dnsmasq* /etc/cni ${MYBACKUPDIR}/etc/
$ sudo cp -aR /etc/pki/ca-trust/source/anchors/* \
    ${MYBACKUPDIR}/etc/pki/ca-trust/source/anchors/
If a package is accidentally removed, or a file included in an rpm
package should be restored, having a list of rhel
packages installed on the
system can be useful.
Red Hat Satellite features, such as content views or the facts store, can provide a mechanism to reinstall missing packages, as well as a historical record of the packages installed on the systems.
To create a list of the current rhel
packages installed in the system:
$ MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
$ sudo mkdir -p ${MYBACKUPDIR}
$ rpm -qa | sort | sudo tee $MYBACKUPDIR/packages.txt
The following files should now be present in the backup directory:
$ MYBACKUPDIR=*/backup/$(hostname)/$(date +%Y%m%d)* $ sudo find ${MYBACKUPDIR} -mindepth 1 -type f -printf '%P\n' etc/sysconfig/atomic-openshift-node etc/sysconfig/flanneld etc/sysconfig/iptables etc/sysconfig/docker-network etc/sysconfig/docker-storage etc/sysconfig/docker-storage-setup etc/sysconfig/docker-storage-setup.rpmnew etc/origin/node/system:node:app-node-0.example.com.crt etc/origin/node/system:node:app-node-0.example.com.key etc/origin/node/ca.crt etc/origin/node/system:node:app-node-0.example.com.kubeconfig etc/origin/node/server.crt etc/origin/node/server.key etc/origin/node/node-dnsmasq.conf etc/origin/node/resolv.conf etc/origin/node/node-config.yaml etc/origin/node/flannel.etcd-client.key etc/origin/node/flannel.etcd-client.csr etc/origin/node/flannel.etcd-client.crt etc/origin/node/flannel.etcd-ca.crt etc/origin/cloudprovider/openstack.conf etc/pki/ca-trust/source/anchors/openshift-ca.crt etc/pki/ca-trust/source/anchors/registry-ca.crt etc/dnsmasq.conf etc/dnsmasq.d/origin-dns.conf etc/dnsmasq.d/origin-upstream-dns.conf etc/dnsmasq.d/node-dnsmasq.conf packages.txt
If needed, the files can be compressed to save space:
$ MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
$ sudo tar -zcvf /backup/$(hostname)-$(date +%Y%m%d).tar.gz $MYBACKUPDIR
$ sudo rm -Rf ${MYBACKUPDIR}
Rather than running the previous steps manually, the openshift-ansible-contrib repository contains the backup_master_node.sh script, which performs them. The script creates a directory on the host running the script and copies all the files previously mentioned.
The script can be executed on every master host with:
$ mkdir ~/git
$ cd ~/git
$ git clone https://github.com/openshift/openshift-ansible-contrib.git
$ cd openshift-ansible-contrib/reference-architecture/day2ops/scripts
$ ./backup_master_node.sh -h
After creating a backup of the important node host files, if a file becomes corrupted or is accidentally removed, you can restore it by copying the backed-up file back into place, verifying that it contains the proper content, and restarting the affected services.
Restore the /etc/origin/node/node-config.yaml
file:
# MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
# cp /etc/origin/node/node-config.yaml /etc/origin/node/node-config.yaml.old
# cp ${MYBACKUPDIR}/etc/origin/node/node-config.yaml /etc/origin/node/node-config.yaml
# systemctl restart atomic-openshift-node
Restarting the services can lead to downtime. See Node maintenance for tips on how to ease the process.
Perform a full reboot of the affected instance to restore the iptables configuration.
If the issue is an accidental package and its dependencies are removed, reinstall the package.
Get the list of the current installed packages:
$ rpm -qa | sort > /tmp/current_packages.txt
Get the differences:
$ diff /tmp/current_packages.txt ${MYBACKUPDIR}/packages.txt
1a2
> ansible-2.4.0.0-5.el7.noarch
Reinstall the missing packages:
# yum reinstall -y ansible-2.4.0.0-5.el7.noarch
Restore a system certificate by copying the certificate to the /etc/pki/ca-trust/source/anchors/ directory and executing update-ca-trust:
$ MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
$ sudo cp ${MYBACKUPDIR}/etc/pki/ca-trust/source/anchors/my_company.crt /etc/pki/ca-trust/source/anchors/
$ sudo update-ca-trust
Always ensure the proper user ID and group ID are restored when the files are copied back, as well as the SELinux context.
See the Managing nodes or Managing pods topics for various node management options.
etcd is the key value store for all object definitions, as well as the persistent master state. Other components watch for changes, then bring themselves into the desired state.
OpenShift Origin versions prior to 3.5 used etcd version 2 (v2), while 3.5 and later use version 3 (v3). The data model between the two versions of etcd is different. etcd v3 can use both the v2 and v3 data model, whereas etcd v2 can only use the v2 data model. In an etcd v3 server, the v2 and v3 data stores exist in parallel and are independent.
For both v2 and v3 operations, you can use the ETCDCTL_API
environment
variable to use the proper API:
$ etcdctl -v
etcdctl version: 3.2.5
API version: 2
$ ETCDCTL_API=3 etcdctl version
etcdctl version: 3.2.5
API version: 3.2
See Migrating etcd Data (v2 to v3) section for information about how to migrate to v3.
The etcd backup process is composed of two different procedures:
Configuration backup: Including the required etcd configuration and certificates
Data backup: Including both the v2 and v3 data models.
The data backup procedure can be done on any host that has connectivity to the
etcd cluster, where the proper certificates are provided, and where the
etcdctl
tool is installed.
The backup files must be copied to an external system, ideally outside the OpenShift Origin environment, and then encrypted. |
The etcd configuration files to be preserved are all stored in the /etc/etcd
directory of the instances where etcd is running. This includes the etcd
configuration file (/etc/etcd/etcd.conf
) and the required certificates for
cluster communication. All those files are generated at installation time by the
Ansible installer.
To back up the etcd configuration:
$ ssh master-0
# mkdir -p /backup/etcd-config-$(date +%Y%m%d)/
# cp -R /etc/etcd/ /backup/etcd-config-$(date +%Y%m%d)/
The backup is to be performed on every etcd member of the cluster as the certificates and configuration files are unique.
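A sketch of running that configuration backup on every member from an administration host, assuming SSH access to the host names used in this guide:

$ for h in master-0.example.com master-1.example.com master-2.example.com; do
    ssh $h "sudo mkdir -p /backup/etcd-config-\$(date +%Y%m%d)/ && sudo cp -R /etc/etcd/ /backup/etcd-config-\$(date +%Y%m%d)/"
  done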
The restore procedure for etcd configuration files replaces the appropriate files, then restarts the service.
If an etcd host has become corrupted and the /etc/etcd/etcd.conf
file is lost,
restore it using:
$ ssh master-0
# cp /backup/yesterday/master-0-files/etcd.conf /etc/etcd/etcd.conf
# restorecon -Rv /etc/etcd/etcd.conf
# systemctl restart etcd.service
In this example, the backup file is stored in the /backup/yesterday/master-0-files/etcd.conf path, which could be an external NFS share, an S3 bucket, or other external storage.
The OpenShift Origin installer creates aliases to avoid typing all of these flags: etcdctl2 for etcd v2 tasks and etcdctl3 for etcd v3 tasks.
Before backing up etcd:
etcdctl
binaries should be available or, in containerized installations, the rhel7/etcd
container should be available
Ensure connectivity with the etcd cluster (port 2379/tcp)
Ensure the proper certificates to connect to the etcd cluster
To ensure the etcd cluster is working, check its health:
# etcdctl --cert-file=/etc/etcd/peer.crt \
    --key-file=/etc/etcd/peer.key \
    --ca-file=/etc/etcd/ca.crt \
    --peers="https://master-0.example.com:2379,\
    https://master-1.example.com:2379,\
    https://master-2.example.com:2379"\
    cluster-health
member 5ee217d19001 is healthy: got healthy result from https://192.168.55.12:2379
member 2a529ba1840722c0 is healthy: got healthy result from https://192.168.55.8:2379
member ed4f0efd277d7599 is healthy: got healthy result from https://192.168.55.13:2379
cluster is healthy
Or if using the etcd v3 API:
# ETCDCTL_API=3 etcdctl --cert="/etc/etcd/peer.crt" \
    --key=/etc/etcd/peer.key \
    --cacert="/etc/etcd/ca.crt" \
    --endpoints="https://master-0.example.com:2379,\
    https://master-1.example.com:2379,\
    https://master-2.example.com:2379" endpoint health
https://master-0.example.com:2379 is healthy: successfully committed proposal: took = 5.011358ms
https://master-1.example.com:2379 is healthy: successfully committed proposal: took = 1.305173ms
https://master-2.example.com:2379 is healthy: successfully committed proposal: took = 1.388772ms
Check the member list:
# etcdctl2 member list 2a371dd20f21ca8d: name=master-1.example.com peerURLs=https://192.168.55.12:2380 clientURLs=https://192.168.55.12:2379 isLeader=false 40bef1f6c79b3163: name=master-0.example.com peerURLs=https://192.168.55.8:2380 clientURLs=https://192.168.55.8:2379 isLeader=false 95dc17ffcce8ee29: name=master-2.example.com peerURLs=https://192.168.55.13:2380 clientURLs=https://192.168.55.13:2379 isLeader=true
Or, if using the etcd v3 API:
# etcdctl3 member list 2a371dd20f21ca8d, started, master-1.example.com, https://192.168.55.12:2380, https://192.168.55.12:2379 40bef1f6c79b3163, started, master-0.example.com, https://192.168.55.8:2380, https://192.168.55.8:2379 95dc17ffcce8ee29, started, master-2.example.com, https://192.168.55.13:2380, https://192.168.55.13:2379
Perform the backup:
# mkdir -p /backup/etcd-$(date +%Y%m%d)
# systemctl stop etcd.service
# etcdctl2 backup \
    --data-dir /var/lib/etcd \
    --backup-dir /backup/etcd-$(date +%Y%m%d)
# cp /var/lib/etcd/member/snap/db /backup/etcd-$(date +%Y%m%d)
# systemctl start etcd.service
While stopping the etcd service is not strictly necessary, doing so ensures that the etcd data is fully synchronized.
The etcdctl2 backup command creates the etcd v2 data backup; copying the db file while the etcd service is stopped is equivalent to running etcdctl3 snapshot for the etcd v3 data backup:
# mkdir -p /backup/etcd-$(date +%Y%m%d)
# etcdctl3 snapshot save /backup/etcd-$(date +%Y%m%d)/db
Snapshot saved at /backup/etcd-<date>/db
# systemctl stop etcd.service
# etcdctl2 backup \
    --data-dir /var/lib/etcd \
    --backup-dir /backup/etcd-$(date +%Y%m%d)
# systemctl start etcd.service
In this example, a /backup/etcd-<date>/ directory is created, where <date> represents the current date. The backup should then be copied to an external NFS share, S3 bucket, or any other external storage location.
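A minimal sketch of copying the backup off the instance, assuming a reachable host named backup.example.com and SSH access to it:

# rsync -av /backup/etcd-$(date +%Y%m%d)/ backup.example.com:/srv/etcd-backups/$(hostname)/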
In the case of an all-in-one cluster, the etcd data directory is located in the /var/lib/origin/openshift.local.etcd directory.
The following restores healthy data files and starts the etcd cluster as a single node, then adds the rest of the nodes in case an etcd cluster is required.
Stop all etcd services:
# systemctl stop etcd.service
Clean the etcd data directories to ensure the proper backup is restored, while keeping a copy of the current data:
# mv /var/lib/etcd /var/lib/etcd.old
# mkdir /var/lib/etcd
# chown -R etcd.etcd /var/lib/etcd/
# restorecon -Rv /var/lib/etcd/
Alternatively, you can wipe the etcd data directory:
# rm -Rf /var/lib/etcd/*
In the case of an all-in-one cluster, the etcd data directory is located in the /var/lib/origin/openshift.local.etcd directory.
Restore a healthy backup data file to one of the etcd nodes:
# cp -R /backup/etcd-xxx/* /var/lib/etcd/
# mv /var/lib/etcd/db /var/lib/etcd/member/snap/db
Perform this step on all etcd hosts (including master hosts collocated with etcd).
Run the etcd service, forcing a new cluster. The following creates a drop-in file for the etcd service that overrides the execution command, adding the --force-new-cluster option:
# mkdir -p /etc/systemd/system/etcd.service.d/
# echo "[Service]" > /etc/systemd/system/etcd.service.d/temp.conf
# echo "ExecStart=" >> /etc/systemd/system/etcd.service.d/temp.conf
# sed -n '/ExecStart/s/"$/ --force-new-cluster"/p' \
    /usr/lib/systemd/system/etcd.service \
    >> /etc/systemd/system/etcd.service.d/temp.conf
# systemctl daemon-reload
# systemctl restart etcd
Check for error messages:
$ journalctl -fu etcd.service
Check for health status (in this case, a single node):
# etcdctl2 cluster-health member 5ee217d17301 is healthy: got healthy result from https://192.168.55.8:2379 cluster is healthy
Restart the etcd service in cluster mode:
# rm -f /etc/systemd/system/etcd.service.d/temp.conf
# systemctl daemon-reload
# systemctl restart etcd
Check for health status and member list
# etcdctl2 cluster-health member 5ee217d17301 is healthy: got healthy result from https://192.168.55.8:2379 cluster is healthy # etcdctl2 member list 5ee217d17301: name=master-0.example.com peerURLs=http://localhost:2380 clientURLs=https://192.168.55.8:2379 isLeader=true
Once the first instance is running, it is safe to restore multiple etcd servers as desired.
Fix the peerURLs
parameter
After restoring the data and creating a new cluster, the peerURLs
parameter
shows localhost
instead of the IP where etcd is listening for peer
communication:
# etcdctl2 member list 5ee217d17301: name=master-0.example.com peerURLs=http://*localhost*:2380 clientURLs=https://192.168.55.8:2379 isLeader=true
Get the member ID from the etcdctl member list
output.
Get the IP where etcd is listening for peer communication:
$ ss -l4n | grep 2380
Update the member information with that IP:
# etcdctl2 member update 5ee217d17301 https://192.168.55.8:2380
Updated member with ID 5ee217d17301 in cluster
To verify, check that the IP is in the output of the following:
$ etcdctl2 member list 5ee217d17301: name=master-0.example.com peerURLs=https://*192.168.55.8*:2380 clientURLs=https://192.168.55.8:2379 isLeader=true
Add more members
In the instance joining the cluster:
Get the etcd name for the instance in the ETCD_NAME
variable:
# grep ETCD_NAME /etc/etcd/etcd.conf
Get the IP where etcd listens for peer communication:
# grep ETCD_INITIAL_ADVERTISE_PEER_URLS /etc/etcd/etcd.conf
Delete the previous etcd data:
# rm -Rf /var/lib/etcd/*
On the etcd host where etcd is properly running, add the new member:
$ etcdctl2 member add <name> <advertise_peer_urls>
The command outputs some variables. For example:
ETCD_NAME="master2" ETCD_INITIAL_CLUSTER="master1=https://10.0.0.7:2380,master2=https://10.0.0.5:2380" ETCD_INITIAL_CLUSTER_STATE="existing"
Add those values to the /etc/etcd/etcd.conf
file of the new host:
# vi /etc/etcd/etcd.conf
Once those values are replaced, start the etcd service in the node joining the cluster:
# systemctl start etcd.service
Check for error messages:
$ journalctl -fu etcd.service
Repeat the above for every etcd node joining the cluster.
Verify the cluster status and cluster health once all the nodes joined:
# etcdctl2 member list 5cd050b4d701: name=master1 peerURLs=https://10.0.0.7:2380 clientURLs=https://10.0.0.7:2379 isLeader=true d0c57659d8990cbd: name=master2 peerURLs=https://10.0.0.5:2380 clientURLs=https://10.0.0.5:2379 isLeader=false e4696d637de3eb2d: name=master3 peerURLs=https://10.0.0.6:2380 clientURLs=https://10.0.0.6:2379 isLeader=false
# etcdctl2 cluster-health member 5cd050b4d701 is healthy: got healthy result from https://10.0.0.7:2379 member d0c57659d8990cbd is healthy: got healthy result from https://10.0.0.5:2379 member e4696d637de3eb2d is healthy: got healthy result from https://10.0.0.6:2379 cluster is healthy
The restore procedure for v3 data is similar to the v2 data.
Snapshot integrity may be optionally verified at restore time. If the snapshot
is taken with etcdctl snapshot save
, it will have an integrity hash that is
checked by etcdctl snapshot restore
. If the snapshot is copied from the data
directory, there is no integrity hash and it will only restore by using
--skip-hash-check
.
The procedure to restore only the v3 data must be performed on a single etcd host. You can then add the rest of the nodes to the cluster.
Stop all etcd services:
# systemctl stop etcd.service
Clear all old data, because etcdctl
recreates it in the node where the
restore procedure is going to be performed:
# rm -Rf /var/lib/etcd
Use the snapshot restore
command with the data from /etc/etcd/etcd.conf
to
match the following command:
# etcdctl3 snapshot restore /backup/etcd-xxxxxx/backup.db \
    --data-dir /var/lib/etcd \
    --name master-0.example.com \
    --initial-cluster "master-0.example.com=https://192.168.55.8:2380" \
    --initial-cluster-token "etcd-cluster-1" \
    --initial-advertise-peer-urls https://192.168.55.8:2380
2017-10-03 08:55:32.440779 I | mvcc: restore compact to 1041269
2017-10-03 08:55:32.468244 I | etcdserver/membership: added member 40bef1f6c79b3163 [https://192.168.55.8:2380] to cluster 26841ebcf610583c
Restore the permissions and SELinux context to the restored files:
# chown -R etcd.etcd /var/lib/etcd/
# restorecon -Rv /var/lib/etcd
Start the etcd service:
# systemctl start etcd
Check for any error messages:
$ journalctl -fu etcd.service
Adding more nodes
Once the first instance is running, it is safe to restore multiple etcd servers as desired.
Get the etcd name for the instance in the ETCD_NAME
variable:
# grep ETCD_NAME /etc/etcd/etcd.conf
Get the IP where etcd listens for peer communication:
# grep ETCD_INITIAL_ADVERTISE_PEER_URLS /etc/etcd/etcd.conf
On the etcd host where etcd is still running, add the new member:
# etcdctl3 member add <name> \
    --peer-urls="<advertise_peer_urls>"
The command outputs some variables. For example:
ETCD_NAME="master2" ETCD_INITIAL_CLUSTER="master-0.example.com=https://192.168.55.8:2380" ETCD_INITIAL_CLUSTER_STATE="existing"
Add those values to the /etc/etcd/etcd.conf
file of the new host:
# vi /etc/etcd/etcd.conf
On the recently added etcd node, clean the etcd data directories to ensure the proper backup is restored, while keeping a copy of the current data:
# mv /var/lib/etcd /var/lib/etcd.old
# mkdir /var/lib/etcd
# chown -R etcd.etcd /var/lib/etcd/
# restorecon -Rv /var/lib/etcd/
or wipe the etcd data directory:
# rm -Rf /var/lib/etcd/*
Start the etcd service in the recently added etcd host:
# systemctl start etcd
Check for errors:
# journalctl -fu etcd.service
Repeat the previous steps for every etcd node that is required to be added.
Verify the cluster has been properly set:
# etcdctl3 endpoint health https://master-0.example.com:2379 is healthy: successfully committed proposal: took = 1.423459ms https://master-1.example.com:2379 is healthy: successfully committed proposal: took = 1.767481ms https://master-2.example.com:2379 is healthy: successfully committed proposal: took = 1.599694ms # etcdctl3 endpoint status https://master-0.example.com:2379, 40bef1f6c79b3163, 3.2.5, 28 MB, true, 9, 2878 https://master-1.example.com:2379, 1ea57201a3ff620a, 3.2.5, 28 MB, false, 9, 2878 https://master-2.example.com:2379, 59229711e4bc65c8, 3.2.5, 28 MB, false, 9, 2878
Scaling the etcd cluster can be performed vertically by adding more resources to the etcd hosts, or horizontally by adding more etcd hosts.
If etcd is collocated on master instances, horizontally scaling etcd prevents the API and controller services competing with etcd for resources.
Due to the voting system etcd uses, the cluster must always contain an odd number of members.
The new host requires a fresh, dedicated RHEL 7 system. The etcd storage should be located on an SSD disk to achieve maximum performance, ideally on a dedicated disk mounted at /var/lib/etcd.
OpenShift Origin version 3.7 ships with an automated way to add a new etcd host using Ansible.
Before adding a new etcd host, perform a backup of both etcd configuration and data to prevent data loss.
Check the current etcd cluster status to avoid adding new hosts to an unhealthy cluster:
# etcdctl --cert-file=/etc/etcd/peer.crt \ --key-file=/etc/etcd/peer.key \ --ca-file=/etc/etcd/ca.crt \ --peers="https://*master-0.example.com*:2379,\ https://*master-1.example.com*:2379,\ https://*master-2.example.com*:2379"\ cluster-health member 5ee217d19001 is healthy: got healthy result from https://192.168.55.12:2379 member 2a529ba1840722c0 is healthy: got healthy result from https://192.168.55.8:2379 member ed4f0efd277d7599 is healthy: got healthy result from https://192.168.55.13:2379 cluster is healthy
Or, using etcd v3 API:
# ETCDCTL_API=3 etcdctl --cert="/etc/etcd/peer.crt" \ --key=/etc/etcd/peer.key \ --cacert="/etc/etcd/ca.crt" \ --endpoints="https://*master-0.example.com*:2379,\ https://*master-1.example.com*:2379,\ https://*master-2.example.com*:2379" endpoint health https://master-0.example.com:2379 is healthy: successfully committed proposal: took = 5.011358ms https://master-1.example.com:2379 is healthy: successfully committed proposal: took = 1.305173ms https://master-2.example.com:2379 is healthy: successfully committed proposal: took = 1.388772ms
Before running the scaleup
playbook, ensure the new host is registered to
the proper Red Hat software channels:
# subscription-manager register \
    --username=<username> --password=<password>
# subscription-manager attach --pool=<poolid>
# subscription-manager repos --disable="*"
# subscription-manager repos \
    --enable=rhel-7-server-rpms \
    --enable=rhel-7-server-extras-rpms
etcd is hosted in the rhel-7-server-extras-rpms
software channel.
Modify the Ansible inventory file: create a new group named [new_etcd], add the new host to it, and add the new_etcd group as a child of the [OSEv3] group:
[OSEv3:children]
masters
nodes
etcd
new_etcd

... [OUTPUT ABBREVIATED] ...

[etcd]
master-0.example.com
master-1.example.com
master-2.example.com

[new_etcd]
etcd0.example.com
Run the etcd scaleup
playbook from the host that executed the initial
installation and where the Ansible inventory file is:
$ ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-etcd/scaleup.yml
Once the above has finished, modify the inventory file to reflect the current
status by moving the new etcd host from the [new_etcd]
group to the [etcd]
group:
[OSEv3:children]
masters
nodes
etcd
new_etcd

... [OUTPUT ABBREVIATED] ...

[etcd]
master-0.example.com
master-1.example.com
master-2.example.com
etcd0.example.com
If using Flannel, modify the flanneld
service configuration, located at
/etc/sysconfig/flanneld
on every OpenShift Origin host, to include the new etcd
host:
FLANNEL_ETCD_ENDPOINTS=https://master-0.example.com:2379,https://master-1.example.com:2379,https://master-2.example.com:2379,https://etcd0.example.com:2379
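Editing that line on every host by hand is error-prone; a sketch using an Ansible ad-hoc command with the lineinfile module against the existing inventory (the group name and endpoint list are taken from the examples above and may need adjusting):

# ansible OSEv3 -b -i /etc/ansible/hosts -m lineinfile -a \
    "path=/etc/sysconfig/flanneld regexp='^FLANNEL_ETCD_ENDPOINTS=' line='FLANNEL_ETCD_ENDPOINTS=https://master-0.example.com:2379,https://master-1.example.com:2379,https://master-2.example.com:2379,https://etcd0.example.com:2379'"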
Restart the flanneld
service:
# systemctl restart flanneld.service
The following steps can be performed on any etcd member. If using the Ansible
installer, the first host provided in the [etcd]
Ansible inventory is used to
generate the etcd configuration and certificates stored in
/etc/etcd/generated_certs
, so perform the next steps in that etcd host.
Steps to be performed on the current etcd cluster
In order to create the etcd certificates, run the openssl
command with the
proper values. To make this process easier, create some environment variables:
export NEW_ETCD_HOSTNAME="etcd0.example.com"
export NEW_ETCD_IP="192.168.55.21"
export CN=$NEW_ETCD_HOSTNAME
export SAN="IP:${NEW_ETCD_IP}"
export PREFIX="/etc/etcd/generated_certs/etcd-$CN/"
export OPENSSLCFG="/etc/etcd/ca/openssl.cnf"
Create the directory where the configuration and certificates are stored:
# mkdir -p ${PREFIX}
Create the server certificate request and sign it:
# openssl req -new -config ${OPENSSLCFG} \
    -keyout ${PREFIX}server.key \
    -out ${PREFIX}server.csr \
    -reqexts etcd_v3_req -batch -nodes \
    -subj /CN=$CN
# openssl ca -name etcd_ca -config ${OPENSSLCFG} \
    -out ${PREFIX}server.crt \
    -in ${PREFIX}server.csr \
    -extensions etcd_v3_ca_server -batch
Create the peer certificate request and sign it:
# openssl req -new -config ${OPENSSLCFG} \
    -keyout ${PREFIX}peer.key \
    -out ${PREFIX}peer.csr \
    -reqexts etcd_v3_req -batch -nodes \
    -subj /CN=$CN
# openssl ca -name etcd_ca -config ${OPENSSLCFG} \
    -out ${PREFIX}peer.crt \
    -in ${PREFIX}peer.csr \
    -extensions etcd_v3_ca_peer -batch
Copy the current etcd configuration and ca.crt
files from the current node
as examples to be modified later:
# cp /etc/etcd/etcd.conf ${PREFIX}
# cp /etc/etcd/ca.crt ${PREFIX}
Add the new host to the etcd cluster. Note the new host is not configured yet
so the status stays as unstarted
until the new host is properly configured:
# etcdctl2 member add ${NEW_ETCD_HOSTNAME} https://${NEW_ETCD_IP}:2380
This command outputs the following variables:
ETCD_NAME="<NEW_ETCD_HOSTNAME>" ETCD_INITIAL_CLUSTER="<NEW_ETCD_HOSTNAME>=https://<NEW_HOST_IP>:2380,<CLUSTERMEMBER1_NAME>=https:/<CLUSTERMEMBER2_IP>:2380,<CLUSTERMEMBER2_NAME>=https:/<CLUSTERMEMBER2_IP>:2380,<CLUSTERMEMBER3_NAME>=https:/<CLUSTERMEMBER3_IP>:2380" ETCD_INITIAL_CLUSTER_STATE="existing"
Those values must be overwritten by the current ones in the sample
${PREFIX}/etcd.conf
file. Also, modify the following variables with the new
host IP (${NEW_ETCD_IP}
can be used) in that file:
ETCD_LISTEN_PEER_URLS
ETCD_LISTEN_CLIENT_URLS
ETCD_INITIAL_ADVERTISE_PEER_URLS
ETCD_ADVERTISE_CLIENT_URLS
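A sketch of making those substitutions with sed instead of a manual edit, reusing the NEW_ETCD_IP variable exported earlier (adjust the ports if non-default values are used):

# sed -i "s#^ETCD_LISTEN_PEER_URLS=.*#ETCD_LISTEN_PEER_URLS=https://${NEW_ETCD_IP}:2380#" ${PREFIX}/etcd.conf
# sed -i "s#^ETCD_LISTEN_CLIENT_URLS=.*#ETCD_LISTEN_CLIENT_URLS=https://${NEW_ETCD_IP}:2379#" ${PREFIX}/etcd.conf
# sed -i "s#^ETCD_INITIAL_ADVERTISE_PEER_URLS=.*#ETCD_INITIAL_ADVERTISE_PEER_URLS=https://${NEW_ETCD_IP}:2380#" ${PREFIX}/etcd.conf
# sed -i "s#^ETCD_ADVERTISE_CLIENT_URLS=.*#ETCD_ADVERTISE_CLIENT_URLS=https://${NEW_ETCD_IP}:2379#" ${PREFIX}/etcd.conf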
Modify the ${PREFIX}/etcd.conf file and check for syntax errors or missing IPs, otherwise the etcd service could fail:
# vi ${PREFIX}/etcd.conf
Once the file has been properly modified, a tgz
file with the certificates,
the sample configuration file, and the ca
is created and copied to the new
host:
# tar -czvf /etc/etcd/generated_certs/${CN}.tgz -C ${PREFIX} .
# scp /etc/etcd/generated_certs/${CN}.tgz ${CN}:/tmp/
Steps to be performed on the new etcd host
The new host is required to be subscribed to the proper Red Hat software channels as explained above in the prerequisites section.
Install iptables-services
to provide iptables utilities to open the required
ports for etcd:
# yum install -y iptables-services
Create firewall rules to allow etcd to communicate:
Port 2379/tcp for clients
Port 2380/tcp for peer communication
# systemctl enable iptables.service --now
# iptables -N OS_FIREWALL_ALLOW
# iptables -t filter -I INPUT -j OS_FIREWALL_ALLOW
# iptables -A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 2379 -j ACCEPT
# iptables -A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 2380 -j ACCEPT
# iptables-save | tee /etc/sysconfig/iptables
In this example, a new chain OS_FIREWALL_ALLOW is created, matching the chain name the OpenShift Origin installer uses for its firewall rules.
If the environment is hosted in an IaaS environment, modify the security groups for the instance to allow incoming traffic to those ports as well.
Install etcd software:
# yum install -y etcd
Ensure the service is not running:
# systemctl disable etcd --now
Remove any etcd configuration and data:
# rm -Rf /etc/etcd/*
# rm -Rf /var/lib/etcd/*
Untar the certificates and configuration files
# tar xzvf /tmp/etcd0.example.com.tgz -C /etc/etcd/
Restore etcd configuration and data owner:
# chown -R etcd.etcd /etc/etcd/
# chown -R etcd.etcd /var/lib/etcd/
Start etcd on the new host:
# systemctl enable etcd --now
Verify the host has been added to the cluster and the current cluster health:
# etcdctl --cert-file=/etc/etcd/peer.crt \ --key-file=/etc/etcd/peer.key \ --ca-file=/etc/etcd/ca.crt \ --peers="https://*master-0.example.com*:2379,\ https://*master-1.example.com*:2379,\ https://*master-2.example.com*:2379,\ https://*etcd0.example.com*:2379"\ cluster-health member 5ee217d19001 is healthy: got healthy result from https://192.168.55.12:2379 member 2a529ba1840722c0 is healthy: got healthy result from https://192.168.55.8:2379 member 8b8904727bf526a5 is healthy: got healthy result from https://192.168.55.21:2379 member ed4f0efd277d7599 is healthy: got healthy result from https://192.168.55.13:2379 cluster is healthy
Or, using etcd v3 API:
# ETCDCTL_API=3 etcdctl --cert="/etc/etcd/peer.crt" \ --key=/etc/etcd/peer.key \ --cacert="/etc/etcd/ca.crt" \ --endpoints="https://*master-0.example.com*:2379,\ https://*master-1.example.com*:2379,\ https://*master-2.example.com*:2379,\ https://*etcd0.example.com*:2379"\ endpoint health https://master-0.example.com:2379 is healthy: successfully committed proposal: took = 5.011358ms https://master-1.example.com:2379 is healthy: successfully committed proposal: took = 1.305173ms https://master-2.example.com:2379 is healthy: successfully committed proposal: took = 1.388772ms https://etcd0.example.com:2379 is healthy: successfully committed proposal: took = 1.498829ms
Steps to be performed on all OpenShift Origin masters
Modify the master configuration to add the new etcd host to the list of the
etcd servers OpenShift Origin uses to store the data, located in the
etcdClientInfo
section of the /etc/origin/master/master-config.yaml
file on
every master:
etcdClientInfo:
  ca: master.etcd-ca.crt
  certFile: master.etcd-client.crt
  keyFile: master.etcd-client.key
  urls:
    - https://master-0.example.com:2379
    - https://master-1.example.com:2379
    - https://master-2.example.com:2379
    - https://etcd0.example.com:2379
Restart the master API service on every master:
# systemctl restart atomic-openshift-master-api
Or, on a single master cluster installation
# systemctl restart atomic-openshift-master
The number of etcd nodes must be odd, so at least two hosts must be added.
If using Flannel, the flanneld
service configuration located at
/etc/sysconfig/flanneld
on every OpenShift Origin host must be modified to
include the new etcd host:
FLANNEL_ETCD_ENDPOINTS=https://master-0.example.com:2379,https://master-1.example.com:2379,https://master-2.example.com:2379,https://etcd0.example.com:2379
Restart the flanneld
service:
# systemctl restart flanneld.service
An etcd host can fail beyond restoration. This section walks through removing the failed etcd host from the cluster.
Ensure the etcd cluster maintains quorum while removing the etcd host, by removing a single host at a time from a cluster.
Steps to be performed on all masters hosts
Edit the failed etcd host out of the /etc/origin/master/master-config.yaml
master configuration file on every master:
etcdClientInfo:
  ca: master.etcd-ca.crt
  certFile: master.etcd-client.crt
  keyFile: master.etcd-client.key
  urls:
    - https://master-0.example.com:2379
    - https://master-1.example.com:2379
    - https://master-2.example.com:2379 (1)
(1) The host to be removed.
Restart the master API service on every master:
# systemctl restart atomic-openshift-master-api
Or, if using a single master cluster installation:
# systemctl restart atomic-openshift-master
Steps to be performed in the current etcd cluster
Remove the failed host from the cluster by running the following on a functioning etcd host:
# etcdctl2 cluster-health member 5ee217d19001 is healthy: got healthy result from https://192.168.55.12:2379 member 2a529ba1840722c0 is healthy: got healthy result from https://192.168.55.8:2379 failed to check the health of member 8372784203e11288 on https://192.168.55.21:2379: Get https://192.168.55.21:2379/health: dial tcp 192.168.55.21:2379: getsockopt: connection refused member 8372784203e11288 is unreachable: [https://192.168.55.21:2379] are all unreachable member ed4f0efd277d7599 is healthy: got healthy result from https://192.168.55.13:2379 cluster is healthy # etcdctl2 member remove 8372784203e11288 Removed member 8372784203e11288 from cluster # etcdctl2 cluster-health member 5ee217d19001 is healthy: got healthy result from https://192.168.55.12:2379 member 2a529ba1840722c0 is healthy: got healthy result from https://192.168.55.8:2379 member ed4f0efd277d7599 is healthy: got healthy result from https://192.168.55.13:2379 cluster is healthy
To ensure the etcd configuration does not use the failed host when the etcd
service is restarted, modify the /etc/etcd/etcd.conf
file on all remaining
etcd hosts and remove the failed host in the value for the
ETCD_INITIAL_CLUSTER
variable:
# vi /etc/etcd/etcd.conf
For example:
ETCD_INITIAL_CLUSTER=master-0.example.com=https://192.168.55.8:2380,master-1.example.com=https://192.168.55.12:2380,master-2.example.com=https://192.168.55.13:2380
becomes:
ETCD_INITIAL_CLUSTER=master-0.example.com=https://192.168.55.8:2380,master-1.example.com=https://192.168.55.12:2380
Restarting the etcd services is not required, because the failed host has been removed using etcdctl.
Modify the Ansible inventory file to reflect the current status of the cluster and to avoid issues if running a playbook:
[OSEv3:children]
masters
nodes
etcd

... [OUTPUT ABBREVIATED] ...

[etcd]
master-0.example.com
master-1.example.com
If using Flannel, modify the flanneld
service configuration located at
/etc/sysconfig/flanneld
on every host and remove the etcd host:
FLANNEL_ETCD_ENDPOINTS=https://master-0.example.com:2379,https://master-1.example.com:2379,https://master-2.example.com:2379
Restart the flanneld
service:
# systemctl restart flanneld.service
To replace an etcd host, first remove the etcd node from the cluster following the steps from Removing an etcd host, then scale up the etcd cluster with the new host using the scale up Ansible playbook or the manual procedure in Scaling etcd.
The etcd cluster should maintain quorum during the replacement operation. This means that a majority of the etcd hosts must remain in operation at all times. If the host replacement operation occurs while the etcd cluster maintains quorum, cluster operations are not affected, except when there is a large amount of etcd data to replicate, in which case some operations might be slowed down.
Ensure a backup of the etcd data and configuration files exists before any procedure involving the etcd cluster, to allow restoration in the case of failure.