Note: Only cloud administrators can perform live migrations. If your cloud is configured to use cells, you can perform live migration within but not between cells.
Migration enables an administrator to move a virtual-machine instance from one compute host to another. This feature is useful when a compute host requires maintenance. Migration can also be useful to redistribute the load when many VM instances are running on a specific physical machine.
The migration types are:
Non-live migration (also referred to simply as migration). The instance is shut down for a period of time while it is moved to another hypervisor; the instance recognizes that it was rebooted.
Live migration (or true live migration). Almost no instance downtime. Useful when the instances must be kept running during the migration. The types of live migration are:
Shared storage-based live migration. Both hypervisors have access to shared storage.
Block live migration. No shared storage is required. Incompatible with read-only devices such as CD-ROMs and Configuration Drive (config_drive).
Volume-backed live migration. When instances are backed by volumes rather than ephemeral disk, no shared storage is required, and migration is supported (currently only in libvirt-based hypervisors).
The following sections describe how to configure your hosts and compute nodes for migrations by using the KVM and XenServer hypervisors.
Prerequisites
Hypervisor: KVM with libvirt
Shared storage: NOVA-INST-DIR/instances/ (for example, /var/lib/nova/instances) has to be mounted by shared storage. This guide uses NFS, but other options, including the OpenStack Gluster Connector, are available.
Instances: Instances can be migrated with iSCSI-based volumes.
Prepare at least three servers; for example, HostA, HostB, and HostC:
HostA is the Cloud Controller, and should run these services: nova-api, nova-scheduler, nova-network, cinder-volume, and nova-objectstore.
HostB and HostC are the compute nodes that run nova-compute.
Ensure that NOVA-INST-DIR (set with state_path in the nova.conf file) is the same on all hosts.
In this example, HostA is the NFSv4 server that exports NOVA-INST-DIR/instances, and HostB and HostC mount it.
Procedure 4.4. To configure your system
Configure your DNS or /etc/hosts and ensure it is consistent across all hosts. Make sure that the three hosts can perform name resolution with each other. As a test, use the ping command to ping each host from the others.
$ ping HostA
$ ping HostB
$ ping HostC
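If the hosts are not in DNS, a minimal /etc/hosts might contain entries such as the following; the IP addresses shown are placeholders and must be replaced with the addresses of your own hosts:
192.168.0.10 HostA
192.168.0.11 HostB
192.168.0.12 HostC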
Ensure that the UID and GID of your Compute and libvirt users are identical on each of your servers. This ensures that the permissions on the NFS mount work correctly.
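One way to compare them is to run the id command for the relevant users on every host and check that the uid and gid values match; the nova and libvirt-qemu user names below are typical for Ubuntu packaging and may differ on your distribution:
$ id nova
$ id libvirt-qemu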
Export NOVA-INST-DIR/instances from HostA, and have it readable and writable by the Compute user on HostB and HostC.
For more information, see: SettingUpNFSHowTo or CentOS / Red Hat: Setup NFS v4.0 File Server.
Configure the NFS server at HostA by adding the following line to the /etc/exports file:
NOVA-INST-DIR/instances HostA/255.255.0.0(rw,sync,fsid=0,no_root_squash)
Change the subnet mask (255.255.0.0) to the appropriate value to include the IP addresses of HostB and HostC. Then restart the NFS server:
# /etc/init.d/nfs-kernel-server restart
# /etc/init.d/idmapd restart
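Before configuring the clients, you can confirm on HostA that the export is active; exportfs and showmount are standard NFS utilities:
# exportfs -v
$ showmount -e HostA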
Set the 'execute/search' bit on your shared directory.
On both compute nodes, make sure to enable the 'execute/search' bit to allow qemu to use the images within the directories. On all hosts, run the following command:
$ chmod o+x NOVA-INST-DIR/instances
Configure NFS at HostB and HostC by adding the following line to the /etc/fstab file:
HostA:/ /NOVA-INST-DIR/instances nfs4 defaults 0 0
Ensure that the exported directory can be mounted:
$ mount -a -v
Check that HostA can see the "NOVA-INST-DIR/instances/" directory:
$ ls -ld NOVA-INST-DIR/instances/
drwxr-xr-x 2 nova nova 4096 2012-05-19 14:34 nova-install-dir/instances/
Perform the same check at HostB and HostC, paying special attention to the permissions (Compute should be able to write):
$ ls -ld NOVA-INST-DIR/instances/
drwxr-xr-x 2 nova nova 4096 2012-05-07 14:34 nova-install-dir/instances/
$ df -k
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sda1            921514972   4180880 870523828   1% /
none                  16498340      1228  16497112   1% /dev
none                  16502856         0  16502856   0% /dev/shm
none                  16502856       368  16502488   1% /var/run
none                  16502856         0  16502856   0% /var/lock
none                  16502856         0  16502856   0% /lib/init/rw
HostA:               921515008 101921792 772783104  12% /var/lib/nova/instances  ( <--- this line is important.)
Update the libvirt configuration so that the calls can be made securely. These methods enable remote access over TCP and are not documented here; consult your network administrator for assistance in deciding how to configure access. A minimal configuration sketch for the plain TCP socket option follows the list below. The options are:
SSH tunnel to libvirtd's UNIX socket
libvirtd TCP socket, with GSSAPI/Kerberos for auth+data encryption
libvirtd TCP socket, with TLS for encryption and x509 client certs for authentication
libvirtd TCP socket, with TLS for encryption and Kerberos for authentication
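As a sketch of the plain TCP socket option only (it is unauthenticated and should be used only on a trusted management network; the other options above are more secure), libvirtd can be configured to listen on TCP in /etc/libvirt/libvirtd.conf:
listen_tls = 0
listen_tcp = 1
auth_tcp = "none"
On Ubuntu-style packages, the daemon must also be started with the listen flag, for example in /etc/default/libvirt-bin:
libvirtd_opts="-d -l"
File locations and package names vary by distribution.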
Restart libvirt. After you run the command, ensure that libvirt is successfully restarted:
# stop libvirt-bin && start libvirt-bin
$ ps -ef | grep libvirt
root 1145 1 0 Nov27 ? 00:00:03 /usr/sbin/libvirtd -d -l
Configure your firewall to allow libvirt to communicate between nodes.
By default, libvirt listens on TCP port 16509, and an ephemeral TCP range from 49152 to 49261 is used for the KVM communications. Depending on the remote access TCP configuration you chose, be careful about which ports you open and understand who has access to them. For information about ports that are used with libvirt, see the libvirt documentation.
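For example, on hosts that manage iptables directly, rules similar to the following open the libvirt daemon port and the migration port range mentioned above; this is a sketch only, and you should restrict the source addresses to your compute hosts rather than accepting connections from anywhere:
# iptables -A INPUT -p tcp --dport 16509 -j ACCEPT
# iptables -A INPUT -p tcp --dport 49152:49261 -j ACCEPT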
You can now configure options for live migration. In most cases, you do not need to configure any options. The following chart is for advanced usage only.
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
live_migration_retry_count = 30 | (IntOpt) Number of 1 second retries needed in live_migration |
[libvirt] | |
live_migration_bandwidth = 0 | (IntOpt) Maximum bandwidth to be used during migration, in Mbps |
live_migration_flag = VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER | (StrOpt) Migration flags to be set for live migration |
live_migration_uri = qemu+tcp://%s/system | (StrOpt) Migration target URI (any included "%s" is replaced with the migration target hostname) |
By default, the Compute service does not use the libvirt live migration functionality. To enable this functionality, add the following line to the [libvirt] section of the nova.conf file:
live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE
The Compute service does not use libvirt's live migration by default because there is a risk that the migration process never ends. This can happen if the guest operating system dirties blocks on the disk faster than they can be migrated.
Prerequisites
Compatible XenServer hypervisors. For more information, see the Requirements for Creating Resource Pools section of the XenServer Administrator's Guide.
Shared storage. An NFS export, visible to all XenServer hosts.
Note: For the supported NFS versions, see the NFS VHD section of the XenServer Administrator's Guide.
To use shared storage live migration with XenServer hypervisors, the hosts must be joined to a XenServer pool. To create that pool, a host aggregate must be created with special metadata. This metadata is used by the XAPI plug-ins to establish the pool.
Procedure 4.5. To use shared storage live migration with XenServer hypervisors
Add an NFS VHD storage to your master XenServer, and set it as default SR. For more information, please refer to the NFS VHD section in the XenServer Administrator's Guide.
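For illustration, an NFS SR can be created and made the default from the XenServer command line roughly as follows; the server address, export path, and UUIDs are placeholders, and the XenServer Administrator's Guide remains the authoritative reference for the exact parameters:
$ xe sr-create host-uuid=<host-uuid> content-type=user name-label=nfs-sr shared=true type=nfs device-config:server=<nfs-server> device-config:serverpath=<export-path>
$ xe pool-param-set uuid=<pool-uuid> default-SR=<sr-uuid>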
Configure all the compute nodes to use the default SR for pool operations. Add this line to your nova.conf configuration files across your compute nodes:
sr_matching_filter=default-sr:true
Create a host aggregate:
$ nova aggregate-create <name-for-pool> <availability-zone>
The command displays a table that contains the ID of the newly created aggregate.
Now add special metadata to the aggregate, to mark it as a hypervisor pool:
$ nova aggregate-set-metadata <aggregate-id> hypervisor_pool=true
$ nova aggregate-set-metadata <aggregate-id> operational_state=created
Make the first compute node part of that aggregate:
$ nova aggregate-add-host <aggregate-id> <name-of-master-compute>
At this point, the host is part of a XenServer pool.
Add additional hosts to the pool:
$ nova aggregate-add-host <aggregate-id> <compute-host-name>
Note: At this point, the added compute node and the host are shut down in order to join the host to the XenServer pool. The operation fails if any server other than the compute node is running or suspended on the host.
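Putting the procedure together, a complete sequence might look like the following; the pool name, availability zone, aggregate ID, and host names are illustrative placeholders only:
$ nova aggregate-create xenserver-pool nova
$ nova aggregate-set-metadata 1 hypervisor_pool=true
$ nova aggregate-set-metadata 1 operational_state=created
$ nova aggregate-add-host 1 compute1
$ nova aggregate-add-host 1 compute2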
Prerequisites
Compatible XenServer hypervisors. The hypervisors must support the Storage XenMotion feature. See your XenServer manual to make sure your edition has this feature.