
 Xen, XenAPI, XenServer and XCP

The recommended way to use Xen with OpenStack is through the XenAPI driver. To enable the XenAPI driver, add the following configuration options to /etc/nova/nova.conf and restart the nova-compute service:

compute_driver=xenapi.XenAPIDriver
xenapi_connection_url=http://your_xenapi_management_ip_address
xenapi_connection_username=root
xenapi_connection_password=your_password

The above connection details are used by the OpenStack Compute service to contact your hypervisor and are the same details you use to connect XenCenter, the XenServer management console, to your XenServer or XCP box. Note that these settings are generally unique to each hypervisor host, because using the host-internal management network IP address (169.254.0.1) will break features such as live migration.

OpenStack with XenAPI supports the following virtual machine image formats:

  • Raw

  • VHD (in a gzipped tarball)

It is possible to manage Xen using libvirt. This would be necessary for any Xen-based system that isn't using the XCP toolstack, such as SUSE Linux or Oracle Linux. Unfortunately, this is not well-tested or supported. To experiment with using Xen through libvirt, add the following configuration options to /etc/nova/nova.conf:

compute_driver=libvirt.LibvirtDriver
libvirt_type=xen

The rest of this section describes Xen, XCP, and XenServer, the differences between them, and how to use them with OpenStack. Xen's architecture is different from KVM's in important ways, and we discuss those differences and when each might make sense in your OpenStack cloud.

 Xen terminology

Xen is a hypervisor. It provides the fundamental isolation between virtual machines. Xen is open source (GPLv2) and is managed by Xen.org, a cross-industry organization.

Xen is a component of many different products and projects. The hypervisor itself is very similar across all these projects, but the way that it is managed can be different, which can cause confusion if you're not clear which tool stack you are using. Make sure you know what tool stack you want before you get started.

Xen Cloud Platform (XCP) is an open source (GPLv2) tool stack for Xen. It is designed specifically as a platform for enterprise and cloud computing, and is well integrated with OpenStack. XCP is available both as a binary distribution, installed from an ISO, and from Linux distributions, such as xcp-xapi in Ubuntu. The current versions of XCP available in Linux distributions do not yet include all the features available in the binary distribution of XCP.

Citrix XenServer is a commercial product. It is based on XCP, and exposes the same tool stack and management API. As an analogy, think of XenServer being based on XCP in the way that Red Hat Enterprise Linux is based on Fedora. XenServer has a free version (which is very similar to XCP) and paid-for versions with additional features enabled. Citrix provides support for XenServer, but as of July 2012, they do not provide any support for XCP. For a comparison between these products see the XCP Feature Matrix.

Both XenServer and XCP include Xen, Linux, and the primary control daemon known as xapi.

The API shared between XCP and XenServer is called XenAPI. OpenStack usually refers to XenAPI, to indicate that the integration works equally well on XCP and XenServer. Sometimes, a careless person will refer to XenServer specifically, but you can be reasonably confident that anything that works on XenServer will also work on the latest version of XCP. Read the XenAPI Object Model Overview for definitions of XenAPI specific terms such as SR, VDI, VIF and PIF.

 Privileged and unprivileged domains

A Xen host will run a number of virtual machines, VMs, or domains (the terms are synonymous on Xen). One of these is in charge of running the rest of the system, and is known as "domain 0", or "dom0". It is the first domain to boot after Xen, and owns the storage and networking hardware, the device drivers, and the primary control software. Any other VM is unprivileged, and is known as a "domU" or "guest". All customer VMs are, of course, unprivileged, but you should note that on Xen the OpenStack control software (nova-compute) also runs in a domU. This gives a level of security isolation between the privileged system software and the OpenStack software (much of which is customer-facing). This architecture is described in more detail later.

There is an ongoing project to split domain 0 into multiple privileged domains known as driver domains and stub domains. This would give even better separation between critical components. This technology is what powers Citrix XenClient RT, and is likely to be added into XCP in the next few years. However, the current architecture just has three levels of separation: dom0, the OpenStack domU, and the completely unprivileged customer VMs.

 Paravirtualized versus hardware virtualized domains

A Xen virtual machine can be paravirtualized (PV) or hardware virtualized (HVM). This refers to the interaction between Xen, domain 0, and the guest VM's kernel. PV guests are aware of the fact that they are virtualized and will co-operate with Xen and domain 0; this gives them better performance characteristics. HVM guests are not aware of their environment, and the hardware has to pretend that they are running on an unvirtualized machine. HVM guests have the advantage that there is no need to modify the guest operating system, which is essential when running Windows.

In OpenStack, customer VMs may run in either PV or HVM mode. However, the OpenStack domU (that's the one running nova-compute) must be running in PV mode.
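
Whether a customer image runs PV or HVM is typically controlled per image. As a hedged example, assuming your release honors the vm_mode image property (with values such as xen for PV and hvm for HVM; the property name and values may differ between releases):

$ glance image-update --property vm_mode=hvm <image-id>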

 XenAPI deployment architecture

When you deploy OpenStack on XCP or XenServer, you get an architecture similar to the one described below.

Key things to note:

  • The hypervisor: Xen

  • Domain 0: runs xapi and some small pieces from OpenStack (some xapi plugins and network isolation rules). The majority of this is provided by XenServer or XCP (or by you, if you are using Kronos).

  • OpenStack VM: The nova-compute code runs in a paravirtualized virtual machine, running on the host under management. Each host runs a local instance of nova-compute. It will often also be running nova-network (depending on your network mode). In this case, nova-network is managing the addresses given to the tenant VMs through DHCP.

  • Nova uses the XenAPI Python library to talk to xapi, and it uses the Management Network to reach from the domU to dom0 without leaving the host.

Some notes on the networking:

  • This architecture assumes FlatDHCP networking (the DevStack default); a minimal nova.conf sketch follows this list.

  • There are three main OpenStack Networks:

    • Management network - RabbitMQ, MySQL, etc. Note that the VM images are downloaded by the XenAPI plugins, so make sure that the images can be downloaded through the management network. This usually means binding those services to the management interface.

    • Tenant network - controlled by nova-network. The parameters of this network depend on the networking model selected (Flat, Flat DHCP, VLAN).

    • Public network - floating IPs, public API endpoints.

  • The networks shown here need to be connected to the corresponding physical networks within the datacenter. In the simplest case, three individual physical network cards could be used. It is also possible to use VLANs to separate these networks. Note that the selected configuration must be in line with the networking model selected for the cloud (in the case of VLAN networking, the physical channels have to be able to forward the tagged traffic).
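
As an illustration, the OpenStack VM's nova.conf for a FlatDHCP setup might include flags like the following. This is a minimal sketch only; the interface names, bridge name, and address range are assumptions and must match your environment:

network_manager=nova.network.manager.FlatDHCPManager
flat_network_bridge=xapi1
flat_interface=eth1
public_interface=eth2
fixed_range=10.0.0.0/24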

 XenAPI pools

The host-aggregates feature allows you to create pools of XenServer hosts, to enable live migration when using shared storage (configuring shared storage is still an out-of-band activity).
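
For example, assuming the aggregate commands available in your python-novaclient release (the aggregate and host names below are illustrative):

$ nova aggregate-create pool1 nova
$ nova aggregate-add-host <aggregate-id> xenserver-host2

With the XenAPI driver, adding hosts to such an aggregate is what builds the pool; the shared storage still has to be configured separately, as noted above.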

 Installing XenServer and XCP

When you want to run OpenStack with XCP or XenServer, you first need to install the software on an appropriate server. Note that Xen is a type 1 hypervisor: when your server starts, the first software that runs is Xen. Consequently, the software you install on your compute host is XenServer or XCP, not the operating system you wish to run the OpenStack code on. The OpenStack services will run in a VM you install on top of XenServer.

Before you can install your system you must decide if you want to install Citrix XenServer (either the free edition, or one of the paid editions) or Xen Cloud Platform from Xen.org. You can download the software from Citrix or Xen.org, respectively.

When installing many servers, you may find it easier to perform PXE boot installations of XenServer or XCP. You can also package up any post install changes you wish to make to your XenServer by creating your own XenServer supplemental pack.

It is also possible to get XCP by installing the xcp-xapi package on Debian-based distributions. However, this is not as mature or feature-complete as the above distributions. Installing the package will modify your boot loader to boot Xen first, and then boot your existing OS on top of Xen as Dom0. It is in Dom0 that the xapi daemon will run. You can find more details on the Xen.org wiki: http://wiki.xen.org/wiki/Project_Kronos
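
For example, on Ubuntu this might look like the following (a minimal sketch; package names and steps vary by distribution and release):

$ sudo apt-get install xcp-xapi
$ sudo reboot

The reboot is needed so that the machine comes back up under Xen, with the existing OS running as Dom0.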

Important

Ensure you are using the EXT type of storage repository (SR). Features that require access to VHD files (such as copy on write, snapshot and migration) do not work when using the LVM SR. Storage repository (SR) is a XenAPI specific term relating to the physical storage on which virtual disks are stored.

On the XenServer/XCP installation screen, this is selected by choosing the "XenDesktop Optimized" option. If you are using an answer file, make sure you use srtype="ext" within the installation tag of the answer file.
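
To verify the SR type after installation, you can list the storage repositories from Dom0 (a quick sketch; the name-labels depend on your installation):

# xe sr-list params=name-label,type

An EXT-backed SR reports type "ext", while the default LVM-backed SR reports "lvm".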

 Post install steps

You are now ready to install OpenStack onto your XenServer system. This process involves the following steps:

  • For resize and migrate functionality, please perform the changes described in the Configuring Resize section of the OpenStack Compute Administration Manual.

  • Install the VIF isolation rules to help prevent MAC and IP address spoofing.

  • Install the XenAPI plugins - see the next section.

  • In order to support AMI type images, you need to set up a /boot/guest symlink or directory in Dom0. For detailed instructions, see the next section.

  • To support resize/migration, set up an SSH trust relationship between your XenServer hosts, and ensure /images is properly set up. See the next section for more details.

  • Create a paravirtualized virtual machine that can run the OpenStack compute code.

  • Install and configure nova-compute in the above virtual machine.

For further information on these steps, look at how DevStack performs the last three steps when doing developer deployments. For more information on DevStack, take a look at the DevStack and XenServer Readme. More information on the first step can be found in the XenServer multi-tenancy protection doc. More information on how to install the XenAPI plugins can be found in the XenAPI plugins Readme.

 Installing the XenAPI Plugins

When using Xen as the hypervisor for OpenStack Compute, you can install scripts (usually Python scripts, but any executable will do) on the host side, and then call them through the XenAPI. These scripts are called plugins. The XenAPI plugins live in the nova code repository. These plugins have to be copied to the hypervisor's Dom0, into the directory where xapi can find them. There are several options for the installation. The important thing is to ensure that the version of the plugins is in line with the nova installation, by only installing plugins from a matching nova repository.
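
Whichever installation method you choose, the end result should be that the plugin scripts are present, and executable, in Dom0's xapi plugin directory (the default /etc/xapi.d/plugins/ path is assumed here). A quick check:

# ls -l /etc/xapi.d/plugins/

xapi can only run plugins that are marked executable.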

Manual Installation:

  • Create temporary files/directories:

    $ NOVA_ZIPBALL=$(mktemp)
    $ NOVA_SOURCES=$(mktemp -d)
    

  • Get the source from github. The example assumes the master branch is used, please amend the URL to match the version being used:

    $ wget -qO "$NOVA_ZIPBALL" https://github.com/openstack/nova/archive/master.zip
    $ unzip "$NOVA_ZIPBALL" -d "$NOVA_SOURCES"
    

    (Alternatively) Should you wish to use the official Ubuntu packages, use the following commands to get the nova codebase:

    $ ( cd $NOVA_SOURCES && apt-get source python-nova --download-only )
    $ ( cd $NOVA_SOURCES && for ARCHIVE in *.tar.gz; do tar -xzf $ARCHIVE; done )
    

  • Copy the plugins to the hypervisor:

    $ PLUGINPATH=$(find $NOVA_SOURCES -path '*/xapi.d/plugins' -type d -print)
    $ tar -czf - -C "$PLUGINPATH" ./ | ssh root@xenserver tar -xozf - -C /etc/xapi.d/plugins/
    

  • Remove the temporary files/directories:

    $ rm "$NOVA_ZIPBALL"
    $ rm -rf "$NOVA_SOURCES"
    

Packaged Installation:

Follow these steps to build RPM packages from the nova sources and package them as a XenServer Supplemental Pack.

  • Create the RPM packages. Assuming you have the nova sources (obtained using one of the methods mentioned under Manual Installation):

    $ cd nova/plugins/xenserver/xenapi/contrib
    $ ./build-rpm.sh
    

    The above commands should leave an .rpm file in the rpmbuild/RPMS/noarch/ directory.

  • Pack the RPM packages into a Supplemental Pack, using the XenServer DDK (the following command should be issued on the XenServer DDK virtual appliance, after the produced rpm file has been copied over):

    $ /usr/bin/build-supplemental-pack.sh \
    > --output=output_directory \
    > --vendor-code=novaplugin \
    > --vendor-name=openstack \
    > --label=novaplugins \
    > --text="nova plugins" \
    > --version=0 \
    > full_path_to_rpmfile
    

    The above command should produce an .iso file in the output directory specified. Copy that file to the hypervisor.

  • Install the Supplemental Pack. Log in to the hypervisor, and issue:

    # xe-install-supplemental-pack path_to_isofile
    

 Prepare for AMI Type Images

In order to support AMI type images within your OpenStack installation, a directory /boot/guest needs to be created inside Dom0. The OpenStack VM will put the kernel and ramdisk extracted from the AKI and ARI images into this location.

This directory's content will be maintained by OpenStack, and its size should not increase during normal operation. However, in case of power failures or accidental shutdowns, some files might be left over. In order to prevent these files from filling up Dom0's disk, it is recommended to set up this directory as a symlink pointing to a subdirectory of the local SR.

Execute the following commands in Dom0 to achieve the above mentioned setup:

# LOCAL_SR=$(xe sr-list name-label="Local storage" --minimal)
# LOCALPATH="/var/run/sr-mount/$LOCAL_SR/os-guest-kernels"
# mkdir -p "$LOCALPATH"
# ln -s "$LOCALPATH" /boot/guest

 Dom0 Modifications for Resize/Migration Support

To get resize to work with XenServer (and XCP) you need to:

  • Establish a root trust between all hypervisor nodes of your deployment:

    You can do so by generating an SSH key pair (with ssh-keygen) and then ensuring that each Dom0's authorized_keys file (located in /root/.ssh/authorized_keys) contains the public keys (found in /root/.ssh/id_rsa.pub) of the other Dom0s. For example:
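
    The following sketch distributes one Dom0's key to another; the hostname is illustrative, and the same needs to be done between every pair of hosts in the deployment:

    # ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa
    # cat /root/.ssh/id_rsa.pub | ssh root@other-xenserver "cat >> /root/.ssh/authorized_keys"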

  • Provide an /images mount point to your hypervisor's dom0:

    Dom0 space is at a premium, so creating a directory in Dom0 is risky and almost certainly bound to fail, especially when resizing big servers. The least you can do is symlink /images to your local storage SR. The instructions below work for an English-based installation of XenServer (and XCP), and for an ext3-based SR (with which the resize functionality is known to work correctly).

    # LOCAL_SR=$(xe sr-list name-label="Local storage" --minimal)
    # IMG_DIR="/var/run/sr-mount/$LOCAL_SR/images"
    # mkdir -p "$IMG_DIR"
    # ln -s "$IMG_DIR" /images
    

 Xen Boot from ISO

XenServer, through the XenAPI integration with OpenStack, provides a feature to boot instances from an ISO file. In order to activate the "Boot From ISO" feature, the SR elements on the XenServer host must be configured as described below.

First, create an ISO-typed SR, such as an NFS ISO library. Using XenCenter is a simple way to do this. You need to export an NFS volume from a remote NFS server. Make sure it is exported in read-write mode.
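
An illustrative export line on the NFS server (the path and options are assumptions; read-write access is the important part) could look like this:

/srv/iso-library *(rw,sync,no_root_squash)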

Second, on the compute host, find the uuid of this host and write it down.

# xe host-list

Next, locate the uuid of the NFS ISO library:

# xe sr-list content-type=iso

Set the uuid and configuration. Even if an NFS mount point isn't local storage, you must specify "local-storage-iso".

# xe sr-param-set uuid=[iso sr uuid] other-config:i18n-key=local-storage-iso

Now, make sure the host-uuid from "xe pbd-list" equals the uuid of the host you found earlier:

# xe pbd-list sr-uuid=[iso sr uuid]

You should now be able to add images via the OpenStack Image Registry, with disk-format=iso, and boot them in OpenStack Compute.

$ glance image-create --name=fedora_iso --disk-format=iso --container-format=bare < Fedora-16-x86_64-netinst.iso
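
You can then boot an instance from that image as usual, for example (a sketch; the flavor and instance name are illustrative):

$ nova boot --flavor m1.small --image fedora_iso iso-test-instance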

 Further reading

Here are some of the resources available to learn more about Xen:

 Xen Configuration Reference

The following table provides a complete reference of all configuration options available for configuring Xen with OpenStack.

Table 9.2. Description of configuration options for xen
Configuration option=Default value (Type) Description
agent_resetnetwork_timeout=60 (IntOpt) number of seconds to wait for agent reply to resetnetwork request
agent_timeout=30 (IntOpt) number of seconds to wait for agent reply
agent_version_timeout=300 (IntOpt) number of seconds to wait for agent to be fully operational
cache_images=all (StrOpt) Cache glance images locally. `all` will cache all images, `some` will only cache images that have the image_property `cache_in_nova=True`, and `none` turns off caching entirely
console_driver=nova.console.xvp.XVPConsoleProxy (StrOpt) Driver to use for the console proxy
console_vmrc_error_retries=10 (IntOpt) number of retries for retrieving VMRC information
console_vmrc_port=443 (IntOpt) port for VMware VMRC connections
console_xvp_conf=/etc/xvp.conf (StrOpt) generated XVP conf file
console_xvp_conf_template=$pybasedir/nova/console/xvp.conf.template (StrOpt) XVP conf template
console_xvp_log=/var/log/xvp.log (StrOpt) XVP log file
console_xvp_multiplex_port=5900 (IntOpt) port for XVP to multiplex VNC connections on
console_xvp_pid=/var/run/xvp.pid (StrOpt) XVP master process pid file
default_os_type=linux (StrOpt) Default OS type
iqn_prefix=iqn.2010-10.org.openstack (StrOpt) IQN Prefix
max_kernel_ramdisk_size=16777216 (IntOpt) Maximum size in bytes of kernel or ramdisk images
sr_matching_filter=other-config:i18n-key=local-storage (StrOpt) Filter for finding the SR to be used to install guest instances on. The default value is the Local Storage in default XenServer/XCP installations. To select an SR with a different matching criteria, you could set it to other-config:my_favorite_sr=true. On the other hand, to fall back on the Default SR, as displayed by XenCenter, set this flag to: default-sr:true
stub_compute=False (BoolOpt) Stub calls to compute worker for tests
target_host=None (StrOpt) iSCSI Target Host
target_port=3260 (StrOpt) iSCSI Target Port, 3260 Default
use_join_force=True (BoolOpt) To use for hosts with different CPUs
xen_hvmloader_path=/usr/lib/xen/boot/hvmloader (StrOpt) Location where the Xen hvmloader is kept
xenapi_agent_path=usr/sbin/xe-update-networking (StrOpt) Specifies the path in which the xenapi guest agent should be located. If the agent is present, network configuration is not injected into the image. Used if compute_driver=xenapi.XenAPIDriver and flat_injected=True
xenapi_check_host=True (BoolOpt) Ensure compute service is running on host XenAPI connects to.
xenapi_connection_concurrent=5 (IntOpt) Maximum number of concurrent XenAPI connections. Used only if compute_driver=xenapi.XenAPIDriver
xenapi_connection_password=None (StrOpt) Password for connection to XenServer/Xen Cloud Platform. Used only if compute_driver=xenapi.XenAPIDriver
xenapi_connection_url=None (StrOpt) URL for connection to XenServer/Xen Cloud Platform. Required if compute_driver=xenapi.XenAPIDriver
xenapi_connection_username=root (StrOpt) Username for connection to XenServer/Xen Cloud Platform. Used only if compute_driver=xenapi.XenAPIDriver
xenapi_disable_agent=False (BoolOpt) Disable XenAPI agent. Reduces the amount of time it takes nova to detect that a VM has started, when that VM does not have the agent installed
xenapi_image_upload_handler=nova.virt.xenapi.imageupload.glance.GlanceStore (StrOpt) Object Store Driver used to handle image uploads.
xenapi_login_timeout=10 (IntOpt) Timeout in seconds for XenAPI login.
xenapi_num_vbd_unplug_retries=10 (IntOpt) Maximum number of retries to unplug VBD
xenapi_ovs_integration_bridge=xapi1 (StrOpt) Name of Integration Bridge used by Open vSwitch
xenapi_remap_vbd_dev=False (BoolOpt) Used to enable the remapping of VBD dev (Works around an issue in Ubuntu Maverick)
xenapi_remap_vbd_dev_prefix=sd (StrOpt) Specify prefix to remap VBD dev to (ex. /dev/xvdb -> /dev/sdb)
xenapi_running_timeout=60 (IntOpt) number of seconds to wait for instance to go to running state
xenapi_sparse_copy=True (BoolOpt) Whether to use sparse_copy for copying data on a resize down (False will use standard dd). This speeds up resizes down considerably since large runs of zeros won't have to be rsynced
xenapi_sr_base_path=/var/run/sr-mount (StrOpt) Base path to the storage repository
xenapi_torrent_base_url=None (StrOpt) Base URL for torrent files.
xenapi_torrent_download_stall_cutoff=600 (IntOpt) Number of seconds a download can remain at the same progress percentage w/o being considered a stall
xenapi_torrent_images=none (StrOpt) Whether or not to download images via Bit Torrent (all|some|none).
xenapi_torrent_listen_port_end=6891 (IntOpt) End of port range to listen on
xenapi_torrent_listen_port_start=6881 (IntOpt) Beginning of port range to listen on
xenapi_torrent_max_last_accessed=86400 (IntOpt) Cached torrent files not accessed within this number of seconds can be reaped
xenapi_torrent_max_seeder_processes_per_host=1 (IntOpt) Maximum number of seeder processes to run concurrently within a given dom0. (-1 = no limit)
xenapi_torrent_seed_chance=1.0 (FloatOpt) Probability that peer will become a seeder. (1.0 = 100%)
xenapi_torrent_seed_duration=3600 (IntOpt) Number of seconds after downloading an image via BitTorrent that it should be seeded for other peers.
xenapi_vhd_coalesce_max_attempts=5 (IntOpt) Max number of times to poll for VHD to coalesce. Used only if compute_driver=xenapi.XenAPIDriver
xenapi_vhd_coalesce_poll_interval=5.0 (FloatOpt) The interval used for polling of coalescing vhds. Used only if compute_driver=xenapi.XenAPIDriver
xenapi_vif_driver=nova.virt.xenapi.vif.XenAPIBridgeDriver (StrOpt) The XenAPI VIF driver using XenServer Network APIs.
