Chapter 3. OpenStack Linux image requirements

For a Linux-based image to have full functionality in an OpenStack Compute cloud, there are a few requirements. For some of these, the requirement can be fulfilled by installing the cloud-init package. You should read this section before creating your own image to be sure that the image supports the OpenStack features you plan on using.

  • Disk partitions and resize root partition on boot (cloud-init)

  • No hard-coded MAC address information

  • SSH server running

  • Disable firewall

  • Access instance using ssh public key (cloud-init)

  • Process user data and other metadata (cloud-init)

  • Paravirtualized Xen support in Linux kernel (Xen hypervisor only with Linux kernel version < 3.0)

 Disk partitions and resize root partition on boot (cloud-init)

When you create a new Linux image, the first decision you will need to make is how to partition the disks. The choice of partition method can affect the resizing functionality, as described below.

The size of the disk in a virtual machine image is determined when you initially create the image. However, OpenStack lets you launch instances with different size drives by specifying different flavors. For example, if your image was created with a 5 GB disk, and you launch an instance with a flavor of m1.small, the resulting virtual machine instance will have (by default) a primary disk of 10 GB. When an instance's disk is resized up, the additional space is simply zero-filled at the end of the disk.

Your image needs to be able to resize its partitions on boot to match the size requested by the user. Otherwise, whenever the disk size associated with the flavor exceeds the disk size your image was created with, you will need to manually resize the partitions after the instance boots in order to access the additional storage.

 Xen: 1 ext3/ext4 partition (no LVM, no /boot, no swap)

If you are using the OpenStack XenAPI driver, the Compute service will automatically adjust the partition and filesystem for your instance on boot. Automatic resize will occur if the following are all true:

  • auto_disk_config=True in nova.conf.

  • The disk on the image has only one partition.

  • The file system on the one partition is ext3 or ext4.

Therefore, if you are using Xen, we recommend that when you create your images, you create a single ext3 or ext4 partition (not managed by LVM). Otherwise, read on.
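For the first condition, a minimal nova.conf fragment might look like the following (the section placement is an assumption; check your deployment's configuration layout, and note that the other two conditions concern the image itself rather than configuration):

```ini
[DEFAULT]
# Let the XenAPI driver resize the instance's partition and
# filesystem automatically on boot
auto_disk_config=True
```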

 Non-Xen with cloud-init/cloud-tools: 1 ext3/ext4 partition (no LVM, no /boot, no swap)

Your image must be configured to deal with two issues:

  • The image's partition table describes the original size of the image

  • The image's filesystem fills the original size of the image

Then, during the boot process:

  • the partition table must be modified to account for the additional space

    • If you are not using LVM, you must modify the table to extend the existing root partition to encompass this additional space

    • If you are using LVM, you can add a new LVM entry to the partition table, create a new LVM physical volume, add it to the volume group, and extend the logical volume that contains the root filesystem

  • the root volume filesystem must be resized

The simplest way to support this in your image is to install the cloud-utils package (which contains the growpart tool for extending partitions), the cloud-initramfs-tools package (which supports resizing the root partition on first boot), and the cloud-init package into your image. With these installed, the image will perform the root partition resize on boot (e.g., in /etc/rc.local). These packages are in the Ubuntu and Debian package repositories, as well as the EPEL repository (for Fedora/RHEL/CentOS/Scientific Linux guests).
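On an Ubuntu or Debian guest, for example, the packages can be installed with apt; the Fedora/RHEL equivalent is shown as a comment. Package names vary somewhat between releases (cloud-initramfs-growroot is the Ubuntu name for the resize-on-boot piece of cloud-initramfs-tools), so treat these as assumptions to verify against your distribution:

```shell
# Inside an Ubuntu/Debian guest image, as root:
apt-get update
apt-get install -y cloud-init cloud-utils cloud-initramfs-growroot

# On Fedora/RHEL/CentOS guests (with EPEL enabled), roughly:
#   yum install cloud-init cloud-utils dracut-modules-growroot
```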

If you are not able to install cloud-initramfs-tools, Robert Plestenjak has a GitHub project called centos-image-resize that contains scripts that update a ramdisk using growpart so that the image will resize properly on boot.

If you are able to install the cloud-utils and cloud-init packages, we recommend that when you create your images, you create a single ext3 or ext4 partition (not managed by LVM).

 Non-Xen without cloud-init/cloud-tools: LVM

If you cannot install cloud-init and cloud-tools inside of your guest, and you want to support resize, you will need to write a script that your image runs on boot to modify the partition table. In this case, we recommend using LVM to manage your partitions. Due to a limitation in the Linux kernel (as of this writing), you cannot modify the partition table of a raw disk that has a partition currently mounted, but you can do this with LVM.

Your script will need to do something like the following:

  1. Detect if there is any additional space on the disk (e.g., parsing output of parted /dev/sda --script "print free")

  2. Create a new LVM partition with the additional space (e.g., parted /dev/sda --script "mkpart lvm ...")

  3. Create a new physical volume (e.g., pvcreate /dev/sda6)

  4. Extend the volume group with this physical partition (e.g., vgextend vg00 /dev/sda6)

  5. Extend the logical volume containing the root partition by the amount of additional space (e.g., lvextend /dev/mapper/node-root /dev/sda6)

  6. Resize the root file system (e.g., resize2fs /dev/mapper/node-root).
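Putting the steps together, a boot-time script might look like the following sketch. The device (/dev/sda), volume group (vg00), and logical volume (/dev/mapper/node-root) names are taken from the examples above; treat them, and the free-space detection logic, as assumptions to adapt to your image.

```shell
#!/bin/sh
# Sketch of a boot-time resize script for an LVM-based image.
# Device, VG, and LV names are assumptions from the examples above.
DISK=/dev/sda
VG=vg00
ROOT_LV=/dev/mapper/node-root

# 1. Detect free space at the end of the disk
FREE_START=$(parted "$DISK" --script unit MB print free \
    | awk '/Free Space/ { last=$1 } END { print last }')
if [ -z "$FREE_START" ]; then
    echo "No additional space on $DISK; nothing to do"
    exit 0
fi

# 2. Create a new partition spanning the free space
parted "$DISK" --script "mkpart lvm ${FREE_START} 100%"
partprobe "$DISK"
NEWPART=$(ls "${DISK}"[0-9]* | sort -V | tail -1)   # e.g. /dev/sda6

# 3. Initialize the new partition as an LVM physical volume
pvcreate "$NEWPART"

# 4. Add the new physical volume to the volume group
vgextend "$VG" "$NEWPART"

# 5. Extend the root logical volume into the new space
lvextend "$ROOT_LV" "$NEWPART"

# 6. Grow the root filesystem to fill the logical volume
resize2fs "$ROOT_LV"
```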

You do not need to have a /boot partition, unless your image is an older Linux distribution that requires that /boot not be managed by LVM. You may elect to use a swap partition or not.

 No hard-coded MAC address information

You must remove the network persistence rules in the image as their presence will result in the network interface in the instance coming up as an interface other than eth0. This is because your image has a record of the MAC address of the network interface card when it was first installed, and this MAC address will be different each time the instance boots up. You should alter the following files:

  • Replace /etc/udev/rules.d/70-persistent-net.rules with an empty file (contains network persistence rules, including MAC address)

  • Replace /lib/udev/rules.d/75-persistent-net-generator.rules with an empty file (this generates the file above)

  • Remove the HWADDR line from /etc/sysconfig/network-scripts/ifcfg-eth0 on Fedora-based images

Note

If you delete the network persistent rules files, you may get a udev kernel warning at boot time, which is why we recommend replacing them with empty files instead.
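The cleanup can be sketched as a small shell function, using truncation rather than deletion as the note recommends. It takes the filesystem root as an argument so it can also be run against a mounted image tree; pass "" when running inside the guest itself. The function name is hypothetical; the paths are the ones listed above.

```shell
# Sketch: truncate udev network persistence rules and strip the
# hard-coded MAC address. $1 is the filesystem root ("" in the guest).
clean_net_persistence() {
    root=$1
    # Replace the rules files with empty files (do not delete them)
    : > "$root/etc/udev/rules.d/70-persistent-net.rules"
    : > "$root/lib/udev/rules.d/75-persistent-net-generator.rules"
    # Fedora-based images record the MAC in ifcfg-eth0
    if [ -f "$root/etc/sysconfig/network-scripts/ifcfg-eth0" ]; then
        sed -i '/^HWADDR=/d' "$root/etc/sysconfig/network-scripts/ifcfg-eth0"
    fi
}
```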

 Ensure ssh server runs

You must install an ssh server into the image and ensure that it starts up on boot, or you will not be able to connect to your instance using ssh when it boots inside of OpenStack. This package is typically called openssh-server.
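For example, on a Debian-based guest (the package and unit names differ slightly between distributions, so verify them for yours):

```shell
# Install the ssh server and make sure it starts on boot
apt-get install -y openssh-server
systemctl enable ssh     # on Fedora/RHEL the unit is "sshd"
```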

 Disable firewall

In general, we recommend that you disable any firewalls inside of your image and use OpenStack security groups to restrict access to instances. The reason is that having a firewall installed on your instance can make it more difficult to troubleshoot networking issues if you cannot connect to your instance.

 Access instance using ssh public key (cloud-init)

The typical way that users access virtual machines running on OpenStack is to ssh using public key authentication. For this to work, your virtual machine image must be configured to download the ssh public key from the OpenStack metadata service or config drive, at boot time.

 Using cloud-init to fetch the public key

The cloud-init package will automatically fetch the public key from the metadata server and place the key in an account. The account varies by distribution. On Ubuntu-based virtual machines, the account is called "ubuntu". On Fedora-based virtual machines, the account is called "ec2-user".

You can change the name of the account used by cloud-init by editing the /etc/cloud/cloud.cfg file and adding a line with a different user. For example, to configure cloud-init to put the key in an account named "admin", edit the config file so it has the line:

user: admin

 Writing a custom script to fetch the public key

If you are unable or unwilling to install cloud-init inside the guest, you can write a custom script to fetch the public key and add it to a user account.

To fetch the ssh public key and add it to the root account, edit the /etc/rc.local file and add the following lines before the line “touch /var/lock/subsys/local”. This code fragment is taken from the rackerjoe oz-image-build CentOS 6 template.

if [ ! -d /root/.ssh ]; then
  mkdir -p /root/.ssh
  chmod 700 /root/.ssh
fi

# Fetch public key using HTTP
ATTEMPTS=30
FAILED=0
while [ ! -f /root/.ssh/authorized_keys ]; do
  curl -f http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key > /tmp/metadata-key 2>/dev/null
  if [ $? -eq 0 ]; then
    cat /tmp/metadata-key >> /root/.ssh/authorized_keys
    chmod 0600 /root/.ssh/authorized_keys
    restorecon /root/.ssh/authorized_keys
    rm -f /tmp/metadata-key
    echo "Successfully retrieved public key from instance metadata"
    echo "*****************"
    echo "AUTHORIZED KEYS"
    echo "*****************"
    cat /root/.ssh/authorized_keys
    echo "*****************"
  else
    FAILED=$((FAILED + 1))
    if [ $FAILED -ge $ATTEMPTS ]; then
      echo "Failed to retrieve public key after $FAILED attempts, giving up"
      break
    fi
    echo "Could not retrieve public key (attempt $FAILED of $ATTEMPTS), retrying in 5 seconds..."
    sleep 5
  fi
done
Note

Some VNC clients replace : (colon) with ; (semicolon) and _ (underscore) with - (hyphen). If editing a file over a VNC session, make sure it's http: not http; and authorized_keys not authorized-keys.

 Process user data and other metadata (cloud-init)

In addition to the ssh public key, an image may need to retrieve additional information from OpenStack, such as user data that the user submitted when requesting the instance. For example, you may wish to set the host name of the instance to the name given to the instance when it is booted. Or, you may wish to configure your image so that it executes user data content as a script on boot.

This information is accessible via the metadata service or the config drive. As the OpenStack metadata service is compatible with version 2009-04-04 of the Amazon EC2 metadata service, consult the Amazon EC2 documentation on Using Instance Metadata for details on how to retrieve user data.
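For example, because the metadata service speaks the 2009-04-04 EC2 interface, an instance can retrieve its user data with a plain HTTP request from inside the guest:

```shell
# 169.254.169.254 is the link-local metadata address inside an instance
curl http://169.254.169.254/2009-04-04/user-data
```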

The easiest way to support this type of functionality is to install the cloud-init package into your image, which is configured by default to treat user data as an executable script, and will set the host name.

 Paravirtualized Xen support in the kernel (Xen hypervisor only)

Prior to Linux kernel version 3.0, the mainline branch of the Linux kernel did not support paravirtualized Xen virtual machine instances (what Xen calls DomU guests). If you are running the Xen hypervisor with paravirtualization, and you want to create an image for an older Linux distribution that has a pre-3.0 kernel, you will need to ensure that the image boots a kernel that has been compiled with Xen support.


