Contents
How can I use an OpenStack cloud?
As an OpenStack cloud end user, you can provision your own resources within the limits set by administrators. The examples in this guide show you how to complete these tasks by using the OpenStack dashboard and command-line clients. The dashboard, also known as horizon, is a Web-based graphical interface. The command-line clients let you run simple commands to create and manage resources in a cloud and automate tasks by using scripts. Each of the core OpenStack projects has its own command-line client.
You can modify these examples for your specific use cases.
In addition to these ways of interacting with a cloud, you can access the OpenStack APIs indirectly through cURL commands or open SDKs, or directly through the APIs. You can automate access or build tools to manage resources and services by using the native OpenStack APIs or the EC2 compatibility API.
To use the OpenStack APIs, it helps to be familiar with HTTP/1.1, RESTful web services, the OpenStack services, and JSON or XML data serialization formats.
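For example, a minimal token request to the Identity Service v2.0 API might look like the following cURL command; the endpoint, tenant name, user name, and password are placeholders for values that your cloud operator provides:
$ curl -s -X POST http://KEYSTONE_HOST:5000/v2.0/tokens \
  -H "Content-Type: application/json" \
  -d '{"auth": {"tenantName": "demo", "passwordCredentials": {"username": "demo", "password": "DEMO_PASS"}}}'
The JSON response includes a token ID, which you pass in the X-Auth-Token header on subsequent API requests.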
OpenStack dashboard
As a cloud end user, you can use the OpenStack dashboard to provision your own resources within the limits set by administrators. You can modify these examples to create other types and sizes of server instances.
Overview
The following requirements must be fulfilled to access the OpenStack dashboard:
The cloud operator has set up an OpenStack cloud.
You have a recent Web browser that supports HTML5. It must have cookies and JavaScript enabled. To use the VNC client for the dashboard, which is based on noVNC, your browser must support HTML5 Canvas and HTML5 WebSockets. For more details and a list of browsers that support noVNC, see https://github.com/kanaka/noVNC/blob/master/README.md and https://github.com/kanaka/noVNC/wiki/Browser-support, respectively.
Learn how to log in to the dashboard and get a short overview of the interface.
Log in to the dashboard
To log in to the dashboard
Ask your cloud operator for the following information:
The hostname or public IP address from which you can access the dashboard.
The dashboard is available on the node that has the nova-dashboard server role.
The username and password with which you can log in to the dashboard.
Open a Web browser that supports HTML5. Make sure that JavaScript and cookies are enabled.
As a URL, enter the host name or IP address that you got from the cloud operator.
On the dashboard log in page, enter your user name and password and click Sign In.
After you log in, the following page appears:
The top-level row shows the user name that you logged in with. You can also access Settings or Sign Out of the Web interface.
If you are logged in as an end user rather than an admin user, the main screen shows only the Project tab.
OpenStack dashboard – Project tab
This tab shows details for the projects, or tenants, of which you are a member.
Select a project from the drop-down list on the left-hand side to access the following categories:
Overview
Shows basic reports on the project.
Instances
Lists instances and volumes created by users of the project.
From here, you can stop, pause, or reboot any instances or connect to them through virtual network computing (VNC).
Volumes
Lists volumes created by users of the project.
From here, you can create or delete volumes.
Images & Snapshots
Lists images and snapshots created by users of the project, plus any images that are publicly available. Includes volume snapshots. From here, you can create and delete images and snapshots, and launch instances from images and snapshots.
Access & Security
On the Security Groups tab, you can list, create, and delete security groups and edit rules for security groups.
On the Keypairs tab, you can list, create, import, and delete keypairs.
On the Floating IPs tab, you can allocate an IP address to a project or release it.
On the API Access tab, you can list the API endpoints.
Manage images
During setup of the OpenStack cloud, the cloud operator sets user permissions to manage images. Image upload and management might be restricted to cloud administrators or cloud operators. Though you can complete most tasks with the OpenStack dashboard, you can manage images only through the glance and nova clients or the Image Service and Compute APIs.
Set up access and security
Before you launch a virtual machine, you can add security group rules to enable users to ping and SSH to the instances. To do so, you either add rules to the default security group or add a security group with rules. For information, see the section called “Add security group rules”.
Keypairs are SSH credentials that are injected into images when they are launched. For this to work, the image must contain the cloud-init package. For information, see the section called “Add keypairs”.
Add security group rules
The following procedure shows you how to add rules to the default security group.
To add rules to the default security group
Log in to the OpenStack dashboard.
If you are a member of multiple projects, select a project from the drop-down list at the top of the Project tab.
Click the Access & Security category.
The dashboard shows the security groups that are available for this project.
Select the default security group and click Edit Rules.
The Security Group Rules page appears:
Add a TCP rule
Click Add Rule.
The Add Rule window appears.
In the IP Protocol list, select TCP.
In the Open list, select Port.
In the Port box, enter 22.
In the Source list, select CIDR.
In the CIDR box, enter 0.0.0.0/0.
Click Add.
Port 22 is now open for requests from any IP address.
If you want to accept requests from a particular range of IP addresses, specify the IP address block in the CIDR box.
Add an ICMP rule
Click Add Rule.
The Add Rule window appears.
In the IP Protocol list, select ICMP.
In the Type box, enter -1.
In the Code box, enter -1.
In the Source list, select CIDR.
In the CIDR box, enter 0.0.0.0/0.
Click Add.
Add keypairs
Create at least one keypair for each project. If you have generated a keypair with an external tool, you can import it into OpenStack. The keypair can be used for multiple instances that belong to a project.
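For example, you can generate a keypair on your local machine with the ssh-keygen tool and later paste the contents of the resulting .pub file into the Import Keypair dialog; the file name is only an example:
$ ssh-keygen -t rsa -b 2048 -f ~/.ssh/mykey
The command creates the private key ~/.ssh/mykey and the public key ~/.ssh/mykey.pub.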
To add a keypair
Log in to the OpenStack dashboard.
If you are a member of multiple projects, select a project from the drop-down list at the top of the Project tab.
Click the Access & Security category.
Click the Keypairs tab. The dashboard shows the keypairs that are available for this project.
To add a keypair
Click Create Keypair.
The Create Keypair window appears.
In the Keypair Name box, enter a name for your keypair.
Click Create Keypair.
Respond to the prompt to download the keypair.
To import a keypair
Click Import Keypair.
The Import Keypair window appears.
In the Keypair Name box, enter the name of your keypair.
In the Public Key box, paste the contents of your public key.
Click Import Keypair.
Save the *.pem file locally and change its permissions so that only you can read and write to the file:
$ chmod 0600 MY_PRIV_KEY.pem
Use the ssh-add command to make the keypair known to SSH:
$ ssh-add MY_PRIV_KEY.pem
The public key of the keypair is registered in the Nova database.
The dashboard lists the keypair in the Access & Securitycategory.
Launch instances
Instances are virtual machines that run inside the cloud. You can launch an instance directly from one of the available OpenStack images or from an image that you have copied to a persistent volume. The OpenStack Image Service provides a pool of images that are accessible to members of different projects.
Launch an instance from an image
When you launch an instance from an image, OpenStack creates a local copy of the image on the respective compute node where the instance is started.
To launch an instance from an image
Log in to the OpenStack dashboard.
If you are a member of multiple projects, select a project from the drop-down list at the top of the Projecttab.
Click the Images & Snapshots category.
The dashboard shows the images that have been uploaded to OpenStack Image Service and are available for this project.
Select an image and click Launch.
In the Launch Image window, specify the following:
Enter an instance name to assign to the virtual machine.
From the Flavor drop-down list, select the size of the virtual machine to launch.
Select a keypair.
In case an image uses a static root password or a static key set (neither is recommended), you do not need to provide a keypair to launch the instance.
In Instance Count, enter the number of virtual machines to launch from this image.
Activate the security groups that you want to assign to the instance.
Security groups are a kind of cloud firewall that define which incoming network traffic should be forwarded to instances. For details, seethe section called “Add security group rules”.
If you have not created any specific security groups, you can only assign the instance to the default security group.
If you want to boot from volume, click the respective entry to expand its options. Set the options as described in the section called “Launch an instance from a volume” (http://docs.openstack.org/user-guide/content/dashboard_launch_instances.html#dashboard_launch_instances_from_volume).
Click Launch Instance. The instance is started on any of the compute nodes in the cloud.
After you have launched an instance, switch to the Instances category to view the instance name, its (private or public) IP address, size, status, task, and power state.
Figure 5. OpenStack dashboard – Instances
If you did not provide a keypair, security groups, or rules so far, by default the instance can only be accessed from inside the cloud through VNC at this point. Even pinging the instance is not possible. To access the instance through a VNC console, see the section called “Get a console to an instance” (http://docs.openstack.org/user-guide/content/instance_console.html).
Launch an instance from a volume
You can launch an instance directly from an image that has been copied to a persistent volume.
In that case, the instance is booted from the volume, which is provided by nova-volume, through iSCSI.
For preparation details, see the section called “Create or delete a volume” (http://docs.openstack.org/user-guide/content/dashboard_manage_volumes.html#create_or_delete_volumes).
To boot an instance from the volume, especially note the following steps:
To be able to select from which volume to boot, launch an instance from an arbitrary image. The image you select does not boot. It is replaced by the image on the volume that you choose in the next steps.
If you want to boot a Xen image from a volume, note the following requirement: the image that you launch must be of the same type, fully virtualized or paravirtualized, as the image on the volume.
Select the volume or volume snapshot to boot from.
Enter a device name. Enter vda for KVM images or xvda for Xen images.
To launch an instance from a volume
You can launch an instance directly from one of the images available through the OpenStack Image Service or from an image that you have copied to a persistent volume. When you launch an instance from a volume, the procedure is basically the same as when launching an instance from an image in OpenStack Image Service, except for some additional steps.
Create a volume as described in the section called “Create or delete a volume” (http://docs.openstack.org/user-guide/content/dashboard_manage_volumes.html#create_or_delete_volumes).
It must be large enough to store an unzipped image.
Create an image.
For details, see Creating images manually in the OpenStack Virtual Machine Image Guide.
Launch an instance.
Attach the volume to the instance as described in the section called “Attach volumes to instances” (http://docs.openstack.org/user-guide/content/dashboard_manage_volumes.html#attach_volumes_to_instances).
Assuming that the attached volume is available as /dev/vdb, use one of the following commands to copy the image to the attached volume:
For a raw image:
$ cat IMAGE > /dev/vdb
Alternatively, use the dd command (see the example after these commands).
For a non-raw image:
$ qemu-img convert -O raw IMAGE /dev/vdb
For a *.tar.bz2 image:
$ tar xfjO IMAGE > /dev/vdb
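For the dd alternative mentioned above, a minimal sketch, assuming a raw image file named IMAGE and the volume attached as /dev/vdb:
$ dd if=IMAGE of=/dev/vdb bs=1M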
Only detached volumes are available for booting. Detach the volume.
To launch an instance from the volume, continue with the section called “Launch an instance from an image” (http://docs.openstack.org/user-guide/content/dashboard_launch_instances.html#dashboard_launch_instances_from_image).
SSH in to your instance
To SSH into your instance, you use the downloaded keypair file.
To SSH into your instance
Copy the IP address for your instance.
Use the SSH command to make a secure connection to the instance. For example:
$ ssh -i MyKey.pem USER@INSTANCE_IP_ADDRESS
A prompt asks, "Are you sure you want to continue connecting (yes/no)?" Type yes and you have successfully connected.
Manage instances
Create instance snapshots
To create instance snapshots
Log in to the OpenStack dashboard.
If you are a member of multiple projects, select a project from the drop-down list at the top of the Project tab.
Click the Instances category.
The dashboard lists the instances that are available for this project.
Select the instance of which to create a snapshot. From the Actions drop-down list, select Create Snapshot.
In the Create Snapshot window, enter a name for the snapshot. Click Create Snapshot. The dashboard shows the instance snapshot in the Images & Snapshots category.
To launch an instance from the snapshot, select the snapshot and click Launch. Proceed with the section called “Launch an instance from an image” (http://docs.openstack.org/user-guide/content/dashboard_launch_instances.html#dashboard_launch_instances_from_image).
Control the state of an instance
To control the state of an instance
Log in to the OpenStack dashboard.
If you are a member of multiple projects, select a project from the drop-down list at the top of the Project tab.
Click the Instances category.
The dashboard lists the instances that are available for this project.
Select the instance for which you want to change the state.
In the More drop-down list in the Actions column, select the state.
Depending on the current state of the instance, you can choose to pause, un-pause, suspend, resume, soft or hard reboot, or terminate an instance.
Track usage
Use the dashboard's Overview category to track usage of instances for each project.
You can track costs per month by showing metrics like number of VCPUs, disks, RAM, and uptime of all your instances.
To track usage
If you are a member of multiple projects, select a project from the drop-down list at the top of the Project tab.
Select a month and click Submit to query the instance usage for that month.
Click Download CSV Summary to download a CSV summary.
Manage volumes
Volumes are block storage devices that you can attach to instances. They allow for persistent storage as they can be attached to a running instance, or detached and attached to another instance at any time.
In contrast to the instance's root disk, the data of volumes is not destroyed when the instance is deleted.
Create or delete a volume
To create or delete a volume
Log in to the OpenStack dashboard.
If you are a member of multiple projects, select a project from the drop-down list at the top of the tab.
Click the Volumes category.
To create a volume
Click Create Volume.
In the window that opens, enter a name to assign to the volume, a description (optional), and the size in GB.
Confirm your changes.
The dashboard shows the volume in the Volumes category.
To delete one or multiple volumes
Activate the checkboxes in front of the volumes that you want to delete.
Click Delete Volumes and confirm your choice in the pop-up that appears.
A message indicates whether the action was successful.
After you create one or more volumes, you can attach them to instances.
You can attach a volume to one instance at a time.
View the status of a volume in the Instances & Volumes category of the dashboard: the volume is either available or In-Use.
Attach volumes to instances
To attach volumes to instances
Log in to OpenStack dashboard.
If you are a member of multiple projects, select a project from the drop-down list at the top of the tab.
Click the Volumes category.
Select the volume to add to an instance and click Edit Attachments.
In the Manage Volume Attachmentswindow, select an instance.
Enter a device name under which the volume should be accessible on the virtual machine.
Click Attach Volume to confirm your changes. The dashboard shows the instance to which the volume has been attached and the volume's device name.
Now you can log in to the instance, mount the disk, format it, and use it.
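For example, after you log in to the instance, commands similar to the following format and mount a volume that was attached as /dev/vdb; the device name and mount point are assumptions for illustration:
$ sudo mkfs.ext4 /dev/vdb
$ sudo mkdir /mnt/myvolume
$ sudo mount /dev/vdb /mnt/myvolume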
To detach a volume from an instance
Select the volume and click Edit Attachments.
Click Detach Volume and confirm your changes.
A message indicates whether the action was successful.
OpenStack command-line clients
Overview
You can use the OpenStack command-line clients to run simple commands that make API calls and automate tasks by using scripts. Internally, each client command runs cURL commands that embed API requests. The OpenStack APIs are RESTful APIs that use the HTTP protocol, including methods, URIs, media types, and response codes.
These open-source Python clients run on Linux or Mac OS X systems and are easy to learn and use. Each OpenStack service has its own command-line client. On some client commands, you can specify a debug parameter to show the underlying API request for the command. This is a good way to become familiar with the OpenStack API calls.
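For example, a command like the following prints the API requests and responses that the client sends and receives; the list operation is only an illustration:
$ nova --debug list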
The following command-line clients are available for the respective services' APIs:
cinder (python-cinderclient)
Client for the Block Storage service API. Use to create and manage volumes.
glance (python-glanceclient)
Client for the Image Service API. Use to create and manage images.
keystone (python-keystoneclient)
Client for the Identity Service API. Use to create and manage users, tenants, roles, endpoints, and credentials.
nova (python-novaclient)
Client for the Compute API and its extensions. Use to create and manage images, instances, and flavors.
neutron (python-neutronclient)
Client for the Networking API. Use to configure networks for guest servers. This client was previously known as quantum.
swift (python-swiftclient)
Client for the Object Storage API. Use to gather statistics, list items, update metadata, upload, download and delete files stored by the object storage service. Provides access to a swift installation for ad hoc processing.
heat (python-heatclient)
Client for the Orchestration API. Use to launch stacks from templates, view details of running stacks including events and resources, and update and delete stacks.
Install the OpenStack command-line clients
To install the clients, install the prerequisite software and the Python package for each OpenStack client.
Install the clients
Use pip to install the OpenStack clients on a Mac OS X or Linux system. It is easy and ensures that you get the latest version of the client from the Python Package Index (http://pypi.python.org/pypi). Also, pip lets you update or remove a package. After you install the clients, you must source an openrc file to set required environment variables before you can request OpenStack services through the clients or the APIs.
To install the clients
You must install each client separately.
Run the following command to install or update a client package:
# pip install [--upgrade] python-<project>client
Where <project> is the project name and has one of the following values:
nova. Compute API and extensions.
neutron. Networking API.
keystone. Identity Service API.
glance. Image Service API.
swift. Object Storage API.
cinder. Block Storage service API.
heat. Orchestration API.
For example, to install the nova client, run the following command:
# pip install python-novaclient
To update the nova client, run the following command:
# pip install --upgrade python-novaclient
To remove the nova client, run the following command:
# pip uninstall python-novaclient
Before you can issue client commands, you must download and source the openrc file to set environment variables. Proceed to the section called “OpenStack RC file”.
Get the version for a client
After you install an OpenStack client, you can search for its version number, as follows:
$ pip freeze | grep python-
python-glanceclient==0.4.0
python-keystoneclient==0.1.2
-e git+https://github.com/openstack/python-novaclient.git@077cc0bf22e378c4c4b970f2331a695e440a939f#egg=python_novaclient-dev
python-neutronclient==0.1.1
python-swiftclient==1.1.1
You can also use the yolk -l command to see which version of the client is installed:
$ yolk -l | grep python-novaclient
python-novaclient - 2.6.10.27 - active development (/Users/your.name/src/cloud-servers/src/src/python-novaclient)
python-novaclient - 2012.1 - non-active
OpenStack RC file
To set the required environment variables for the OpenStack command-line clients, you must download and source an environment file, openrc.sh. It is project-specific and contains the credentials used by OpenStack Compute, Image, and Identity services.
When you source the file and enter the password, environment variables are set for that shell. They allow the commands to communicate to the OpenStack services that run in the cloud.
You can download the file from the OpenStack dashboard as an administrative user or any other user.
To download the OpenStack RC file
Log in to the OpenStack dashboard.
On the Projecttab, select the project for which you want to download the OpenStack RC file.
Click Access & Security. Then, click Download OpenStack RC File and save the file.
Copy the openrc.sh file to the machine from where you want to run OpenStack commands.
For example, copy the file to the machine from where you want to upload an image with a glance client command.
On any shell from where you want to run OpenStack commands, source the openrc.sh file for the respective project.
In this example, we source the demo-openrc.sh file for the demo project:
$ source demo-openrc.sh
When you are prompted for an OpenStack password, enter the OpenStack password for the user who downloaded the openrc.sh file.
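As a reference point, a downloaded openrc.sh file typically looks similar to the following sketch; the user name, project name, and authentication URL are examples only:
export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_AUTH_URL=http://IDENTITY_HOST:5000/v2.0
echo "Please enter your OpenStack Password: "
read -s OS_PASSWORD_INPUT
export OS_PASSWORD=$OS_PASSWORD_INPUT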
When you run OpenStack client commands, you can override some environment variable settings by using the options that are listed at the end of the nova help output. For example, you can override the OS_PASSWORD setting in the openrc.sh file by specifying a password on a nova command, as follows:
$ nova --password <password> image-list
Where password is your password.
Manage images
During setup of the OpenStack cloud, the cloud operator sets user permissions to manage images.
Image upload and management might be restricted to only cloud administrators or cloud operators.
After you upload an image, it is considered golden and you cannot change it.
You can upload images through the glance client or the Image Service API. You can also use the nova client to list images, set and delete image metadata, delete images, and take a snapshot of a running instance to create an image.
Manage images with the glance client
To list or get details for images
To list the available images:
$ glance image-list
You can use grep to filter the list, as follows:
$ glance image-list | grep 'cirros'
To get image details, by name or ID:
$ glance image-show myCirrosImage
To add an image
The following example uploads a CentOS 6.3 image in qcow2 format and configures it for public access:
$ glance image-create --name centos63-image --disk-format=qcow2 --container-format=bare --is-public=True ./centos63.qcow2
To create an image
Write any buffered data to disk.
For more information, see the Taking Snapshots section in the OpenStack Operations Guide.
To create the image, list instances to get the server ID:
$ nova list
In this example, the server is named myCirrosServer. Use this server to create a snapshot, as follows:
$ nova image-create myCirrosServer myCirrosImage
The command creates a qemu snapshot and automatically uploads the image to your repository. Only the tenant that creates the image has access to it.
Get details for your image to check its status:
$ nova image-show IMAGE
The image status changes from SAVING to ACTIVE.
To launch an instance from your image
To launch an instance from your image, include the image ID and flavor ID, as follows:
$ nova boot newServer --image 7e5142af-1253-4634-bcc6-89482c5f2e8a --flavor 3
Troubleshoot image creation
You cannot create a snapshot from an instance that has an attached volume. Detach the volume, create the image, and re-mount the volume.
Make sure the version of qemu you are using is version 0.14 or greater. Older versions of qemu result in an "unknown option -s" error message in the nova-compute.log.
Examine the /var/log/nova-api.log and /var/log/nova-compute.log log files for error messages.
Set up access and security for instances
When you launch a virtual machine, you can inject a key pair, which provides SSH access to your instance. For this to work, the image must contain the cloud-init package. Create at least one key pair for each project. If you generate a keypair with an external tool, you can import it into OpenStack. You can use the key pair for multiple instances that belong to that project. In case an image uses a static root password or a static key set (neither is recommended), you do not need to provide a key pair when you launch the instance.
A security group is a named collection of network access rules that you use to limit the types of traffic that have access to instances. When you launch an instance, you can assign one or more security groups to it. If you do not create security groups, new instances are automatically assigned to the default security group, unless you explicitly specify a different security group. The associated rules in each security group control the traffic to instances in the group. Any incoming traffic that is not matched by a rule is denied access by default. You can add rules to or remove rules from a security group. You can modify rules for the default and any other security group.
You must modify the rules for the default security group because users cannot access instances that use the default group from any IP address outside the cloud.
You can modify the rules in a security group to allow access to instances through different ports and protocols. For example, you can modify rules to allow access to instances through SSH, to ping them, or to allow UDP traffic – for example, for a DNS server running on an instance. You specify the following parameters for rules:
Source of traffic. Enable traffic to instances from either IP addresses inside the cloud from other group members or from all IP addresses.
Protocol. Choose TCP for SSH, ICMP for pings, or UDP.
Destination port on virtual machine. Defines a port range. To open a single port only, enter the same value twice. ICMP does not support ports: Enter values to define the codes and types of ICMP traffic to be allowed.
Rules are automatically enforced as soon as you create or modify them.
You can also assign a floating IP address to a running instance to make it accessible from outside the cloud. You assign a floating IP address to an instance and attach a block storage device, or volume, for persistent storage.
Add or import keypairs
To add a key
You can generate a keypair or upload an existing public key.
To generate a keypair, run the following command:
$ nova keypair-add KEY_NAME > MY_KEY.pem
The command generates a keypair named KEY_NAME, writes the private key to the MY_KEY.pem file, and registers the public key at the Nova database.
To set the permissions of the MY_KEY.pem file, run the following command:
$ chmod 600 MY_KEY.pem
The command changes the permissions of the MY_KEY.pem file so that only you can read and write to it.
To import a key
If you have already generated a keypair with the public key located at ~/.ssh/id_rsa.pub, run the following command to upload the public key:
$ nova keypair-add --pub_key ~/.ssh/id_rsa.pub KEY_NAME
The command registers the public key at the Nova database and names the keypair KEY_NAME.
List keypairs to make sure that the uploaded keypair appears in the list:
$ nova keypair-list
Configure security groups and rules
To configure security groups
To list all security groups
To list security groups for the current project, including descriptions, enter the following command:
$ nova secgroup-list
To create a security group
To create a security group with a specified name and description, enter the following command:
$ nova secgroup-create SEC_GROUP_NAME GROUP_DESCRIPTION
To delete a security group
To delete a specified group, enter the following command:
$ nova secgroup-delete SEC_GROUP_NAME
To configure security group rules
Modify security group rules with the nova secgroup-*-rule commands.
On a shell, source the OpenStack RC file. For details, see the section called “OpenStack RC file” (http://docs.openstack.org/user-guide/content/cli_openrc.html).
To list the rules for a security group
$ nova secgroup-list-rules SEC_GROUP_NAME
To allow SSH access to the instances
Choose one of the following sub-steps:
Add rule for all IPs
Allow access from all IP addresses, specified as the IP subnet 0.0.0.0/0 in CIDR notation:
$ nova secgroup-add-rule SEC_GROUP_NAME tcp 22 22 0.0.0.0/0
Add rule for security groups
Alternatively, you can allow only IP addresses from other security groups (source groups) to access the specified port:
$ nova secgroup-add-group-rule --ip_proto tcp --from_port 22 \
  --to_port 22 SEC_GROUP_NAME SOURCE_GROUP_NAME
To allow pinging the instances
Choose one of the following sub-steps:
To allow pinging from IPs
Specify all IP addresses as the IP subnet 0.0.0.0/0 in CIDR notation. This command allows access for all codes and all types of ICMP traffic:
$ nova secgroup-add-rule SEC_GROUP_NAME icmp -1 -1 0.0.0.0/0
To allow pinging from other security groups
To allow only members of other security groups (source groups) to ping instances:
$ nova secgroup-add-group-rule --ip_proto icmp --from_port -1 \
  --to_port -1 SEC_GROUP_NAME SOURCE_GROUP_NAME
To allow access through UDP port
To allow access through a UDP port, such as allowing access to a DNS server that runs on a VM, complete one of the following sub-steps:
To allow UDP access from IPs
Specify all IP addresses as IP subnet in CIDR notation: 0.0.0.0/0.
$ nova secgroup-add-rule SEC_GROUP_NAME udp 53 53 0.0.0.0/0
To allow UDP access
To allow only IP addresses from other security groups (source groups) to access the specified port:
$ nova secgroup-add-group-rule --ip_proto udp --from_port 53 \
  --to_port 53 SEC_GROUP_NAME SOURCE_GROUP_NAME
To delete a security group rule, specify the same arguments that you used to create the rule.
To delete the security rule that you created in Step 3.a:
$ nova secgroup-delete-rule SEC_GROUP_NAME tcp 22 22 0.0.0.0/0
To delete the security rule that you created in Step 3.b:
$ nova secgroup-delete-group-rule --ip_proto tcp --from_port 22 \
  --to_port 22 SEC_GROUP_NAME SOURCE_GROUP_NAME
Launch instances
Instances are virtual machines that run inside the cloud.
Before you can launch an instance, you must gather parameters such as the image and flavor from which you want to launch your instance.
You can launch an instance directly from one of the available OpenStack images or from an image that you have copied to a persistent volume. The OpenStack Image Service provides a pool of images that are accessible to members of different projects.
Gather parameters to launch an instance
To launch an instance, you must specify the following parameters:
The instance source, which is an image or snapshot. Alternatively, you can boot from a volume, which is block storage, to which you've copied an image or snapshot.
The image or snapshot, which represents the operating system.
A name for your instance.
The flavor for your instance, which defines the compute, memory, and storage capacity of nova computing instances. A flavor is an available hardware configuration for a server. It defines the "size" of a virtual server that can be launched. For more details and a list of default flavors available, see the "Managing Flavors" section of the User Guide for Administrators.
User Data is a special key in the metadata service that holds a file that cloud-aware applications within the guest instance can access. For example, the cloud-init system is an open-source package from Ubuntu that handles early initialization of a cloud instance and makes use of this user data (a sample user-data file follows this list).
Access and security credentials, which include one or both of the following credentials:
A key pair for your instance, which are SSH credentials that are injected into images when they are launched. For this to work, the image must contain the cloud-init package. Create at least one key pair for each project. If you have already generated a key pair with an external tool, you can import it into OpenStack. You can use the key pair for multiple instances that belong to that project. For details, see the section called “Add or import keypairs”.
A security group, which defines which incoming network traffic is forwarded to instances. Security groups hold a set of firewall policies, known as security group rules. For details, see the section called “Configure security groups and rules”.
If needed, you can assign a floating (public) IP address to a running instance and attach a block storage device, or volume, for persistent storage. For details, see the sections called “Manage IP addresses” and “Manage volumes”.
After you gather the parameters you need to launch an instance, you can launch it from an image or a volume.
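As an illustration of the user data mentioned above, a user-data file that cloud-init runs as a script at first boot might look like the following; the file name mydata.file and its contents are hypothetical:
#!/bin/sh
# Hypothetical first-boot script passed as user data
echo "Instance initialized on $(date)" >> /var/log/firstboot.log
You pass the file at launch time with the --user-data option of the nova boot command, as shown later in this section.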
To gather the parameters to launch an instance
On a shell, source the OpenStack RC file.
List the available flavors:
$ nova flavor-list
Note the ID of the flavor that you want to use for your instance.
List the available images:
$ nova image-list
You can also filter the image list by using grep to find a specific image, like this:
$ nova image-list | grep 'kernel'
Note the ID of the image that you want to boot your instance from.
List the available security groups:
$ nova secgroup-list --all-tenants
If you have not created any security groups, you can assign the instance to only the default security group.
You can also list rules for a specified security group:
$ nova secgroup-list-rules default
In this example, the default security group has been modified to allow HTTP traffic on the instance by permitting TCP traffic on Port 80.
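For reference, a rule like the one in this example could have been added with a command of the following form; the group name and port values are illustrative:
$ nova secgroup-add-rule default tcp 80 80 0.0.0.0/0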
List the available keypairs.
$ nova keypair-list
Note the name of the keypair that you use for SSH access.
Launch an instance from an image
Use this procedure to launch an instance from an image.
To launch an instance from an image
Now that you have all parameters required to launch an instance, run the following command and specify the server name, flavor ID, and image ID. Optionally, you can provide a key name for access control and a security group for security. You can also include metadata key and value pairs. For example, you can add a description for your server by providing the --meta description="My Server" parameter.
You can pass user data in a file on your local system and pass it at instance launch by using the flag --user-data <user-data-file>.
$ nova boot --flavor FLAVOR_ID --image IMAGE_ID --key_name KEY_NAME --user-data mydata.file \
  --security_group SEC_GROUP NAME_FOR_INSTANCE --meta KEY=VALUE --meta KEY=VALUE
The command returns a list of server properties, depending on which parameters you provide.
A status of BUILD indicates that the instance has started, but is not yet online.
A status of ACTIVE indicates that your server is active.
Copy the server ID value from the id field in the output. You use this ID to get details for or delete your server.
Copy the administrative password value from the adminPass field. You use this value to log into your server.
Check if the instance is online:
$ nova list
This command lists all instances of the project you belong to, including their ID, their name, their status, and their private (and if assigned, their public) IP addresses.
If the status for the instance is ACTIVE, the instance is online.
To view the available options for the nova listcommand, run the following command:
$ nova help list
If you did not provide a keypair, security groups, or rules, you can only access the instance from inside the cloud through VNC. Even pinging the instance is not possible.
Launch an instance from a volume
After you create a bootable volume, you launch an instance from the volume.
To launch an instance from a volume
To create a bootable volume
To create a volume from an image, run the following command:
# cinder create --image-id 397e713c-b95b-4186-ad46-6126863ea0a9 --display-name my-bootable-vol 8
Optionally, to configure your volume, see the Configuring Image Service and Storage for Compute chapter in the OpenStack Configuration Reference.
To list volumes
Enter the following command:
$ nova volume-list
Copy the value in the ID field for your volume.
To launch an instance
Enter the nova boot command with the --block_device_mapping parameter, as follows:
$ nova boot --flavor <flavor> --block_device_mapping <dev_name>=<id>:<type>:<size>:<delete_on_terminate> <name>
The command arguments are:
--flavor flavor
The flavor ID.
--block_device_mapping dev-name=id:type:size:delete-on-terminate
dev-name. A device name where the volume is attached in the system at /dev/dev_name. This value is typically vda.
id. The ID of the volume to boot from, as shown in the output of nova volume-list.
type. Either snap or any other value, including a blank string. snap means that the volume was created from a snapshot.
size. The size of the volume, in GBs. It is safe to leave this blank and have the Compute service infer the size.
delete-on-terminate. A boolean that indicates whether the volume should be deleted when the instance is terminated. You can specify
True or 1
False or 0
name
The name for the server.
For example, you might enter the following command to boot from a volume with ID bd7cf584-45de-44e3-bf7f-f7b50bf235e3. The volume is not deleted when the instance is terminated:
$ nova boot --flavor 2 --image 397e713c-b95b-4186-ad46-6126863ea0a9 --block_device_mapping vda=bd7cf584-45de-44e3-bf7f-f7b50bf235e3:::0 myInstanceFromVolume
Now when you list volumes, you can see that the volume is attached to a server:
$ nova volume-list
Additionally, when you list servers, you see the server that you booted from a volume:
$ nova list
Manage instances and hosts
Instances are virtual machines that run inside the cloud.
Manage IP addresses
Each instance can have a private, or fixed, IP address and a public, or floating, one.
Private IP addresses are used for communication between instances, and public ones are used for communication with the outside world.
When you launch an instance, it is automatically assigned a private IP address that stays the same until you explicitly terminate the instance. Rebooting an instance has no effect on the private IP address.
A pool of floating IPs, configured by the cloud operator, is available in OpenStack Compute.
You can allocate a certain number of these to a project: The maximum number of floating IP addresses per project is defined by the quota.
You can add a floating IP address from this set to an instance of the project. Floating IP addresses can be dynamically disassociated and associated with other instances of the same project at any time.
Before you can assign a floating IP address to an instance, you first must allocate floating IPs to a project. After floating IP addresses have been allocated to the current project, you can assign them to running instances.
One floating IP address can be assigned to only one instance at a time. Floating IP addresses can be managed with the nova *floating-ip-* commands, provided by the python-novaclient package.
To list pools with floating IP addresses
To list all pools that provide floating IP addresses:
$ nova floating-ip-pool-list
To allocate a floating IP address to the current project
The output of the following command shows the freshly allocated IP address:
$ nova floating-ip-create
If more than one pool of IP addresses is available, you can also specify the pool from which to allocate the IP address:
$ nova floating-ip-create POOL_NAME
To list floating IP addresses allocated to the current project
If an IP address is already associated with an instance, the output also shows the IP address of the instance, the fixed IP address for the instance, and the name of the pool that provides the floating IP address.
$ nova floating-ip-list
To release a floating IP address from the current project
The IP address is returned to the pool of IP addresses that are available for all projects. If an IP address is currently assigned to a running instance, it is automatically disassociated from the instance.
$ nova floating-ip-delete FLOATING_IP
To assign a floating IP address to an instance
To associate an IP address with an instance, one or multiple floating IP addresses must be allocated to the current project. Check this with:
$ nova floating-ip-list
In addition, you must know the instance's name (or ID). To look up the instances that belong to the current project, use the nova list command.
$ nova add-floating-ip INSTANCE_NAME_OR_ID FLOATING_IP
After you assign the IP address with nova add-floating-ip and configure security group rules for the instance, the instance is publicly available at the floating IP address.
To remove a floating IP address from an instance
To remove a floating IP address from an instance, you must specify the same arguments that you used to assign the IP.
$ nova remove-floating-ip INSTANCE_NAME_OR_ID FLOATING_IP
Change the size of your server
You change the size of a server by changing its flavor.
To change the size of your server
List the available flavors:
$ nova flavor-list
Show information about your server, including its size:
$ nova show myCirrosServer
The size of the server is m1.small (2).
To resize the server, pass the server ID and the desired flavor to the nova resize command. Include the --poll parameter to report the resize progress.
$ nova resize myCirrosServer 4 --poll
Instance resizing... 100% complete
Finished
Show the status for your server:
$ nova list
When the resize completes, the status becomes VERIFY_RESIZE. To confirm the resize:
$ nova resize-confirm 6beefcf7-9de6-48b3-9ba9-e11b343189b3
The server status becomes ACTIVE.
If the resize fails or does not work as expected, you can revert the resize:
$ nova resize-revert 6beefcf7-9de6-48b3-9ba9-e11b343189b3
The server status becomes ACTIVE.
Stop and start an instance
Use one of the following methods to stop and start an instance.
Pause and un-pause an instance
To pause and un-pause a server
To pause a server, run the following command:
$ nova pause SERVER
This command stores the state of the VM in RAM. A paused instance continues to run in a frozen state.
To un-pause the server, run the following command:
$ nova unpause SERVER
Suspend and resume an instance
To suspend and resume a server
Administrative users might want to suspend an infrequently used instance or to perform system maintenance.
When you suspend an instance, its VM state is stored on disk, all memory is written to disk, and the virtual machine is stopped. Suspending an instance is similar to placing a device in hibernation; memory and vCPUs become available.
To initiate a hypervisor-level suspend operation, run the following command:
$ nova suspend SERVER
To resume a suspended server:
$ nova resume SERVER
Reboot an instance
You can perform a soft or hard reboot of a running instance. A soft reboot attempts a graceful shutdown and restart of the instance. A hard reboot power cycles the instance.
To reboot a server
By default, when you reboot a server, it is a soft reboot.
$ nova reboot SERVER
To perform a hard reboot, pass the --hard parameter, as follows:
$ nova reboot --hard SERVER
Evacuate instances
If a cloud compute node fails due to a hardware malfunction or another reason, you can evacuate instances to make them available again.
You can choose evacuation parameters for your use case.
To preserve user data on server disk, you must configure shared storage on the target host. Also, you must validate that the current VM host is down. Otherwise the evacuation fails with an error.
To evacuate your server
To find a different host for the evacuated instance, run the following command to list hosts:
$ nova host-list
You can pass the instance password to the command by using the --password <pwd> option. If you do not specify a password, one is generated and printed after the command finishes successfully. The following command evacuates a server without shared storage:
$ nova evacuate evacuated_server_name host_b
The command evacuates an instance from a down host to a specified host. The instance is booted from a new disk, but preserves its configuration including its ID, name, uid, IP address, and so on. The command returns a password:
To preserve the user disk data on the evacuated server, deploy OpenStack Compute with shared filesystem.
$ nova evacuate evacuated_server_name host_b --on-shared-storage
Delete an instance
When you no longer need an instance, you can delete it.
To delete an instance
List all instances:
$ nova list
Use the following command to delete the newServer instance, which is in ERROR state:
$ nova delete newServer
The command does not notify you that your server was deleted.
Instead, run the nova list command:
$ nova list
The deleted instance does not appear in the list.
Get a console to an instance
To get a console to an instance
To get a VNC console to an instance, run the following command:
$ nova get-vnc-console myCirrosServer xvpvnc
The command returns a URL from which you can access your instance:
Manage bare metal nodes
If you use the bare metal driver, you must create a bare metal node and add a network interface to it. You then launch an instance from a bare metal image. You can list and delete bare metal nodes. When you delete a node, any associated network interfaces are removed. You can list and remove network interfaces that are associated with a bare metal node.
Commands
baremetal-interface-add
Adds a network interface to a bare metal node.
baremetal-interface-list
Lists network interfaces associated with a bare metal node.
baremetal-interface-remove
Removes a network interface from a bare metal node.
baremetal-node-create
Creates a bare metal node.
baremetal-node-delete
Removes a bare metal node and any associated interfaces.
baremetal-node-list
Lists available bare metal nodes.
baremetal-node-show
Shows information about a bare metal node.
To manage bare metal nodes
Create a bare metal node.
$ nova baremetal-node-create --pm_address=1.2.3.4 --pm_user=ipmi --pm_password=ipmi $(hostname -f) 1 512 10 aa:bb:cc:dd:ee:ff
Add network interface information to the node:
$ nova baremetal-interface-add 1 aa:bb:cc:dd:ee:ff
Launch an instance from a bare metal image:
$ nova boot --image my-baremetal-image --flavor my-baremetal-flavor test
... wait for the instance to become active ...
You can list bare metal nodes and interfaces. When a node is in use, its status includes the UUID of the instance that runs on it:
$ nova baremetal-node-list
Show details about a bare metal node:
$ nova baremetal-node-show 1
Show usage statistics for hosts and instances
You can show basic statistics on resource usage for hosts and instances.
To show host usage statistics
List the hosts and the nova-related services that run on them:
$ nova host-list
Get a summary of resource usage of all of the instances running on the host.
$ nova host-describe devstack-grizzly
The cpu column shows the sum of the virtual CPUs for instances running on the host.
The memory_mb column shows the sum of the memory (in MB) allocated to the instances that run on the hosts.
The disk_gb column shows the sum of the root and ephemeral disk sizes (in GB) of the instances that run on the hosts.
To show instance usage statistics
Get CPU, memory, I/O, and network statistics for an instance.
First, list instances:
$ nova list
Then, get diagnostic statistics:
$ nova diagnostics myCirrosServer
Get summary statistics for each tenant:
$ nova usage-list
Usage from 2013-06-25 to 2013-07-24:
Create and manage networks
Before you run commands, set the following environment variables:
export OS_USERNAME=admin
export OS_PASSWORD=password
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://localhost:5000/v2.0
To create and manage networks
List the extensions of the system:
$ neutron ext-list -c alias -c name
Create a network:
$ neutron net-create net1
Created a new network:
Create a network with specified provider network type:
$ neutron net-create net2 --provider:network-type local
Created a new network:
As shown previously, the unknown option --provider:network-type is used to create a local provider network.
Create a subnet:
$ neutron subnet-create net1 192.168.2.0/24 --name subnet1
Created a new subnet:
In the previous command, net1 is the network name, 192.168.2.0/24 is the subnet's CIDR. They are positional arguments. --name subnet1 is an unknown option, which specifies the subnet's name.
Create a port with specified IP address:
$ neutron port-create net1 --fixed-ip ip_address=192.168.2.40
Created a new port:
In the previous command, net1 is the network name, which is a positional argument. --fixed-ip ip_address=192.168.2.40 is an option that specifies the port's desired fixed IP address.
Create a port without specified IP address:
$ neutron port-create net1
Created a new port:
The system allocates one IP address if you do not specify an IP address on the command line.
Query ports with specified fixed IP addresses:
$ neutron port-list --fixed-ips ip_address=192.168.2.2 ip_address=192.168.2.40
--fixed-ips ip_address=192.168.2.2 ip_address=192.168.2.40 is one unknown option.
How do you find unknown options? The unknown options can be easily found by watching the output of the create_xxx or show_xxx command. For example, in the port creation output, you can see the fixed_ips field, which can be used as an unknown option.
Create and manage stacks
To create a stack from an example template file
To create a stack from an example template file, run the following command:
$ heat stack-create mystack --template-file=/path/to/heat/templates/WordPress_Single_Instance.template --parameters="InstanceType=m1.large;DBUsername=wp;DBPassword=verybadpassword;KeyName=heat_key;LinuxDistribution=F17"
The --parameters values that you specify depend on which parameters are defined in the template. If the template file is hosted on a website, you can specify the URL with the --template-url parameter instead of the --template-file parameter (see the example after this procedure).
The command returns the following output:
You can also use the stack-create command to validate a template file without creating a stack from it.
To do so, run the following command:
$ heat stack-create mystack --template-file=/path/to/heat/templates/WordPress_Single_Instance.template
If validation fails, the response returns an error message.
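As a sketch of the --template-url variant mentioned above, with a placeholder URL:
$ heat stack-create mystack --template-url=http://HOST/templates/WordPress_Single_Instance.template --parameters="InstanceType=m1.large;DBUsername=wp;DBPassword=verybadpassword;KeyName=heat_key;LinuxDistribution=F17"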
To list stacks
To see which stacks are visible to the current user, run the following command:
$ heat stack-list
To view stack details
To explore the state and history of a particular stack, you can run a number of commands.
To show the details of a stack, run the following command:
$ heat stack-show mystack
A stack consists of a collection of resources. To list the resources, including their status, in a stack, run the following command:
$ heat resource-list mystack
To show the details for the specified resource in a stack, run the following command:
$ heat resource-show mystack WikiDatabase
Some resources have associated metadata which can change throughout the life-cycle of a resource:
$ heat resource-metadata mystack WikiDatabase
A series of events is generated during the life cycle of a stack. To display those events, run the following command:
$ heat event-list mystack
To show the details for a particular event, run the following command:
$ heat event-show WikiDatabase 1
To update a stack
To update an existing stack from a modified template file, run a command like the following command:
$ heat stack-update mystack --template-file=/path/to/heat/templates/WordPress_Single_Instance_v2.template --parameters="InstanceType=m1.large;DBUsername=wp;DBPassword=verybadpassword;KeyName=heat_key;LinuxDistribution=F17"
Some resources are updated in-place, while others are replaced with new resources.
The Identity service performs these functions:
User management. Tracks users and their permissions.
Service catalog. Provides a catalog of available services with their API endpoints.
To understand the Identity Service, you must understand these concepts:
- User
Digital representation of a person, system, or service that uses OpenStack cloud services. The Identity Service validates that incoming requests are made by the user who claims to be making the call. Users have a login and can be assigned tokens to access resources. Users can be directly assigned to a particular tenant and behave as if they are contained in that tenant.
- Credentials
Data that is known only by a user that proves who they are. In the Identity Service, examples are:
Username and password
Username and API key
An authentication token provided by the Identity Service
- Authentication
The act of confirming the identity of a user. The Identity Service confirms an incoming request by validating a set of credentials supplied by the user. These credentials are initially a username and password or a username and API key. In response to these credentials, the Identity Service issues the user an authentication token, which the user provides in subsequent requests.
- Token
An arbitrary bit of text that is used to access resources. Each token has a scope that describes which resources are accessible with it. A token may be revoked at any time and is valid for a finite duration.
While the Identity Service supports token-based authentication in this release, the intention is for it to support additional protocols in the future. The intent is for it to be an integration service foremost, and not aspire to be a full-fledged identity store and management solution.
- Tenant
A container used to group or isolate resources and/or identity objects. Depending on the service operator, a tenant may map to a customer, account, organization, or project.
- Service
An OpenStack service, such as Compute (Nova), Object Storage (Swift), or Image Service (Glance). Provides one or more endpoints through which users can access resources and perform operations.
- Endpoint
A network-accessible address, usually described by a URL, where you access a service. If you are using an extension for templates, you can create an endpoint template that represents the templates of all the consumable services that are available across the regions.
- Role
A personality that a user assumes that enables them to perform a specific set of operations. A role includes a set of rights and privileges. A user assuming that role inherits those rights and privileges.
In the Identity Service, a token that is issued to a user includes the list of roles that user can assume. Services that are being called by that user determine how they interpret the set of roles a user has and which operations or resources each role grants access to.
- User management
The main components of Identity user management are:
Users
Tenants
Roles
A user represents a human user, and has associated information such as username, password and email. This example creates a user named "alice":
$ keystone user-create --name=alice --pass=mypassword123 [email protected]
A tenant can be a project, group, or organization. Whenever you make requests to OpenStack services, you must specify a tenant. For example, if you query the Compute service for a list of running instances, you get a list of all running instances for the specified tenant. This example creates a tenant named "acme":
$ keystone tenant-create --name=acme
A role captures what operations a user is permitted to perform in a given tenant. This example creates a role named "compute-user":
$ keystone role-create --name=compute-user
The Identity service associates a user with a tenant and a role. To continue with our previous examples, we may wish to assign the "alice" user the "compute-user" role in the "acme" tenant:
$ keystone user-list
$ keystone user-role-add --user=892585 --role=9a764e --tenant-id=6b8fd2
A user can be assigned different roles in different tenants. For example, Alice may also have the "admin" role in the "Cyberdyne" tenant. A user can also be assigned multiple roles in the same tenant.
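For example, assuming an "admin" role and a "Cyberdyne" tenant already exist, you could grant the additional role with a second user-role-add call; the role and tenant IDs below are placeholders:
$ keystone user-role-add --user=892585 --role=ADMIN_ROLE_ID --tenant-id=CYBERDYNE_TENANT_ID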
The /etc/[SERVICE_CODENAME]/policy.json file controls what users are allowed to do for a given service. For example, /etc/nova/policy.json specifies the access policy for the Compute service, /etc/glance/policy.json specifies the access policy for the Image Service, and /etc/keystone/policy.json specifies the access policy for the Identity Service.
The default policy.json files in the Compute, Identity, and Image Service recognize only the admin role: all operations that do not require the admin role are accessible by any user that has any role in a tenant.
If you wish to restrict users from performing operations in, say, the Compute service, you need to create a role in the Identity Service and then modify /etc/nova/policy.json so that this role is required for Compute operations. For example, the following line in /etc/nova/policy.json specifies that there are no restrictions on which users can create volumes: if the user has any role in a tenant, they can create volumes in that tenant.
"volume:create": [],
- Service Management
The Identity Service provides the following service management functions:
Services
Endpoints
The Identity Service also maintains a user that corresponds to each service (such as a user named nova for the Compute service) and a special service tenant called service.
The commands for creating services and endpoints are described in a later section.
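As a preview, those commands follow the same pattern as the user management commands above. The following sketch is illustrative only; the service ID, region, and URLs are placeholders for your deployment:
$ keystone service-create --name=nova --type=compute --description="Nova Compute Service"
$ keystone endpoint-create --region RegionOne \
  --service-id=SERVICE_ID \
  --publicurl='http://compute.example.com:8774/v2/%(tenant_id)s' \
  --internalurl='http://compute.example.com:8774/v2/%(tenant_id)s' \
  --adminurl='http://compute.example.com:8774/v2/%(tenant_id)s'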
AMQP is the messaging technology chosen by the OpenStack cloud. The AMQP broker, either RabbitMQ or Qpid, sits between any two Nova components and allows them to communicate in a loosely coupled fashion. More precisely, Nova components (the compute fabric of OpenStack) use Remote Procedure Calls (RPC hereinafter) to communicate with one another; however, this paradigm is built atop the publish/subscribe paradigm, which provides the following benefits:
Decoupling between client and servant (the client does not need to know where the servant is).
Full asynchronism between client and servant (the client does not need the servant to be running at the time of the remote call).
Random balancing of remote calls (if multiple servants are up and running, one-way calls are transparently dispatched to the first available servant).
Nova uses direct, fanout, and topic-based exchanges. The architecture looks like the one depicted in the figure below:
Nova implements RPC (both request+response and one-way, respectively nicknamed ‘rpc.call’ and ‘rpc.cast’) over AMQP by providing an adapter class that takes care of marshaling and unmarshaling messages into function calls. Each Nova service, such as Compute and Scheduler, creates two queues at initialization time: one that accepts messages with routing keys of the form ‘NODE-TYPE.NODE-ID’ (for example, compute.hostname), and another that accepts messages with the generic routing key ‘NODE-TYPE’ (for example, compute). The former is used specifically when Nova-API needs to redirect commands to a specific node, as with ‘euca-terminate instance’; in that case, only the compute node whose host’s hypervisor is running the virtual machine can kill the instance. The API acts as a consumer when RPC calls are request/response; otherwise, it acts as a publisher only.
Nova RPC Mappings
The figure below shows the internals of a message broker node (referred to as a RabbitMQ node in the diagrams) when a single instance is deployed and shared in an OpenStack cloud. Every component within Nova connects to the message broker and, depending on its personality, such as a compute node or a network node, may use the queue either as an Invoker (such as API or Scheduler) or a Worker (such as Compute or Network). Invokers and Workers do not actually exist in the Nova object model, but they are used here as an abstraction for the sake of clarity. An Invoker is a component that sends messages in the queuing system using rpc.call and rpc.cast. A Worker is a component that receives messages from the queuing system and replies accordingly to rpc.call operations.
Figure 2 shows the following internal elements:
Topic Publisher: A Topic Publisher comes to life when an rpc.call or an rpc.cast operation is executed; this object is instantiated and used to push a message to the queuing system. Every publisher always connects to the same topic-based exchange; its life cycle is limited to the message delivery.
Direct Consumer: A Direct Consumer comes to life if (and only if) an rpc.call operation is executed; this object is instantiated and used to receive a response message from the queuing system. Every consumer connects to a unique direct-based exchange via a unique exclusive queue; its life cycle is limited to the message delivery; the exchange and queue identifiers are determined by a UUID generator and are marshaled in the message sent by the Topic Publisher (only for rpc.call operations).
Topic Consumer: A Topic Consumer comes to life as soon as a Worker is instantiated and exists throughout its life-cycle; this object is used to receive messages from the queue and it invokes the appropriate action as defined by the Worker role. A Topic Consumer connects to the same topic-based exchange either via a shared queue or via a unique exclusive queue. Every Worker has two topic consumers, one that is addressed only during rpc.cast operations (and it connects to a shared queue whose exchange key is ‘topic’) and the other that is addressed only during rpc.call operations (and it connects to a unique queue whose exchange key is ‘topic.host’).
Direct Publisher: A Direct Publisher comes to life only during rpc.call operations and it is instantiated to return the message required by the request/response operation. The object connects to a direct-based exchange whose identity is dictated by the incoming message.
Topic Exchange: The Exchange is a routing table that exists in the context of a virtual host (the multi-tenancy mechanism provided by Qpid or RabbitMQ); its type (such as topic vs. direct) determines the routing policy; a message broker node will have only one topic-based exchange for every topic in Nova.
Direct Exchange: This is a routing table that is created during rpc.call operations; there are many instances of this kind of exchange throughout the life-cycle of a message broker node, one for each rpc.call invoked.
Queue Element: A Queue is a message bucket. Messages are kept in the queue until a Consumer (either a Topic or Direct Consumer) connects to the queue and fetches them. Queues can be shared or exclusive. Queues whose routing key is ‘topic’ are shared amongst Workers of the same personality.
RPC Calls
The diagram below shows the message flow during an rpc.call operation:
A Topic Publisher is instantiated to send the message request to the queuing system. Immediately before the publishing operation, a Direct Consumer is instantiated to wait for the response message.
Once the message is dispatched by the exchange, it is fetched by the Topic Consumer dictated by the routing key (such as ‘topic.host’) and passed to the Worker in charge of the task.
Once the task is completed, a Direct Publisher is allocated to send the response message to the queuing system.
Once the message is dispatched by the exchange, it is fetched by the Direct Consumer dictated by the routing key (such as ‘msg_id’) and passed to the Invoker.
RPC Casts
The diagram below shows the message flow during an rpc.cast operation:
A Topic Publisher is instantiated to send the message request to the queuing system.
Once the message is dispatched by the exchange, it is fetched by the Topic Consumer dictated by the routing key (such as ‘topic’) and passed to the Worker in charge of the task.
AMQP Broker Load
At any given time the load of a message broker node running either Qpid or RabbitMQ is a function of the following parameters:
Throughput of API calls: the number of API calls (more precisely rpc.call ops) being served by the OpenStack cloud dictates the number of direct-based exchanges, related queues and direct consumers connected to them.
Number of Workers: there is one queue shared amongst workers with the same personality; however, there are as many exclusive queues as there are workers. The number of workers also dictates the number of routing keys within the topic-based exchange, which is shared amongst all workers.
The figure below shows the status of a RabbitMQ node after Nova components’ bootstrap in a test environment. Exchanges and queues being created by Nova components are:
Exchanges
nova (topic exchange)
Queues
compute.phantom (phantom is the hostname)
compute
network.phantom (phantom is the hostname)
network
scheduler.phantom (phantom is the hostname)
scheduler
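If you want to verify this state on a running broker, you can list the exchanges and queues directly on the RabbitMQ node. This is a minimal sketch; the rabbitmqctl utility ships with RabbitMQ and typically requires root privileges:
$ sudo rabbitmqctl list_exchanges name type
$ sudo rabbitmqctl list_queues name messages consumers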
RabbitMQ Gotchas
Nova uses Kombu to connect to the RabbitMQ environment. Kombu is a Python library that in turn uses AMQPLib, a library that implements the standard AMQP 0.8 at the time of writing. When using Kombu, Invokers and Workers need the following parameters in order to instantiate a Connection object that connects to the RabbitMQ server (please note that most of the following material can be also found in the Kombu documentation; it has been summarized and revised here for the sake of clarity):
Hostname: The hostname of the AMQP server.
Userid: A valid username used to authenticate to the server.
Password: The password used to authenticate to the server.
Virtual_host: The name of the virtual host to work with. This virtual host must exist on the server, and the user must have access to it. Default is “/”.
Port: The port of the AMQP server. Default is 5672 (amqp).
The following parameters have default values:
Insist: Insist on connecting to a server. In a configuration with multiple load-sharing servers, the Insist option tells the server that the client is insisting on a connection to the specified server. Default is False.
Connect_timeout: The timeout in seconds before the client gives up connecting to the server. The default is no timeout.
SSL: Use SSL to connect to the server. The default is False.
More precisely, consumers need the following parameters:
Connection: The above mentioned Connection object.
Queue: Name of the queue.
Exchange: Name of the exchange the queue binds to.
Routing_key: The interpretation of the routing key depends on the value of the exchange_type attribute.
Direct exchange: If the routing key property of the message and the routing_key attribute of the queue are identical, then the message is forwarded to the queue.
Fanout exchange: Messages are forwarded to the queues bound to the exchange, even if the binding does not have a key.
Topic exchange: If the routing key property of the message matches the routing key of the queue according to a primitive pattern matching scheme, the message is forwarded to the queue. The message routing key consists of words separated by dots (“.”, like domain names), and two special characters are available: star (“*”) and hash (“#”). The star matches any word, and the hash matches zero or more words. For example, “*.stock.#” matches the routing keys “usd.stock” and “eur.stock.db” but not “stock.nasdaq”.
Durable: This flag determines the durability of both exchanges and queues; durable exchanges and queues remain active when a RabbitMQ server restarts. Non-durable exchanges/queues (transient exchanges/queues) are purged when a server restarts. It is worth noting that AMQP specifies that durable queues cannot bind to transient exchanges. Default is True.
Auto_delete: If set, the exchange is deleted when all queues have finished using it. Default is False.
Exclusive: Exclusive queues (that is, non-shared queues) may only be consumed from by the current connection. When exclusive is on, this also implies auto_delete. Default is False.
Exchange_type: AMQP defines several default exchange types (routing algorithms) that cover most of the common messaging use cases.
Auto_ack: Acknowledgement is handled automatically once messages are received. By default auto_ack is set to False, and the receiver is required to manually handle acknowledgment.
No_ack: It disables acknowledgement on the server-side. This is different from auto_ack in that acknowledgement is turned off altogether. This functionality increases performance but at the cost of reliability. Messages can get lost if a client dies before it can deliver them to the application.
Auto_declare: If this is True and the exchange name is set, the exchange is automatically declared at instantiation. Auto declare is on by default.
Publishers specify most of the same parameters as consumers (they do not specify a queue name), but they can also specify the following:
Delivery_mode: The default delivery mode used for messages. The value is an integer. The following delivery modes are supported by RabbitMQ:
1 or “transient”: The message is transient, which means it is stored in memory only and is lost if the server dies or restarts.
2 or “persistent”: The message is persistent, which means the message is stored both in memory and on disk, and is therefore preserved if the server dies or restarts.
The default value is 2 (persistent). During a send operation, Publishers can override the delivery mode of messages so that, for example, transient messages can be sent over a durable queue.
Before you can use keystone client commands, you must download and source an OpenStack RC file. For information, see the OpenStack Admin User Guide.
The keystone command-line client uses the following syntax:
$ keystone PARAMETER COMMAND ARGUMENT
For example, you can run the user-list and tenant-create commands, as follows:
# Using OS_SERVICE_ENDPOINT and OS_SERVICE_TOKEN environment variables
$ export OS_SERVICE_ENDPOINT=http://127.0.0.1:5000/v2.0/
$ export OS_SERVICE_TOKEN=secrete_token
$ keystone user-list
$ keystone tenant-create --name demo
# Using --os-token and --os-endpoint parameters
$ keystone --os-token token --os-endpoint endpoint user-list
$ keystone --os-token token --os-endpoint endpoint tenant-create --name demo
# Using OS_USERNAME, OS_PASSWORD, and OS_TENANT_NAME environment variables
$ export OS_USERNAME=admin
$ export OS_PASSWORD=secrete
$ export OS_TENANT_NAME=admin
$ keystone user-list
$ keystone tenant-create --name demo
# Using the --tenant_id parameter
$ keystone user-list --tenant_id id
# Using the --name, --description, and --enabled parameters
$ keystone tenant-create --name demo --description "demo tenant" --enabled true
For information about using the keystone client commands to create and manage users, roles, and projects, see the OpenStack Admin User Guide.
The main components of Identity user management are:
User. Represents a human user. Has associated information such as user name, password, and email. This example creates a user named alice:
$ keystone user-create --name=alice --pass=mypassword123 [email protected]
Tenant. A project, group, or organization. When you make requests to OpenStack services, you must specify a tenant. For example, if you query the Compute service for a list of running instances, you get a list of all running instances in the tenant that you specified in your query. This example creates a tenant named acme:
$ keystone tenant-create --name=acme
Note: Because the term project was used instead of tenant in earlier versions of OpenStack Compute, some command-line tools use --project_id instead of --tenant-id or --os-tenant-id to refer to a tenant ID.
Role. Captures the operations that a user can perform in a given tenant. This example creates a role named compute-user:
$ keystone role-create --name=compute-user
Note: Individual services, such as Compute and the Image Service, assign meaning to roles. In the Identity Service, a role is simply a name.
The Identity Service assigns a tenant and a role to a user. You might assign the compute-user role to the alice user in the acme tenant:
$ keystone user-list +--------+---------+-------------------+--------+ | id | enabled | email | name | +--------+---------+-------------------+--------+ | 892585 | True | [email protected] | alice | +--------+---------+-------------------+--------+
$ keystone role-list +--------+--------------+ | id | name | +--------+--------------+ | 9a764e | compute-user | +--------+--------------+
$ keystone tenant-list +--------+------+---------+ | id | name | enabled | +--------+------+---------+ | 6b8fd2 | acme | True | +--------+------+---------+
$ keystone user-role-add --user=892585 --role=9a764e --tenant-id=6b8fd2
A user can have different roles in different tenants. For example, Alice might also have the admin role in the Cyberdyne tenant. A user can also have multiple roles in the same tenant.
The /etc/[SERVICE_CODENAME]/policy.json file controls the tasks that users can perform for a given service. For example, /etc/nova/policy.json specifies the access policy for the Compute service, /etc/glance/policy.json specifies the access policy for the Image Service, and /etc/keystone/policy.json specifies the access policy for the Identity Service.
The default policy.json files in the Compute, Identity, and Image Service recognize only the admin role: all operations that do not require the admin role are accessible by any user that has any role in a tenant.
If you wish to restrict users from performing operations in, say, the Compute service, you need to create a role in the Identity Service and then modify /etc/nova/policy.json so that this role is required for Compute operations.
For example, this line in /etc/nova/policy.json specifies that there are no restrictions on which users can create volumes: if the user has any role in a tenant, they can create volumes in that tenant.
"volume:create": [],
To restrict creation of volumes to users who have the compute-user role in a particular tenant, you would add "role:compute-user", like so:
"volume:create": ["role:compute-user"],
To restrict all Compute service requests to require this role, the resulting file would look like:
{ "admin_or_owner":[ [ "role:admin" ], [ "project_id:%(project_id)s" ] ], "default":[ [ "rule:admin_or_owner" ] ], "compute:create":[ "role:compute-user" ], "compute:create:attach_network":[ "role:compute-user" ], "compute:create:attach_volume":[ "role:compute-user" ], "compute:get_all":[ "role:compute-user" ], "compute:unlock_override":[ "rule:admin_api" ], "admin_api":[ [ "role:admin" ] ], "compute_extension:accounts":[ [ "rule:admin_api" ] ], "compute_extension:admin_actions":[ [ "rule:admin_api" ] ], "compute_extension:admin_actions:pause":[ [ "rule:admin_or_owner" ] ], "compute_extension:admin_actions:unpause":[ [ "rule:admin_or_owner" ] ], "compute_extension:admin_actions:suspend":[ [ "rule:admin_or_owner" ] ], "compute_extension:admin_actions:resume":[ [ "rule:admin_or_owner" ] ], "compute_extension:admin_actions:lock":[ [ "rule:admin_or_owner" ] ], "compute_extension:admin_actions:unlock":[ [ "rule:admin_or_owner" ] ], "compute_extension:admin_actions:resetNetwork":[ [ "rule:admin_api" ] ], "compute_extension:admin_actions:injectNetworkInfo":[ [ "rule:admin_api" ] ], "compute_extension:admin_actions:createBackup":[ [ "rule:admin_or_owner" ] ], "compute_extension:admin_actions:migrateLive":[ [ "rule:admin_api" ] ], "compute_extension:admin_actions:migrate":[ [ "rule:admin_api" ] ], "compute_extension:aggregates":[ [ "rule:admin_api" ] ], "compute_extension:certificates":[ "role:compute-user" ], "compute_extension:cloudpipe":[ [ "rule:admin_api" ] ], "compute_extension:console_output":[ "role:compute-user" ], "compute_extension:consoles":[ "role:compute-user" ], "compute_extension:createserverext":[ "role:compute-user" ], "compute_extension:deferred_delete":[ "role:compute-user" ], "compute_extension:disk_config":[ "role:compute-user" ], "compute_extension:evacuate":[ [ "rule:admin_api" ] ], "compute_extension:extended_server_attributes":[ [ "rule:admin_api" ] ], "compute_extension:extended_status":[ "role:compute-user" ], "compute_extension:flavorextradata":[ "role:compute-user" ], "compute_extension:flavorextraspecs":[ "role:compute-user" ], "compute_extension:flavormanage":[ [ "rule:admin_api" ] ], "compute_extension:floating_ip_dns":[ "role:compute-user" ], "compute_extension:floating_ip_pools":[ "role:compute-user" ], "compute_extension:floating_ips":[ "role:compute-user" ], "compute_extension:hosts":[ [ "rule:admin_api" ] ], "compute_extension:keypairs":[ "role:compute-user" ], "compute_extension:multinic":[ "role:compute-user" ], "compute_extension:networks":[ [ "rule:admin_api" ] ], "compute_extension:quotas":[ "role:compute-user" ], "compute_extension:rescue":[ "role:compute-user" ], "compute_extension:security_groups":[ "role:compute-user" ], "compute_extension:server_action_list":[ [ "rule:admin_api" ] ], "compute_extension:server_diagnostics":[ [ "rule:admin_api" ] ], "compute_extension:simple_tenant_usage:show":[ [ "rule:admin_or_owner" ] ], "compute_extension:simple_tenant_usage:list":[ [ "rule:admin_api" ] ], "compute_extension:users":[ [ "rule:admin_api" ] ], "compute_extension:virtual_interfaces":[ "role:compute-user" ], "compute_extension:virtual_storage_arrays":[ "role:compute-user" ], "compute_extension:volumes":[ "role:compute-user" ], "compute_extension:volume_attachments:index":[ "role:compute-user" ], "compute_extension:volume_attachments:show":[ "role:compute-user" ], "compute_extension:volume_attachments:create":[ "role:compute-user" ], "compute_extension:volume_attachments:delete":[ "role:compute-user" ], "compute_extension:volumetypes":[ 
"role:compute-user" ], "volume:create":[ "role:compute-user" ], "volume:get_all":[ "role:compute-user" ], "volume:get_volume_metadata":[ "role:compute-user" ], "volume:get_snapshot":[ "role:compute-user" ], "volume:get_all_snapshots":[ "role:compute-user" ], "network:get_all_networks":[ "role:compute-user" ], "network:get_network":[ "role:compute-user" ], "network:delete_network":[ "role:compute-user" ], "network:disassociate_network":[ "role:compute-user" ], "network:get_vifs_by_instance":[ "role:compute-user" ], "network:allocate_for_instance":[ "role:compute-user" ], "network:deallocate_for_instance":[ "role:compute-user" ], "network:validate_networks":[ "role:compute-user" ], "network:get_instance_uuids_by_ip_filter":[ "role:compute-user" ], "network:get_floating_ip":[ "role:compute-user" ], "network:get_floating_ip_pools":[ "role:compute-user" ], "network:get_floating_ip_by_address":[ "role:compute-user" ], "network:get_floating_ips_by_project":[ "role:compute-user" ], "network:get_floating_ips_by_fixed_address":[ "role:compute-user" ], "network:allocate_floating_ip":[ "role:compute-user" ], "network:deallocate_floating_ip":[ "role:compute-user" ], "network:associate_floating_ip":[ "role:compute-user" ], "network:disassociate_floating_ip":[ "role:compute-user" ], "network:get_fixed_ip":[ "role:compute-user" ], "network:add_fixed_ip_to_instance":[ "role:compute-user" ], "network:remove_fixed_ip_from_instance":[ "role:compute-user" ], "network:add_network_to_project":[ "role:compute-user" ], "network:get_instance_nw_info":[ "role:compute-user" ], "network:get_dns_domains":[ "role:compute-user" ], "network:add_dns_entry":[ "role:compute-user" ], "network:modify_dns_entry":[ "role:compute-user" ], "network:delete_dns_entry":[ "role:compute-user" ], "network:get_dns_entries_by_address":[ "role:compute-user" ], "network:get_dns_entries_by_name":[ "role:compute-user" ], "network:create_private_dns_domain":[ "role:compute-user" ], "network:create_public_dns_domain":[ "role:compute-user" ], "network:delete_dns_domain":[ "role:compute-user" ] }
The glance client is the command-line interface (CLI) for the OpenStack Image Service API and its extensions. This chapter documents glance version 0.12.0.
For help on a specific glance command, enter:
$ glance help COMMAND
usage: glance [--version] [-d] [-v] [--get-schema] [-k] [--cert-file CERT_FILE] [--key-file KEY_FILE] [--os-cacert <ca-certificate-file>] [--ca-file OS_CACERT] [--timeout TIMEOUT] [--no-ssl-compression] [-f] [--dry-run] [--ssl] [-H ADDRESS] [-p PORT] [--os-username OS_USERNAME] [-I OS_USERNAME] [--os-password OS_PASSWORD] [-K OS_PASSWORD] [--os-tenant-id OS_TENANT_ID] [--os-tenant-name OS_TENANT_NAME] [-T OS_TENANT_NAME] [--os-auth-url OS_AUTH_URL] [-N OS_AUTH_URL] [--os-region-name OS_REGION_NAME] [-R OS_REGION_NAME] [--os-auth-token OS_AUTH_TOKEN] [-A OS_AUTH_TOKEN] [--os-image-url OS_IMAGE_URL] [-U OS_IMAGE_URL] [--os-image-api-version OS_IMAGE_API_VERSION] [--os-service-type OS_SERVICE_TYPE] [--os-endpoint-type OS_ENDPOINT_TYPE] [-S OS_AUTH_STRATEGY] <subcommand> ...
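For example, to display detailed help for a single subcommand such as image-create, run:
$ glance help image-create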
Subcommands
- add
DEPRECATED! Use image-create instead.
- clear
DEPRECATED!
- delete
DEPRECATED! Use image-delete instead.
- details
DEPRECATED! Use image-list instead.
- image-create
Create a new image.
- image-delete
Delete specified image(s).
- image-download
Download a specific image.
- image-list
List images you can access.
- image-members
DEPRECATED! Use member-list instead.
- image-show
Describe a specific image.
- image-update
Update a specific image.
- index
DEPRECATED! Use image-list instead.
- member-add
DEPRECATED! Use member-create instead.
- member-create
Share a specific image with a tenant.
- member-delete
Remove a shared image from a tenant.
- member-images
DEPRECATED! Use member-list instead.
- member-list
Describe sharing permissions by image or tenant.
- members-replace
DEPRECATED!
- show
DEPRECATED! Use image-show instead.
- update
DEPRECATED! Use image-update instead.
- help
Display help about this program or one of its subcommands.
- --version
show program's version number and exit
- -d, --debug
Defaults to
env[GLANCECLIENT_DEBUG]
- -v, --verbose
Print more verbose output
- --get-schema
Force retrieving the schema used to generate portions of the help text rather than using a cached copy. Ignored with api version 1
- -k, --insecure
Explicitly allow glanceclient to perform "insecure SSL" (https) requests. The server's certificate will not be verified against any certificate authorities. This option should be used with caution.
- --cert-file CERT_FILE
Path of certificate file to use in SSL connection. This file can optionally be prepended with the private key.
- --key-file KEY_FILE
Path of client key to use in SSL connection. This option is not necessary if your key is prepended to your cert file.
- --os-cacert <ca-certificate-file>
Path of CA TLS certificate(s) used to verify the remote server's certificate. Without this option glance looks for the default system CA certificates.
- --ca-file OS_CACERT
DEPRECATED! Use --os-cacert.
- --timeout TIMEOUT
Number of seconds to wait for a response
- --no-ssl-compression
Disable SSL compression when using https.
- -f, --force
Prevent select actions from requesting user confirmation.
- --dry-run
DEPRECATED! Only used for deprecated legacy commands.
- --ssl
DEPRECATED! Send a fully-formed endpoint using --os-image-url instead.
- -H ADDRESS, --host ADDRESS
DEPRECATED! Send a fully-formed endpoint using --os-image-url instead.
- -p PORT, --port PORT
DEPRECATED! Send a fully-formed endpoint using --os-image-url instead.
- --os-username OS_USERNAME
Defaults to
env[OS_USERNAME]
- -I OS_USERNAME
DEPRECATED! Use --os-username.
- --os-password OS_PASSWORD
Defaults to
env[OS_PASSWORD]
- -K OS_PASSWORD
DEPRECATED! Use --os-password.
- --os-tenant-id OS_TENANT_ID
Defaults to
env[OS_TENANT_ID]
- --os-tenant-name OS_TENANT_NAME
Defaults to
env[OS_TENANT_NAME]
- -T OS_TENANT_NAME
DEPRECATED! Use --os-tenant-name.
- --os-auth-url OS_AUTH_URL
Defaults to
env[OS_AUTH_URL]
- -N OS_AUTH_URL
DEPRECATED! Use --os-auth-url.
- --os-region-name OS_REGION_NAME
Defaults to
env[OS_REGION_NAME]
- -R OS_REGION_NAME
DEPRECATED! Use --os-region-name.
- --os-auth-token OS_AUTH_TOKEN
Defaults to
env[OS_AUTH_TOKEN]
- -A OS_AUTH_TOKEN, --auth_token OS_AUTH_TOKEN
DEPRECATED! Use --os-auth-token.
- --os-image-url OS_IMAGE_URL
Defaults to
env[OS_IMAGE_URL]
- -U OS_IMAGE_URL, --url OS_IMAGE_URL
DEPRECATED! Use --os-image-url.
- --os-image-api-version OS_IMAGE_API_VERSION
Defaults to env[OS_IMAGE_API_VERSION] or 1.
- --os-service-type OS_SERVICE_TYPE
Defaults to
env[OS_SERVICE_TYPE]
- --os-endpoint-type OS_ENDPOINT_TYPE
Defaults to
env[OS_ENDPOINT_TYPE]
- -S OS_AUTH_STRATEGY, --os_auth_strategy OS_AUTH_STRATEGY
DEPRECATED! This option is completely ignored.
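Most of the --os-* options above are usually supplied through the corresponding environment variables rather than on the command line. A minimal sketch, with illustrative credential values, might look like this:
$ export OS_USERNAME=admin
$ export OS_PASSWORD=secrete
$ export OS_TENANT_NAME=admin
$ export OS_AUTH_URL=http://127.0.0.1:5000/v2.0/
$ glance image-list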
usage: glance image-create [--id <IMAGE_ID>] [--name <NAME>] [--store <STORE>] [--disk-format <DISK_FORMAT>] [--container-format <CONTAINER_FORMAT>] [--owner <TENANT_ID>] [--size <SIZE>] [--min-disk <DISK_GB>] [--min-ram <DISK_RAM>] [--location <IMAGE_URL>] [--file <FILE>] [--checksum <CHECKSUM>] [--copy-from <IMAGE_URL>] [--is-public {True,False}] [--is-protected {True,False}] [--property <key=value>] [--human-readable] [--progress]
Create a new image.
Optional arguments
- --id <IMAGE_ID>
ID of image to reserve.
- --name <NAME>
Name of image.
- --store <STORE>
Store to upload image to.
- --disk-format <DISK_FORMAT>
Disk format of image. Acceptable formats: ami, ari, aki, vhd, vmdk, raw, qcow2, vdi, and iso.
- --container-format <CONTAINER_FORMAT>
Container format of image. Acceptable formats: ami, ari, aki, bare, and ovf.
- --owner <TENANT_ID>
Tenant who should own image.
- --size <SIZE>
Size of image data (in bytes). Only used with '--location' and '--copy_from'.
- --min-disk <DISK_GB>
Minimum size of disk needed to boot image (in gigabytes).
- --min-ram <DISK_RAM>
Minimum amount of ram needed to boot image (in megabytes).
- --location <IMAGE_URL>
URL where the data for this image already resides. For example, if the image data is stored in swift, you could specify 'swift://account:[email protected]/container/obj'.
- --file <FILE>
Local file that contains disk image to be uploaded during creation. Alternatively, images can be passed to the client via stdin.
- --checksum <CHECKSUM>
Hash of image data that Glance can use for verification. Provide an MD5 checksum here.
- --copy-from <IMAGE_URL>
Similar to '--location' in usage, but this indicates that the Glance server should immediately copy the data and store it in its configured image store.
- --is-public {True,False}
Make image accessible to the public.
- --is-protected {True,False}
Prevent image from being deleted.
- --property <key=value>
Arbitrary property to associate with image. May be used multiple times.
- --human-readable
Print image size in a human-friendly format.
- --progress
Show upload progress bar.
usage: glance image-delete <IMAGE> [<IMAGE> ...]
Delete specified image(s).
Positional arguments
- <IMAGE>
Name or ID of image(s) to delete.
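For example, to delete a single image by name (the name is illustrative):
$ glance image-delete myCirrosImage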
usage: glance image-list [--name <NAME>] [--status <STATUS>] [--container-format <CONTAINER_FORMAT>] [--disk-format <DISK_FORMAT>] [--size-min <SIZE>] [--size-max <SIZE>] [--property-filter <KEY=VALUE>] [--page-size <SIZE>] [--human-readable] [--sort-key {name,status,container_format,disk_format,size,id,created_at,updated_at}] [--sort-dir {asc,desc}] [--is-public {True,False}] [--owner <TENANT_ID>] [--all-tenants]
List images you can access.
Optional arguments
- --name <NAME>
Filter images to those that have this name.
- --status <STATUS>
Filter images to those that have this status.
- --container-format <CONTAINER_FORMAT>
Filter images to those that have this container format. Acceptable formats: ami, ari, aki, bare, and ovf.
- --disk-format <DISK_FORMAT>
Filter images to those that have this disk format. Acceptable formats: ami, ari, aki, vhd, vmdk, raw, qcow2, vdi, and iso.
- --size-min <SIZE>
Filter images to those with a size greater than this.
- --size-max <SIZE>
Filter images to those with a size less than this.
- --property-filter <KEY=VALUE>
Filter images by a user-defined image property.
- --page-size <SIZE>
Number of images to request in each paginated request.
- --human-readable
Print image size in a human-friendly format.
- --sort-key {name,status,container_format,disk_format,size,id,created_at,updated_at}
Sort image list by specified field.
- --sort-dir {asc,desc}
Sort image list in specified direction.
- --is-public {True,False}
Allows the user to select a listing of public or non public images.
- --owner <TENANT_ID>
Display only images owned by this tenant id. Filtering occurs on the client side so may be inefficient. This option is mainly intended for admin use. Use an empty string ('') to list images with no owner. Note: This option overrides the --is-public argument if present. Note: the v2 API supports more efficient server-side owner based filtering.
- --all-tenants
Allows the admin user to list all images irrespective of the image's owner or is_public value.
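For example, to list only active qcow2 images, sorted by name, using the filters described above:
$ glance image-list --status active --disk-format qcow2 --sort-key name --sort-dir asc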
usage: glance image-show [--human-readable] <IMAGE>
Describe a specific image.
Positional arguments
- <IMAGE>
Name or ID of image to describe.
Optional arguments
- --human-readable
Print image size in a human-friendly format.
usage: glance image-update [--name <NAME>] [--disk-format <DISK_FORMAT>] [--container-format <CONTAINER_FORMAT>] [--owner <TENANT_ID>] [--size <SIZE>] [--min-disk <DISK_GB>] [--min-ram <DISK_RAM>] [--location <IMAGE_URL>] [--file <FILE>] [--checksum <CHECKSUM>] [--copy-from <IMAGE_URL>] [--is-public {True,False}] [--is-protected {True,False}] [--property <key=value>] [--purge-props] [--human-readable] [--progress] <IMAGE>
Update a specific image.
Positional arguments
- <IMAGE>
Name or ID of image to modify.
Optional arguments
- --name <NAME>
Name of image.
- --disk-format <DISK_FORMAT>
Disk format of image. Acceptable formats: ami, ari, aki, vhd, vmdk, raw, qcow2, vdi, and iso.
- --container-format <CONTAINER_FORMAT>
Container format of image. Acceptable formats: ami, ari, aki, bare, and ovf.
- --owner <TENANT_ID>
Tenant who should own image.
- --size <SIZE>
Size of image data (in bytes).
- --min-disk <DISK_GB>
Minimum size of disk needed to boot image (in gigabytes).
- --min-ram <DISK_RAM>
Minimum amount of ram needed to boot image (in megabytes).
- --location <IMAGE_URL>
URL where the data for this image already resides. For example, if the image data is stored in swift, you could specify 'swift://account:[email protected]/container/obj'.
- --file <FILE>
Local file that contains disk image to be uploaded during update. Alternatively, images can be passed to the client via stdin.
- --checksum <CHECKSUM>
Hash of image data that Glance can use for verification.
- --copy-from <IMAGE_URL>
Similar to '--location' in usage, but this indicates that the Glance server should immediately copy the data and store it in its configured image store.
- --is-public {True,False}
Make image accessible to the public.
- --is-protected {True,False}
Prevent image from being deleted.
- --property <key=value>
Arbitrary property to associate with image. May be used multiple times.
- --purge-props
If this flag is present, delete all image properties not explicitly set in the update request. Otherwise, those properties not referenced are preserved.
- --human-readable
Print image size in a human-friendly format.
- --progress
Show upload progress bar.
usage: glance member-create [--can-share] <IMAGE> <TENANT_ID>
Share a specific image with a tenant.
Positional arguments
- <IMAGE>
Image to add member to.
- <TENANT_ID>
Tenant to add as member
Optional arguments
- --can-share
Allow the specified tenant to share this image.
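For example, to share an image with another tenant and allow that tenant to re-share it (both IDs below are placeholders):
$ glance member-create --can-share IMAGE_ID CONSUMER_TENANT_ID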
usage: glance member-delete <IMAGE> <TENANT_ID>
Remove a shared image from a tenant.
Positional arguments
- <IMAGE>
Image from which to remove member
- <TENANT_ID>
Tenant to remove as member
To get a list of images and to then get further details about a single image, use glance image-list and glance image-show.
$ glance image-list +--------------------------------------+---------------------------------+-------------+------------------+----------+--------+ | ID | Name | Disk Format | Container Format | Size | Status | +--------------------------------------+---------------------------------+-------------+------------------+----------+--------+ | 397e713c-b95b-4186-ad46-6126863ea0a9 | cirros-0.3.2-x86_64-uec | ami | ami | 25165824 | active | | df430cc2-3406-4061-b635-a51c16e488ac | cirros-0.3.2-x86_64-uec-kernel | aki | aki | 4955792 | active | | 3cf852bd-2332-48f4-9ae4-7d926d50945e | cirros-0.3.2-x86_64-uec-ramdisk | ari | ari | 3714968 | active | | 7e5142af-1253-4634-bcc6-89482c5f2e8a | myCirrosImage | ami | ami | 14221312 | active | +--------------------------------------+---------------------------------+-------------+------------------+----------+--------+
$ glance image-show myCirrosImage +---------------------------------------+--------------------------------------+ | Property | Value | +---------------------------------------+--------------------------------------+ | Property 'base_image_ref' | 397e713c-b95b-4186-ad46-6126863ea0a9 | | Property 'image_location' | snapshot | | Property 'image_state' | available | | Property 'image_type' | snapshot | | Property 'instance_type_ephemeral_gb' | 0 | | Property 'instance_type_flavorid' | 2 | | Property 'instance_type_id' | 5 | | Property 'instance_type_memory_mb' | 2048 | | Property 'instance_type_name' | m1.small | | Property 'instance_type_root_gb' | 20 | | Property 'instance_type_rxtx_factor' | 1 | | Property 'instance_type_swap' | 0 | | Property 'instance_type_vcpu_weight' | None | | Property 'instance_type_vcpus' | 1 | | Property 'instance_uuid' | 84c6e57d-a6b1-44b6-81eb-fcb36afd31b5 | | Property 'kernel_id' | df430cc2-3406-4061-b635-a51c16e488ac | | Property 'owner_id' | 66265572db174a7aa66eba661f58eb9e | | Property 'ramdisk_id' | 3cf852bd-2332-48f4-9ae4-7d926d50945e | | Property 'user_id' | 376744b5910b4b4da7d8e6cb483b06a8 | | checksum | 8e4838effa1969ad591655d6485c7ba8 | | container_format | ami | | created_at | 2013-07-22T19:45:58 | | deleted | False | | disk_format | ami | | id | 7e5142af-1253-4634-bcc6-89482c5f2e8a | | is_public | False | | min_disk | 0 | | min_ram | 0 | | name | myCirrosImage | | owner | 66265572db174a7aa66eba661f58eb9e | | protected | False | | size | 14221312 | | status | active | | updated_at | 2013-07-22T19:46:42 | +---------------------------------------+--------------------------------------+
When viewing a list of images, you can also use grep to filter the list, as follows:
$ glance image-list | grep 'cirros' | 397e713c-b95b-4186-ad46-6126863ea0a9 | cirros-0.3.2-x86_64-uec | ami | ami | 25165824 | active | | df430cc2-3406-4061-b635-a51c16e488ac | cirros-0.3.2-x86_64-uec-kernel | aki | aki | 4955792 | active | | 3cf852bd-2332-48f4-9ae4-7d926d50945e | cirros-0.3.2-x86_64-uec-ramdisk | ari | ari | 3714968 | active |
Note: To store location metadata for images, which enables direct file access for a client, update the Image Service configuration accordingly. After you restart the Image Service, you can use the following syntax to view the image's location information:
$ glance --os-image-api-version=2 image-show IMAGE_ID
For example, using the image ID shown above, you would issue the command as follows:
$ glance --os-image-api-version=2 image-show 2d9bb53f-70ea-4066-a68b-67960eaae673
To create an image, use glance image-create:
$ glance image-create imageName
To update an image by name or ID, use glance image-update:
$ glance image-update imageName
The following table lists the optional arguments that you can use with the create and update commands to modify image properties. For more information, refer to Image Service chapter in the OpenStack Command-Line Interface Reference.
Optional argument | Description |
---|---|
--name <NAME> | The name of the image. |
--disk-format <DISK_FORMAT> | The disk format of the image. Acceptable formats are ami, ari, aki, vhd, vmdk, raw, qcow2, vdi, and iso. |
--container-format <CONTAINER_FORMAT> | The container format of the image. Acceptable formats are ami, ari, aki, bare, and ovf. |
--owner <TENANT_ID> | The tenant who should own the image. |
--size <SIZE> | The size of image data, in bytes. |
--min-disk <DISK_GB> | The minimum size of the disk needed to boot the image, in gigabytes. |
--min-ram <DISK_RAM> | The minimum amount of RAM needed to boot the image, in megabytes. |
--location <IMAGE_URL> | The URL where the data for this image resides. For example, if the image data is stored in swift, you could specify 'swift://account:[email protected]/container/obj'. |
--file <FILE> | Local file that contains the disk image to be uploaded during the update. Alternatively, you can pass images to the client through stdin. |
--checksum <CHECKSUM> | Hash of image data to use for verification. |
--copy-from <IMAGE_URL> | Similar to --location in usage, but indicates that the Glance server should immediately copy the data and store it in its configured image store. |
--is-public {True,False} | Makes an image accessible for all the tenants. |
--is-protected {True,False} | Prevents an image from being deleted. |
--property <key=value> | Arbitrary property to associate with image. This option can be used multiple times. |
--purge-props | Deletes all image properties that are not explicitly set in the update request. Otherwise, those properties not referenced are preserved. |
--human-readable | Prints the image size in a human-friendly format. |
The following example shows the command that you would use to upload a CentOS 6.3 image in qcow2 format and configure it for public access:
$ glance image-create --name centos63-image --disk-format=qcow2 \ --container-format=bare --is-public=True --file=./centos63.qcow2
The following example shows how to update an existing image with properties that describe the disk bus, the CD-ROM bus, and the VIF model:
$ glance image-update \ --property hw_disk_bus=scsi \ --property hw_cdrom_bus=ide \ --property hw_vif_model=e1000 \ f16-x86_64-openstack-sda
Currently, the libvirt virtualization tool determines the disk, CD-ROM, and VIF device models based on the configured hypervisor type (libvirt_type in /etc/nova/nova.conf). For the sake of optimal performance, libvirt defaults to using virtio for both disk and VIF (NIC) models. The disadvantage of this approach is that it is not possible to run operating systems that lack virtio drivers, for example, BSD, Solaris, and older versions of Linux and Windows.
If you specify a disk or CD-ROM bus model that is not supported, see Table 3.1, “Disk and CD-ROM bus model values”. If you specify a VIF model that is not supported, the instance fails to launch. See Table 3.2, “VIF model values”.
The valid model values depend on the libvirt_type setting, as shown in the following tables.
Table 3.1. Disk and CD-ROM bus model values
libvirt_type setting | Supported model values |
---|---|
qemu or kvm | |
xen | |
Table 3.2. VIF model values
libvirt_type setting | Supported model values |
---|---|
qemu or kvm | |
xen | |
vmware | |
You can use the nova client to take a snapshot of a running instance to create an image.
To minimize the potential for data loss and ensure that you create an accurate image, you should shut down the instance before you take a snapshot.
You cannot create a snapshot from an instance that has an attached volume. Detach the volume, create the image, and remount the volume.
Write any buffered data to disk.
For more information, see Taking Snapshots in the OpenStack Operations Guide.
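To write buffered data to disk on a Linux guest, for example, you can run sync inside the instance before you take the snapshot; this assumes you have shell access to the instance and the exact procedure depends on the guest operating system:
$ sync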
List instances to get the server name:
$ nova list +--------------------------------------+----------------------+--------+------------+-------------+------------------+ | ID | Name | Status | Task State | Power State | Networks | +--------------------------------------+----------------------+--------+------------+-------------+------------------+ | 84c6e57d-a6b1-44b6-81eb-fcb36afd31b5 | myCirrosServer | ACTIVE | None | Running | private=10.0.0.3 | +--------------------------------------+----------------------+--------+------------+-------------+------------------+
In this example, the server is named myCirrosServer. Use this server to create a snapshot:
$ nova image-create myCirrosServer myCirrosImage
The command creates a qemu snapshot and automatically uploads the image to your repository. Only the tenant that creates the image has access to it.
Get details for your image to check its status:
$ nova image-show myCirrosImage +-------------------------------------+--------------------------------------+ | Property | Value | +-------------------------------------+--------------------------------------+ | metadata owner_id | 66265572db174a7aa66eba661f58eb9e | | minDisk | 0 | | metadata instance_type_name | m1.small | | metadata instance_type_id | 5 | | metadata instance_type_memory_mb | 2048 | | id | 7e5142af-1253-4634-bcc6-89482c5f2e8a | | metadata instance_type_root_gb | 20 | | metadata instance_type_rxtx_factor | 1 | | metadata ramdisk_id | 3cf852bd-2332-48f4-9ae4-7d926d50945e | | metadata image_state | available | | metadata image_location | snapshot | | minRam | 0 | | metadata instance_type_vcpus | 1 | | status | ACTIVE | | updated | 2013-07-22T19:46:42Z | | metadata instance_type_swap | 0 | | metadata instance_type_vcpu_weight | None | | metadata base_image_ref | 397e713c-b95b-4186-ad46-6126863ea0a9 | | progress | 100 | | metadata instance_type_flavorid | 2 | | OS-EXT-IMG-SIZE:size | 14221312 | | metadata image_type | snapshot | | metadata user_id | 376744b5910b4b4da7d8e6cb483b06a8 | | name | myCirrosImage | | created | 2013-07-22T19:45:58Z | | metadata instance_uuid | 84c6e57d-a6b1-44b6-81eb-fcb36afd31b5 | | server | 84c6e57d-a6b1-44b6-81eb-fcb36afd31b5 | | metadata kernel_id | df430cc2-3406-4061-b635-a51c16e488ac | | metadata instance_type_ephemeral_gb | 0 | +-------------------------------------+--------------------------------------+
The image status changes from SAVING to ACTIVE. Only the tenant who creates the image has access to it.
To launch an instance from your image, include the image ID and flavor ID, as in the following example:
$ nova boot newServer --image 7e5142af-1253-4634-bcc6-89482c5f2e8a \ --flavor 3 +-------------------------------------+--------------------------------------+ | Property | Value | +-------------------------------------+--------------------------------------+ | OS-EXT-STS:task_state | scheduling | | image | myCirrosImage | | OS-EXT-STS:vm_state | building | | OS-EXT-SRV-ATTR:instance_name | instance-00000007 | | flavor | m1.medium | | id | d7efd3e4-d375-46d1-9d57-372b6e4bdb7f | | security_groups | [{u'name': u'default'}] | | user_id | 376744b5910b4b4da7d8e6cb483b06a8 | | OS-DCF:diskConfig | MANUAL | | accessIPv4 | | | accessIPv6 | | | progress | 0 | | OS-EXT-STS:power_state | 0 | | OS-EXT-AZ:availability_zone | nova | | config_drive | | | status | BUILD | | updated | 2013-07-22T19:58:33Z | | hostId | | | OS-EXT-SRV-ATTR:host | None | | key_name | None | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | | name | newServer | | adminPass | jis88nN46RGP | | tenant_id | 66265572db174a7aa66eba661f58eb9e | | created | 2013-07-22T19:58:33Z | | metadata | {} | +-------------------------------------+--------------------------------------+
OpenStack projects use AMQP, an open standard for messaging middleware, so that OpenStack services that run on multiple servers can talk to each other. OpenStack Oslo RPC supports three implementations of AMQP: RabbitMQ, Qpid, and ZeroMQ.
OpenStack Oslo RPC uses RabbitMQ by default. Use these options to configure the RabbitMQ message system. The rpc_backend option is not required as long as RabbitMQ is the default messaging system. However, if it is included in the configuration, you must set it to nova.openstack.common.rpc.impl_kombu.
rpc_backend=nova.openstack.common.rpc.impl_kombu
You can use these additional options to configure the RabbitMQ messaging system. You can configure messaging communication for different installation scenarios, tune retries for RabbitMQ, and define the size of the RPC thread pool. To monitor notifications through RabbitMQ, you must set the notification_driver option to nova.notifier.rabbit_notifier in the nova.conf file. The default for sending usage data is sixty seconds plus a random number of seconds from zero to sixty.
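For example, to enable RabbitMQ notifications, add the following line to the nova.conf file:
notification_driver=nova.notifier.rabbit_notifier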
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
rabbit_ha_queues = False | (BoolOpt) Use HA queues in RabbitMQ (x-ha-policy: all). If you change this option, you must wipe the RabbitMQ database. |
rabbit_host = localhost | (StrOpt) The RabbitMQ broker address where a single node is used. |
rabbit_hosts = $rabbit_host:$rabbit_port | (ListOpt) RabbitMQ HA cluster host:port pairs. |
rabbit_login_method = AMQPLAIN | (StrOpt) the RabbitMQ login method |
rabbit_max_retries = 0 | (IntOpt) Maximum number of RabbitMQ connection retries. Default is 0 (infinite retry count). |
rabbit_password = guest | (StrOpt) The RabbitMQ password. |
rabbit_port = 5672 | (IntOpt) The RabbitMQ broker port where a single node is used. |
rabbit_retry_backoff = 2 | (IntOpt) How long to backoff for between retries when connecting to RabbitMQ. |
rabbit_retry_interval = 1 | (IntOpt) How frequently to retry connecting with RabbitMQ. |
rabbit_use_ssl = False | (BoolOpt) Connect over SSL for RabbitMQ. |
rabbit_userid = guest | (StrOpt) The RabbitMQ userid. |
rabbit_virtual_host = / | (StrOpt) The RabbitMQ virtual host. |
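Putting several of these options together, a nova.conf fragment for a RabbitMQ broker that runs on a dedicated host might look like the following; the host name, credentials, and virtual host are illustrative:
rpc_backend=nova.openstack.common.rpc.impl_kombu
rabbit_host=rabbit.example.com
rabbit_port=5672
rabbit_userid=nova
rabbit_password=RABBIT_PASS
rabbit_virtual_host=/nova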
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
kombu_reconnect_delay = 1.0 | (FloatOpt) How long to wait before reconnecting in response to an AMQP consumer cancel notification. |
kombu_ssl_ca_certs = | (StrOpt) SSL certification authority file (valid only if SSL enabled). |
kombu_ssl_certfile = | (StrOpt) SSL cert file (valid only if SSL enabled). |
kombu_ssl_keyfile = | (StrOpt) SSL key file (valid only if SSL enabled). |
kombu_ssl_version = | (StrOpt) SSL version to use (valid only if SSL enabled). valid values are TLSv1, SSLv23 and SSLv3. SSLv2 may be available on some distributions. |
Use these options to configure the Qpid messaging system for OpenStack Oslo RPC. Qpid is not the default messaging system, so you must enable it by setting the rpc_backend option in the nova.conf file.
rpc_backend=nova.openstack.common.rpc.impl_qpid
This critical option points the compute nodes to the Qpid broker (server). Set qpid_hostname to the host name where the broker runs in the nova.conf file.
Note: The qpid_hostname option accepts a host name or IP address value.
qpid_hostname=hostname.example.com
If the Qpid broker listens on a port other than the AMQP default of 5672, you must set the qpid_port option to that value:
qpid_port=12345
If you configure the Qpid broker to require authentication, you must add a user name and password to the configuration:
qpid_username=username qpid_password=password
By default, TCP is used as the transport. To enable SSL, set the qpid_protocol option:
qpid_protocol=ssl
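Taken together, a nova.conf fragment for an SSL-enabled, authenticated Qpid broker might look like the following; the host name, port, and credentials are illustrative:
rpc_backend=nova.openstack.common.rpc.impl_qpid
qpid_hostname=hostname.example.com
qpid_port=5671
qpid_username=username
qpid_password=password
qpid_protocol=ssl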
This table lists additional options that you use to configure the Qpid messaging driver for OpenStack Oslo RPC. These options are used infrequently.
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
qpid_heartbeat = 60 | (IntOpt) Seconds between connection keepalive heartbeats. |
qpid_hostname = localhost | (StrOpt) Qpid broker hostname. |
qpid_hosts = $qpid_hostname:$qpid_port | (ListOpt) Qpid HA cluster host:port pairs. |
qpid_password = | (StrOpt) Password for Qpid connection. |
qpid_port = 5672 | (IntOpt) Qpid broker port. |
qpid_protocol = tcp | (StrOpt) Transport to use, either 'tcp' or 'ssl'. |
qpid_sasl_mechanisms = | (StrOpt) Space separated list of SASL mechanisms to use for auth. |
qpid_tcp_nodelay = True | (BoolOpt) Whether to disable the Nagle algorithm. |
qpid_topology_version = 1 | (IntOpt) The qpid topology version to use. Version 1 is what was originally used by impl_qpid. Version 2 includes some backwards-incompatible changes that allow broker federation to work. Users should update to version 2 when they are able to take everything down, as it requires a clean break. |
qpid_username = | (StrOpt) Username for Qpid connection. |
Use these options to configure the ZeroMQ messaging system for OpenStack Oslo RPC. ZeroMQ is not the default messaging system, so you must enable it by setting the rpc_backend option in the nova.conf file.
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
rpc_zmq_bind_address = * | (StrOpt) ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP. The "host" option should point or resolve to this address. |
rpc_zmq_contexts = 1 | (IntOpt) Number of ZeroMQ contexts, defaults to 1. |
rpc_zmq_host = oslo | (StrOpt) Name of this node. Must be a valid hostname, FQDN, or IP address. Must match "host" option, if running Nova. |
rpc_zmq_ipc_dir = /var/run/openstack | (StrOpt) Directory for holding IPC sockets. |
rpc_zmq_matchmaker = oslo.messaging._drivers.matchmaker.MatchMakerLocalhost | (StrOpt) MatchMaker driver. |
rpc_zmq_port = 9501 | (IntOpt) ZeroMQ receiver listening port. |
rpc_zmq_topic_backlog = None | (IntOpt) Maximum number of ingress messages to locally buffer per topic. Default is unlimited. |
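For example, following the same naming pattern as the RabbitMQ and Qpid drivers above, a minimal nova.conf fragment for ZeroMQ might look like the following; the driver path and host name are assumptions for your deployment:
rpc_backend=nova.openstack.common.rpc.impl_zmq
rpc_zmq_host=controller.example.com
rpc_zmq_bind_address=*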
Use these options to configure the RabbitMQ and Qpid messaging drivers.
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
amqp_auto_delete = False | (BoolOpt) Auto-delete queues in amqp. |
amqp_durable_queues = False | (BoolOpt) Use durable queues in amqp. |
control_exchange = openstack | (StrOpt) The default exchange under which topics are scoped. May be overridden by an exchange name specified in the transport_url option. |
matchmaker_heartbeat_freq = 300 | (IntOpt) Heartbeat frequency. |
matchmaker_heartbeat_ttl = 600 | (IntOpt) Heartbeat time-to-live. |
rpc_backend = rabbit | (StrOpt) The messaging driver to use, defaults to rabbit. Other drivers include qpid and zmq. |
rpc_cast_timeout = 30 | (IntOpt) Seconds to wait before a cast expires (TTL). Only supported by impl_zmq. |
rpc_conn_pool_size = 30 | (IntOpt) Size of RPC connection pool. |
rpc_response_timeout = 60 | (IntOpt) Seconds to wait for a response from a call. |
rpc_thread_pool_size = 64 | (IntOpt) Size of RPC greenthread pool. |
[cells] | |
rpc_driver_queue_base = cells.intercell | (StrOpt) Base queue name to use when communicating between cells. Various topics by message type will be appended to this. |
[matchmaker_ring] | |
ringfile = /etc/oslo/matchmaker_ring.json | (StrOpt) Matchmaker ring file (JSON). |
[upgrade_levels] | |
baseapi = None | (StrOpt) Set a version cap for messages sent to the base api in any service |