- Setting Configuration Options in the nova.conf File
- Setting Up OpenStack Compute Environment on the Compute Node
- Creating Credentials
- Creating Certificates
- Creating Networks
- Enabling Access to VMs on the Compute Node
- Configuring Multiple Compute Nodes
- Determining the Version of Compute
- Diagnosing Your Compute Nodes
Configuring your Compute installation involves many configuration files: the nova.conf file, the api-paste.ini file, and related Image and Identity management configuration files. This section contains the basics for a simple multi-node installation, but Compute can be configured many ways. You can find networking options and hypervisor options described in separate chapters.
The configuration file nova.conf is installed in /etc/nova by default. A default set of options is already configured in nova.conf when you install manually. Starting with the default file, you must define the following required items in /etc/nova/nova.conf. The options are described below. You can place comments in the nova.conf file by entering a new line with a # sign at the beginning of the line.
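For instance, a comment line like the following is ignored by Compute (the option beneath it comes from the sample file below):

# enable verbose logging while troubleshooting
verbose=True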
To see a listing of all possible configuration options, refer to the Compute Options Reference.
Here is a simple example nova.conf file for a small private cloud, with all the cloud controller services, the database server, and the messaging server on the same server. In this case, CONTROLLER_IP represents the IP address of a central server, BRIDGE_INTERFACE represents the bridge interface (such as br100), NETWORK_INTERFACE represents an interface to your VLAN setup, DB_PASSWORD_COMPUTE represents your Compute (nova) database password, and RABBIT_PASSWORD represents the password to your RabbitMQ installation.
[DEFAULT]

# LOGS/STATE
verbose=True
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
rootwrap_config=/etc/nova/rootwrap.conf

# SCHEDULER
compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler

# VOLUMES
volume_api_class=nova.volume.cinder.API
volume_driver=nova.volume.driver.ISCSIDriver
volume_group=cinder-volumes
volume_name_template=volume-%s
iscsi_helper=tgtadm

# DATABASE
sql_connection=mysql://nova:DB_PASSWORD_COMPUTE@192.168.206.130/nova

# COMPUTE
libvirt_type=qemu
compute_driver=libvirt.LibvirtDriver
instance_name_template=instance-%08x
api_paste_config=/etc/nova/api-paste.ini

# COMPUTE/APIS: if you have separate configs for separate services
# this flag is required for both nova-api and nova-compute
allow_resize_to_same_host=True

# APIS
osapi_compute_extension=nova.api.openstack.compute.contrib.standard_extensions
ec2_dmz_host=192.168.206.130
s3_host=192.168.206.130
enabled_apis=ec2,osapi_compute,metadata

# RABBITMQ
rabbit_host=192.168.206.130

# GLANCE
image_service=nova.image.glance.GlanceImageService
glance_api_servers=192.168.206.130:9292

# NETWORK
network_manager=nova.network.manager.FlatDHCPManager
force_dhcp_release=True
dhcpbridge_flagfile=/etc/nova/nova.conf
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
# Change my_ip to match each host
my_ip=192.168.206.130
public_interface=eth0
vlan_interface=eth0
flat_network_bridge=br100
flat_interface=eth0
fixed_range=''

# NOVNC CONSOLE
novncproxy_base_url=http://192.168.206.130:6080/vnc_auto.html
# Change vncserver_proxyclient_address and vncserver_listen to match each compute host
vncserver_proxyclient_address=192.168.206.130
vncserver_listen=192.168.206.130

# AUTHENTICATION
auth_strategy=keystone

[keystone_authtoken]
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = nova
signing_dirname = /tmp/keystone-signing-nova
Note: If your OpenStack deployment uses Qpid as the message queue instead of RabbitMQ (for example, on Fedora, CentOS, or RHEL), you would set Qpid connection options in place of the rabbit_host setting shown above.
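As a rough sketch, assuming the Qpid RPC driver shipped with this release, the # RABBITMQ section above might instead read as follows (the hostname here is an assumption that reuses the controller IP):

# QPID (replaces the RABBITMQ section)
rpc_backend=nova.rpc.impl_qpid
qpid_hostname=192.168.206.130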
Create a nova group, so you can set permissions on the configuration file:
$ sudo addgroup nova
The nova.conf file should have its owner set to root:nova, and its mode set to 0640, since the file could contain your MySQL server's username and password. You also want to ensure that the nova user belongs to the nova group.
$ sudo usermod -g nova nova
$ sudo chown -R root:nova /etc/nova
$ sudo chmod 640 /etc/nova/nova.conf
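With the permissions applied, a listing of the file should look similar to the following (size and timestamp will differ on your system):

$ ls -l /etc/nova/nova.conf
-rw-r----- 1 root nova <size> <date> /etc/nova/nova.conf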
Run this command to ensure the database schema is current:
$ nova-manage db sync
The credentials you will use to launch instances, bundle images, and perform all the other assorted API functions can be sourced from a single file, such as one called /creds/openrc. Here's an example openrc file you can download from the Dashboard in Settings > Project Settings > Download RC File.
#!/bin/bash
# *NOTE*: Using the 2.0 *auth api* does not mean that compute api is 2.0. We
# will use the 1.1 *compute api*
export OS_AUTH_URL=http://50.56.12.206:5000/v2.0
export OS_TENANT_ID=27755fd279ce43f9b17ad2d65d45b75c
export OS_USERNAME=vish
export OS_PASSWORD=$OS_PASSWORD_INPUT
export OS_AUTH_USER=norm
export OS_AUTH_KEY=$OS_PASSWORD_INPUT
export OS_AUTH_TENANT=27755fd279ce43f9b17ad2d65d45b75c
export OS_AUTH_STRATEGY=keystone
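To check that these credentials work, source the file and issue a simple nova client call; an empty table just means no instances exist yet:

$ source /creds/openrc
$ nova list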
You also may want to enable EC2 access for the euca2ools. Here is an example ec2rc file for enabling EC2 access with the required credentials.
export NOVA_KEY_DIR=/root/creds/
export EC2_ACCESS_KEY="EC2KEY:USER"
export EC2_SECRET_KEY="SECRET_KEY"
export EC2_URL="http://$NOVA-API-IP:8773/services/Cloud"
export S3_URL="http://$NOVA-API-IP:3333"
export EC2_USER_ID=42 # nova does not use user id, but bundling requires it
export EC2_PRIVATE_KEY=${NOVA_KEY_DIR}/pk.pem
export EC2_CERT=${NOVA_KEY_DIR}/cert.pem
export NOVA_CERT=${NOVA_KEY_DIR}/cacert.pem
export EUCALYPTUS_CERT=${NOVA_CERT} # euca-bundle-image seems to require this set
alias ec2-bundle-image="ec2-bundle-image --cert ${EC2_CERT} --privatekey ${EC2_PRIVATE_KEY} --user 42 --ec2cert ${NOVA_CERT}"
alias ec2-upload-bundle="ec2-upload-bundle -a ${EC2_ACCESS_KEY} -s ${EC2_SECRET_KEY} --url ${S3_URL} --ec2cert ${NOVA_CERT}"
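As a quick sanity check, source the file (assuming you saved it as /root/creds/ec2rc) and run one of the read-only euca2ools commands:

$ source /root/creds/ec2rc
$ euca-describe-availability-zones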
Lastly, here is an example openrc file that works with both the nova client and the EC2 tools.
export OS_PASSWORD=${ADMIN_PASSWORD:-secrete}
export OS_AUTH_URL=${OS_AUTH_URL:-http://$SERVICE_HOST:5000/v2.0}
export NOVA_VERSION=${NOVA_VERSION:-1.1}
export OS_REGION_NAME=${OS_REGION_NAME:-RegionOne}
export EC2_URL=${EC2_URL:-http://$SERVICE_HOST:8773/services/Cloud}
export EC2_ACCESS_KEY=${DEMO_ACCESS}
export EC2_SECRET_KEY=${DEMO_SECRET}
export S3_URL=http://$SERVICE_HOST:3333
export EC2_USER_ID=42 # nova does not use user id, but bundling requires it
export EC2_PRIVATE_KEY=${NOVA_KEY_DIR}/pk.pem
export EC2_CERT=${NOVA_KEY_DIR}/cert.pem
export NOVA_CERT=${NOVA_KEY_DIR}/cacert.pem
export EUCALYPTUS_CERT=${NOVA_CERT} # euca-bundle-image seems to require this set
Next, add these credentials to your environment prior to running any nova client or EC2 commands.
$ cat /root/creds/openrc >> ~/.bashrc
$ source ~/.bashrc
You can create certificates stored in .pem files using these nova client commands; make sure you have set up your environment variables for the nova client first:
# nova x509-get-root-cert
# nova x509-create-cert
You need to populate the database with the network configuration information that Compute obtains from the nova.conf file. You can find out more about the nova network-create command with nova help network-create.
Here is an example of what this looks like with real values entered. This example is appropriate for FlatDHCP mode; for VLAN Manager mode, you would also need to specify a VLAN (see the example after the next command).
$ nova network-create novanet --fixed-range-v4 192.168.0.0/24
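For VLAN Manager mode, a sketch of the same command with a VLAN ID added; the ID of 100 is only an illustrative assumption:

$ nova network-create novanet --fixed-range-v4 192.168.0.0/24 --vlan 100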
For this example, the network size is /24, since that falls inside the /16 range that was set in fixed_range in nova.conf. Currently, there can only be one network, and this setup would use the maximum number of IPs available in a /24. You can choose any valid network size that you would like.
OpenStack Compute assumes that the first IP address is your network address (such as 192.168.0.0), that the second IP is your gateway (192.168.0.1), and that the broadcast address is the very last IP in the range you defined (192.168.0.255). You can alter the gateway using the --gateway flag when invoking nova network-create, as shown below. You are unlikely to need to modify the network or broadcast addresses, but if you do, you will need to manually edit the networks table in the database.
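For example, to place the gateway on a different address in the range (192.168.0.254 here is an arbitrary illustration):

$ nova network-create novanet --fixed-range-v4 192.168.0.0/24 --gateway 192.168.0.254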
One of the most commonly missed configuration steps is allowing the proper access to VMs. Use nova client commands to enable access. Below, you will find the commands to allow ping and SSH to your VMs:
Note: These commands need to be run as root only if the credentials used to interact with nova-api have been put under /root/.bashrc. If the credentials are in another user's .bashrc file, run the commands as that user.
$ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
$ nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
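You can verify that both rules were recorded by listing the rules for the default security group:

$ nova secgroup-list-rules default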
Another common issue is being unable to ping or SSH to your instances after adding these security group rules. Something to look at is the number of dnsmasq processes that are running. If you have a running instance, check to see that TWO dnsmasq processes are running. If not, perform the following:
$ sudo killall dnsmasq
$ sudo service nova-network restart
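You can confirm that two dnsmasq processes came back after the restart with a simple process listing (the [d] keeps grep from matching itself):

$ ps aux | grep [d]nsmasq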
If you get the instance not found message while performing the restart, that means the service was not previously running. You simply need to start it instead of restarting it:
$ sudo service nova-network start
If your goal is to split your VM load across more than one server, you can connect an additional nova-compute node to a cloud controller node. This configuration can be reproduced on multiple compute servers to start building a true multi-node OpenStack Compute cluster.
To build out and scale the Compute platform, you spread out services amongst many servers. While there are additional ways to accomplish the build-out, this section describes adding compute nodes; the service being scaled out is nova-compute.
For a multi-node install, you only make changes to nova.conf and copy it to additional compute nodes. Ensure each nova.conf file points to the correct IP addresses for the respective services.
By default, Nova sets the bridge device based on the setting in flat_network_bridge. Now you can edit /etc/network/interfaces with the following template, updated with your IP information.
# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto br100
iface br100 inet static
        bridge_ports eth0
        bridge_stp off
        bridge_maxwait 0
        bridge_fd 0
        address xxx.xxx.xxx.xxx
        netmask xxx.xxx.xxx.xxx
        network xxx.xxx.xxx.xxx
        broadcast xxx.xxx.xxx.xxx
        gateway xxx.xxx.xxx.xxx
        # dns-* options are implemented by the resolvconf package, if installed
        dns-nameservers xxx.xxx.xxx.xxx
Restart networking:
$ sudo service networking restart
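If the bridge-utils package is installed, you can confirm that br100 exists and has eth0 attached:

$ brctl show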
With nova.conf updated and networking set, configuration is nearly complete. First, bounce the relevant services to take the latest updates:
$ sudo service libvirtd restart
$ sudo service nova-compute restart
To avoid issues with KVM and permissions with Nova, run the following commands to ensure your VMs run optimally:
# chgrp kvm /dev/kvm
# chmod g+rwx /dev/kvm
If you want to use the 10.04 Ubuntu Enterprise Cloud images that are readily available at http://uec-images.ubuntu.com/releases/10.04/release/, you may run into delays with booting. Any server that does not have nova-api running on it needs the following iptables entry so that UEC images can retrieve instance metadata. On compute nodes, configure iptables with this next step:
# iptables -t nat -A PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination $NOVA_API_IP:8773
Lastly, confirm that your compute node is talking to your cloud controller. From the cloud controller, run this database query:
$ mysql -u$MYSQL_USER -p$MYSQL_PASS nova -e 'select * from services;'
In return, you should see something similar to this:
+---------------------+---------------------+------------+---------+----+----------+----------------+-----------+--------------+----------+-------------------+
| created_at          | updated_at          | deleted_at | deleted | id | host     | binary         | topic     | report_count | disabled | availability_zone |
+---------------------+---------------------+------------+---------+----+----------+----------------+-----------+--------------+----------+-------------------+
| 2011-01-28 22:52:46 | 2011-02-03 06:55:48 | NULL       | 0       | 1  | osdemo02 | nova-network   | network   | 46064        | 0        | nova              |
| 2011-01-28 22:52:48 | 2011-02-03 06:55:57 | NULL       | 0       | 2  | osdemo02 | nova-compute   | compute   | 46056        | 0        | nova              |
| 2011-01-28 22:52:52 | 2011-02-03 06:55:50 | NULL       | 0       | 3  | osdemo02 | nova-scheduler | scheduler | 46065        | 0        | nova              |
| 2011-01-29 23:49:29 | 2011-02-03 06:54:26 | NULL       | 0       | 4  | osdemo01 | nova-compute   | compute   | 37050        | 0        | nova              |
| 2011-01-30 23:42:24 | 2011-02-03 06:55:44 | NULL       | 0       | 9  | osdemo04 | nova-compute   | compute   | 28484        | 0        | nova              |
| 2011-01-30 21:27:28 | 2011-02-03 06:54:23 | NULL       | 0       | 8  | osdemo05 | nova-compute   | compute   | 29284        | 0        | nova              |
+---------------------+---------------------+------------+---------+----+----------+----------------+-----------+--------------+----------+-------------------+
You can see that osdemo0{1,2,4,5} are all running nova-compute. When you start spinning up instances, they will be allocated to any node that is running nova-compute from this list.
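Alternatively, nova-manage can report the same service state without querying MySQL directly; services that have checked in recently are shown with a :-) status:

$ nova-manage service list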
You can find the version of the installation by using the nova-manage command:
$ nova-manage version list
You can obtain extra information about the running virtual machines, such as CPU usage, memory, disk I/O, and network I/O per instance, by running the nova diagnostics command with a server ID:
$ nova diagnostics <serverID>
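If you do not know the server ID, look it up first; the ID column of nova list holds the value to pass to nova diagnostics:

$ nova list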
The output of this command will vary depending on the hypervisor. Example output when the hypervisor is Xen:
+----------------+-----------------+
| Property       | Value           |
+----------------+-----------------+
| cpu0           | 4.3627          |
| memory         | 1171088064.0000 |
| memory_target  | 1171088064.0000 |
| vbd_xvda_read  | 0.0             |
| vbd_xvda_write | 0.0             |
| vif_0_rx       | 3223.6870       |
| vif_0_tx       | 0.0             |
| vif_1_rx       | 104.4955        |
| vif_1_tx       | 0.0             |
+----------------+-----------------+
While the command should work with any hypervisor that is controlled through libvirt (e.g., KVM, QEMU, LXC), it has only been tested with KVM. Example output when the hypervisor is KVM:
+------------------+------------+
| Property         | Value      |
+------------------+------------+
| cpu0_time        | 2870000000 |
| memory           | 524288     |
| vda_errors       | -1         |
| vda_read         | 262144     |
| vda_read_req     | 112        |
| vda_write        | 5606400    |
| vda_write_req    | 376        |
| vnet0_rx         | 63343      |
| vnet0_rx_drop    | 0          |
| vnet0_rx_errors  | 0          |
| vnet0_rx_packets | 431        |
| vnet0_tx         | 4905       |
| vnet0_tx_drop    | 0          |
| vnet0_tx_errors  | 0          |
| vnet0_tx_packets | 45         |
+------------------+------------+