This section offers a brief overview of each concept in networking for Compute. With the Grizzly release, you can choose either to install and configure nova-network for networking between VMs or to use the Networking service (quantum) for networking. Refer to the Network Administration Guide to configure Compute networking options with Quantum.
Compute assigns a private IP address to each VM instance. (Currently, Compute with nova-network only supports Linux bridge networking, which allows the virtual interfaces to connect to the outside network through the physical interface.)
The network controller with nova-network provides virtual networks to enable compute servers to interact with each other and with the public network.
Currently, Compute with nova-network supports three kinds of networks, implemented in three “Network Manager” types:
Flat Network Manager
Flat DHCP Network Manager
VLAN Network Manager
The three kinds of networks can co-exist in a cloud system. However, since you can't yet select the type of network for a given project, you cannot configure more than one type of network in a given Compute installation.
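The manager is selected with the network_manager option in nova.conf; the following is a minimal sketch, assuming the manager class paths shipped with this era of Compute:

    # /etc/nova/nova.conf -- exactly one network manager can be active per installation
    network_manager=nova.network.manager.FlatManager
    # network_manager=nova.network.manager.FlatDHCPManager
    # network_manager=nova.network.manager.VlanManager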
Note: All of the networking options require network connectivity to be already set up between OpenStack physical nodes. OpenStack does not configure any physical network interfaces; it automatically creates all network bridges (such as br100) and VM virtual interfaces. All machines must have a public and an internal network interface (controlled by nova.conf options). The internal network interface is used for communication with VMs; it should not have an IP address attached to it before OpenStack installation, because it serves merely as a fabric where the actual endpoints are the VMs and dnsmasq. The internal network interface must also be put into promiscuous mode, because it has to receive packets whose target MAC address belongs to the guest VM, not to the host.
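As a hedged illustration of that split (the note above does not spell out the option names, so the ones below are assumptions based on common nova-network deployments), the two interfaces are typically mapped in nova.conf:

    # /etc/nova/nova.conf -- illustrative interface mapping (assumed option names and devices)
    public_interface=eth0   # carries routable traffic (floating IPs, NAT)
    flat_interface=eth1     # internal VM fabric; leave it without an IP address

The promiscuous-mode requirement on the internal interface can be met with, for example, ip link set eth1 promisc on.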
All the network managers configure the network using network drivers, for example the Linux L3 driver (l3.py and linux_net.py), which makes use of iptables, route, and other network management facilities, as well as libvirt's network filtering facilities. The driver is not tied to any particular network manager; all network managers use the same driver. The driver usually initializes (creates bridges and so on) only when the first VM lands on the host node.
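The kind of plumbing the driver automates can be pictured with ordinary Linux commands; the shell sketch below is purely illustrative (the bridge, interface, and subnet are placeholders) and is not taken from the driver source:

    # Roughly what happens when the first VM lands on a host (illustrative only)
    brctl addbr br100                 # create the bridge
    brctl addif br100 eth1            # attach the internal interface
    ip link set eth1 promisc on       # accept frames addressed to guest MACs
    ip link set br100 up
    iptables -t nat -A POSTROUTING -s 10.0.0.0/24 ! -d 10.0.0.0/24 -j MASQUERADE   # NAT for outbound VM traffic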
All network managers operate in either single-host or multi-host mode. This choice greatly influences the network configuration. In single-host mode, there is just one instance of nova-network, which is used as a default gateway for VMs and hosts a single DHCP server (dnsmasq), whereas in multi-host mode every compute node runs its own nova-network. In either case, all traffic between VMs and the outside world flows through nova-network. There are pros and cons to both modes; read more in Existing High Availability Options.
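For example, multi-host mode is usually achieved by running the nova-network service on every compute node and defaulting new networks to multi-host; a minimal nova.conf sketch, assuming the multi_host flag:

    # /etc/nova/nova.conf on each compute node (multi-host sketch)
    multi_host=True   # networks created from now on default to multi-host
    # the nova-network service must also be installed and running on every compute node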
Compute makes a distinction between fixed IPs and floating IPs for VM instances. Fixed IPs are IP addresses that are assigned to an instance on creation and stay the same until the instance is explicitly terminated. By contrast, floating IPs are addresses that can be dynamically associated with an instance. A floating IP address can be disassociated and associated with another instance at any time. A user can reserve a floating IP for their project.
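With the nova command-line client of this era, reserving and attaching a floating IP looks roughly like the following (the instance name and address are placeholders):

    # Reserve a floating IP from the default pool for the current project
    nova floating-ip-create
    # Associate the reserved address with a running instance
    nova add-floating-ip my-instance 203.0.113.10
    # Detach it again so it can be reused elsewhere
    nova remove-floating-ip my-instance 203.0.113.10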
In Flat Mode, a network administrator specifies a subnet. The IP addresses for VM instances are taken from the subnet and then injected into the image on launch. Each instance receives a fixed IP address from the pool of available addresses.
A system administrator may create the Linux networking bridge (typically named br100, although this is configurable) on the systems running the nova-network service. All instances of the system are attached to the same bridge, which is configured manually by the network administrator.
Note: The configuration injection currently only works on Linux-style systems that keep networking configuration in
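The flat network itself is normally defined once with nova-manage; the command below is a sketch only, with a placeholder label, range, and bridge, and the exact flags should be checked against your release:

    # Define the flat network whose addresses are injected into instances (illustrative values)
    nova-manage network create --label=private \
        --fixed_range_v4=192.168.100.0/24 \
        --bridge=br100 \
        --num_networks=1 --network_size=256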
In Flat DHCP Mode, OpenStack starts a DHCP server (dnsmasq) to hand out IP addresses to VM instances from the specified subnet, in addition to manually configuring the networking bridge. IP addresses for VM instances are taken from a subnet specified by the network administrator.
As in Flat Mode, all instances are attached to a single bridge on the compute node. In addition, a DHCP server runs to configure instances (depending on single-host or multi-host mode, alongside each nova-network). In this mode, Compute does a bit more configuration: it attempts to bridge into an Ethernet device (flat_interface, eth0 by default). It also runs and configures dnsmasq as a DHCP server listening on this bridge, usually on IP address 10.0.0.1 (see DHCP server: dnsmasq). For every instance, nova allocates a fixed IP address and configures dnsmasq with the MAC/IP pair for the VM; that is, dnsmasq does not take part in the IP address allocation process, it only hands out IPs according to the mapping done by nova. Instances receive their fixed IPs by sending a DHCPDISCOVER. These IPs are not assigned to any of the host's network interfaces, only to the VM's guest-side interface.
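Put together, a single-host Flat DHCP setup might look roughly like this nova.conf excerpt (interface names and the address range are examples, not defaults to copy blindly):

    # /etc/nova/nova.conf -- Flat DHCP sketch (illustrative values)
    network_manager=nova.network.manager.FlatDHCPManager
    flat_interface=eth1           # physical NIC bridged into the flat network
    flat_network_bridge=br100     # bridge that dnsmasq listens on
    public_interface=eth0         # NIC carrying floating-IP/NAT traffic
    fixed_range=10.0.0.0/24       # subnet handed out to instances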
In any setup with flat networking, the hosts running nova-network are responsible for forwarding traffic from the private network. Compute can determine the NAT entries for each network when you have fixed_range='' in your nova.conf. Sometimes NAT is not used, such as when fixed_range is configured with all public IPs and a hardware router is used (one of the HA options). Such hosts need to have br100 configured and physically connected to any other nodes that are hosting VMs. You must set the flat_network_bridge option or create networks with the bridge parameter in order to avoid raising an error. Compute nodes have iptables/ebtables entries created per project and instance to protect against IP/MAC address spoofing and ARP poisoning.
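As a sketch of the fixed_range choices described above (addresses are placeholders and the right choice depends entirely on your deployment):

    # /etc/nova/nova.conf -- let Compute determine the NAT entries for each network
    fixed_range=''
    # Alternative: the fixed range is publicly routable and a hardware router forwards it,
    # in which case NAT on the nova-network host is not used
    # fixed_range=198.51.100.0/24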
Note: To use the new dynamic fixed_range setting, set fixed_range='' in your nova.conf file.
Note: In single-host Flat DHCP mode you will be able to ping VMs through their fixed IP from the nova-network node, but you cannot ping them from the compute nodes. This is expected behavior.
VLAN Network Mode is the default mode for OpenStack Compute. In this mode, Compute creates a VLAN and bridge for each project. For a multiple-machine installation, VLAN Network Mode requires a switch that supports VLAN tagging (IEEE 802.1Q). The project gets a range of private IPs that are only accessible from inside the VLAN. For a user to access the instances in their project, a special VPN instance (code-named cloudpipe) needs to be created. Compute generates a certificate and key for the user to access the VPN and starts the VPN automatically. This provides a private network segment for each project's instances that can be accessed through a dedicated VPN connection from the Internet. In this mode, each project gets its own VLAN, Linux networking bridge, and subnet.
The subnets are specified by the network administrator, and are assigned dynamically to a project when required. A DHCP Server is started for each VLAN to pass out IP addresses to VM instances from the subnet assigned to the project. All instances belonging to one project are bridged into the same VLAN for that project. OpenStack Compute creates the Linux networking bridges and VLANs when required.
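A hedged sketch of the corresponding configuration follows; the option names below belong to the VlanManager of this era, while the interface is a placeholder:

    # /etc/nova/nova.conf -- VLAN networking sketch (illustrative)
    network_manager=nova.network.manager.VlanManager
    vlan_interface=eth1    # physical NIC that carries the tagged VLANs

A per-project network is then typically created with nova-manage network create, passing the fixed range together with the --vlan and --project_id flags.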
Note: With the default Compute settings, once a virtual machine instance is destroyed, it can take some time for the IP address associated with the destroyed instance to become available for assignment to a new instance. Setting force_dhcp_release=True releases the DHCP lease as soon as the instance is terminated; this configuration option applies to both Flat DHCP mode and VLAN Manager mode. Use of this option requires the dhcp_release program. Verify that this program is installed on all hosts running the nova-network service:

    # which dhcp_release
    /usr/bin/dhcp_release
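A minimal sketch of enabling the behavior the note describes, assuming the force_dhcp_release option mentioned above:

    # /etc/nova/nova.conf -- release the DHCP lease as soon as an instance is terminated
    force_dhcp_release=True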