Prerequisites
Before you configure OpenStack Networking, you must enable certain kernel networking functions.
Edit the /etc/sysctl.conf file to contain the following:

    net.ipv4.ip_forward=1
    net.ipv4.conf.all.rp_filter=0
    net.ipv4.conf.default.rp_filter=0
Implement the changes:
# sysctl -p
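As a sanity check before running sysctl -p, you can confirm that the file actually carries the three required keys. The sketch below (a hypothetical helper, not part of the official procedure) parses sysctl.conf-style text; the inline sample stands in for /etc/sysctl.conf.

```python
# Sketch: verify that a sysctl.conf-style file sets the keys required above.
EXPECTED = {
    "net.ipv4.ip_forward": "1",
    "net.ipv4.conf.all.rp_filter": "0",
    "net.ipv4.conf.default.rp_filter": "0",
}

def parse_sysctl(text):
    """Return {key: value} for key=value lines, ignoring blanks and comments."""
    settings = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        settings[key.strip()] = value.strip()
    return settings

# Sample content standing in for /etc/sysctl.conf.
sample = """\
# /etc/sysctl.conf
net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
"""

settings = parse_sysctl(sample)
for key, value in EXPECTED.items():
    assert settings.get(key) == value, f"{key} misconfigured"
print("all required kernel networking keys present")
```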
To install the Networking components
# apt-get install neutron-plugin-ml2 neutron-plugin-openvswitch-agent \
  openvswitch-datapath-dkms neutron-l3-agent neutron-dhcp-agent
Note: Ubuntu installations using Linux kernel version 3.11 or newer do not require the openvswitch-datapath-dkms package.
To configure the Networking common components
The Networking common component configuration includes the authentication mechanism, message broker, and plug-in.
Configure Networking to use the Identity service for authentication:
Edit the /etc/neutron/neutron.conf file and add the following key to the [DEFAULT] section:

    [DEFAULT]
    ...
    auth_strategy = keystone
Add the following keys to the [keystone_authtoken] section. Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service:

    [keystone_authtoken]
    ...
    auth_uri = http://controller:5000
    auth_host = controller
    auth_protocol = http
    auth_port = 35357
    admin_tenant_name = service
    admin_user = neutron
    admin_password = NEUTRON_PASS
Configure Networking to use the message broker:
Edit the /etc/neutron/neutron.conf file and add the following keys to the [DEFAULT] section. Replace RABBIT_PASS with the password you chose for the guest account in RabbitMQ:

    [DEFAULT]
    ...
    rpc_backend = neutron.openstack.common.rpc.impl_kombu
    rabbit_host = controller
    rabbit_password = RABBIT_PASS
Configure Networking to use the Modular Layer 2 (ML2) plug-in and associated services:
Edit the /etc/neutron/neutron.conf file and add the following keys to the [DEFAULT] section:

    [DEFAULT]
    ...
    core_plugin = ml2
    service_plugins = router
    allow_overlapping_ips = True
Note: We recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/neutron.conf to assist with troubleshooting.
Comment out any lines in the [service_providers] section.
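The common-component edits above can also be expressed programmatically. This is a sketch using Python's stdlib configparser, not the official method (real deployments typically hand-edit the file or use a configuration-management tool); the placeholder passwords are the same ones the instructions ask you to replace.

```python
# Sketch: assemble the [DEFAULT] keys described above for a
# neutron.conf-style file using stdlib configparser.
import configparser
import io

conf = configparser.ConfigParser()
conf["DEFAULT"] = {
    "auth_strategy": "keystone",
    "rpc_backend": "neutron.openstack.common.rpc.impl_kombu",
    "rabbit_host": "controller",
    "rabbit_password": "RABBIT_PASS",  # replace with your RabbitMQ password
    "core_plugin": "ml2",
    "service_plugins": "router",
    "allow_overlapping_ips": "True",
}

# Render the resulting file content.
buf = io.StringIO()
conf.write(buf)
print(buf.getvalue())
```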
To configure the Layer-3 (L3) agent
The Layer-3 (L3) agent provides routing services for instance virtual networks.
Edit the /etc/neutron/l3_agent.ini file and add the following keys to the [DEFAULT] section:

    [DEFAULT]
    ...
    interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
    use_namespaces = True
Note: We recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/l3_agent.ini to assist with troubleshooting.
To configure the DHCP agent
The DHCP agent provides DHCP services for instance virtual networks.
Edit the /etc/neutron/dhcp_agent.ini file and add the following keys to the [DEFAULT] section:

    [DEFAULT]
    ...
    interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
    dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
    use_namespaces = True
Note: We recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/dhcp_agent.ini to assist with troubleshooting.
To configure the metadata agent
The metadata agent provides configuration information such as credentials for remote access to instances.
Edit the /etc/neutron/metadata_agent.ini file and add the following keys to the [DEFAULT] section. Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service. Replace METADATA_SECRET with a suitable secret for the metadata proxy:

    [DEFAULT]
    ...
    auth_url = http://controller:5000/v2.0
    auth_region = regionOne
    admin_tenant_name = service
    admin_user = neutron
    admin_password = NEUTRON_PASS
    nova_metadata_ip = controller
    metadata_proxy_shared_secret = METADATA_SECRET
Note: We recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/metadata_agent.ini to assist with troubleshooting.

Note: Perform the next two steps on the controller node.
On the controller node, edit the /etc/nova/nova.conf file and add the following keys to the [DEFAULT] section. Replace METADATA_SECRET with the secret you chose for the metadata proxy:

    [DEFAULT]
    ...
    service_neutron_metadata_proxy = true
    neutron_metadata_proxy_shared_secret = METADATA_SECRET
On the controller node, restart the Compute API service:
# service nova-api restart
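A mismatch between the shared secret in metadata_agent.ini and the one in nova.conf is a common cause of metadata failures. The sketch below is a hypothetical consistency check, not part of the official procedure; the inline strings stand in for the two real files.

```python
# Sketch: confirm metadata_proxy_shared_secret (metadata_agent.ini) matches
# neutron_metadata_proxy_shared_secret (nova.conf) before restarting services.
import configparser

# Sample content standing in for /etc/neutron/metadata_agent.ini.
metadata_agent_ini = """\
[DEFAULT]
metadata_proxy_shared_secret = METADATA_SECRET
"""

# Sample content standing in for /etc/nova/nova.conf.
nova_conf = """\
[DEFAULT]
service_neutron_metadata_proxy = true
neutron_metadata_proxy_shared_secret = METADATA_SECRET
"""

def read_key(text, key):
    """Return a key's value from the [DEFAULT] section of ini-style text."""
    conf = configparser.ConfigParser()
    conf.read_string(text)
    return conf["DEFAULT"].get(key)

agent_secret = read_key(metadata_agent_ini, "metadata_proxy_shared_secret")
nova_secret = read_key(nova_conf, "neutron_metadata_proxy_shared_secret")
assert agent_secret == nova_secret, "metadata proxy secrets do not match"
print("metadata proxy secrets match")
```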
To configure the Modular Layer 2 (ML2) plug-in
The ML2 plug-in uses the Open vSwitch (OVS) mechanism (agent) to build the virtual networking framework for instances.
Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file. Add the following keys to the [ml2] section:

    [ml2]
    ...
    type_drivers = gre
    tenant_network_types = gre
    mechanism_drivers = openvswitch
Add the following keys to the [ml2_type_gre] section:

    [ml2_type_gre]
    ...
    tunnel_id_ranges = 1:1000
Add the [ovs] section and the following keys to it. Replace INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS with the IP address of the instance tunnels network interface on your network node:

    [ovs]
    ...
    local_ip = INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS
    tunnel_type = gre
    enable_tunneling = True

Add the [securitygroup] section and the following keys to it:

    [securitygroup]
    ...
    firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
    enable_security_group = True
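The four ml2_conf.ini sections described above can be summarized in one place. This is a sketch using Python's stdlib configparser to render the combined file, with INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS left as the same placeholder the instructions use:

```python
# Sketch: render the ml2_conf.ini sections described above with configparser.
import configparser
import io

conf = configparser.ConfigParser()
conf["ml2"] = {
    "type_drivers": "gre",
    "tenant_network_types": "gre",
    "mechanism_drivers": "openvswitch",
}
conf["ml2_type_gre"] = {
    "tunnel_id_ranges": "1:1000",
}
conf["ovs"] = {
    # Replace with the network node's instance tunnels interface IP.
    "local_ip": "INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS",
    "tunnel_type": "gre",
    "enable_tunneling": "True",
}
conf["securitygroup"] = {
    "firewall_driver": "neutron.agent.linux.iptables_firewall."
                       "OVSHybridIptablesFirewallDriver",
    "enable_security_group": "True",
}

# Render the resulting file content.
buf = io.StringIO()
conf.write(buf)
print(buf.getvalue())
```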
To configure the Open vSwitch (OVS) service
The OVS service provides the underlying virtual networking framework for instances. The integration bridge br-int handles internal instance network traffic within OVS. The external bridge br-ex handles external instance network traffic within OVS. The external bridge requires a port on the physical external network interface to provide instances with external network access. In essence, this port bridges the virtual and physical external networks in your environment.
Restart the OVS service:
# service openvswitch-switch restart
Add the integration bridge:
# ovs-vsctl add-br br-int
Add the external bridge:
# ovs-vsctl add-br br-ex
Add a port to the external bridge that connects to the physical external network interface. Replace INTERFACE_NAME with the actual interface name, for example eth2 or ens256:

# ovs-vsctl add-port br-ex INTERFACE_NAME
Note: Depending on your network interface driver, you may need to disable Generic Receive Offload (GRO) to achieve suitable throughput between your instances and the external network.
To temporarily disable GRO on the external network interface while testing your environment:
# ethtool -K INTERFACE_NAME gro off