Each vendor can choose to implement additional API extensions to the core API. This section describes the extensions for each plug-in.
The following sections explain the VMware NSX plug-in extensions.
The VMware NSX QoS extension rate-limits network ports to guarantee a specific
amount of bandwidth for each port. By default, this extension is accessible
only to a tenant with an admin role, but this is configurable through the
policy.json file. To use this extension, create a queue and specify the
minimum and maximum bandwidth rates (kbps); optionally, set the QoS marking
and DSCP value (if your network fabric uses these values to make forwarding
decisions). Once created, you can associate a queue with a network. Ports
created on that network are then automatically associated with a queue of the
size that was associated with the network.
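For reference, the admin-only default is expressed as entries in policy.json; the entry names below are an assumption based on contemporary neutron policy files and should be verified against your release. Relaxing a rule (for example, to rule:admin_or_owner) grants tenants access to that operation:

"create_qos_queue": "rule:admin_only",
"get_qos_queue": "rule:admin_only"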
Because one queue size for every port on a network might not be optimal, the
rxtx_factor scaling factor from the Compute (nova) flavor is passed in when
the port is created and is used to scale the queue size.
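For instance, to double the queue size for ports of instances booted with a particular flavor, set the rxtx_factor at flavor creation time (the flavor name and ID below are illustrative):

$ nova flavor-create m1.highbw 100 2048 20 1 --rxtx-factor 2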
Lastly, if you want to set a specific baseline QoS policy for the amount of bandwidth a single port can use (unless a network queue is specified for the network on which a port is created), you can create a default queue in Networking. Ports are then associated with a queue of that size, multiplied by the rxtx scaling factor. Note that after a network or default queue is specified, queues are added to ports that are subsequently created, but are not added to existing ports.
Attribute name | Type | Default Value | Description |
---|---|---|---|
id | uuid-str | generated | UUID for the QoS queue. |
default | Boolean | False | If True, ports are created with this queue size unless the network the port is created on is already associated with a queue. |
name | String | None | Name for QoS queue. |
min | Integer | 0 | Minimum Bandwidth Rate (kbps). |
max | Integer | N/A | Maximum Bandwidth Rate (kbps). |
qos_marking | String | untrusted | Whether QoS marking should be trusted or untrusted. |
dscp | Integer | 0 | DSCP Marking value. |
tenant_id | uuid-str | N/A | The owner of the QoS queue. |
This table shows example neutron commands that enable you to complete basic queue operations:
Operation | Command |
---|---|
Creates a QoS queue (admin-only). | $ neutron queue-create --min 10 --max 1000 myqueue |
Associates a queue with a network. | $ neutron net-create network --queue_id=<queue_id> |
Creates a default system queue. | $ neutron queue-create --default True --min 10 --max 2000 default |
Lists QoS queues. | $ neutron queue-list |
Deletes a QoS queue. | $ neutron queue-delete <queue_id or name> |
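For example, a typical sequence creates a queue and then a network that uses it (the queue UUID is a placeholder):

$ neutron queue-create --min 10 --max 1000 myqueue
$ neutron net-create mynet --queue_id=<myqueue-uuid>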
Provider networks can be implemented in different ways by the underlying NSX
platform. The FLAT and VLAN network types use bridged transport connectors.
These network types enable the attachment of a large number of ports. To
handle the increased scale, the NSX plug-in can back a single OpenStack
network with a chain of NSX logical switches. You can specify the maximum
number of ports on each logical switch in this chain with the
max_lp_per_bridged_ls parameter, which has a default value of 5,000.
The recommended value for this parameter varies with the NSX version running in the back-end, as shown in the following table.
NSX version | Recommended Value |
---|---|
2.x | 64 |
3.0.x | 5,000 |
3.1.x | 5,000 |
3.2.x | 10,000 |
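For example, against a 3.2.x back-end the limit could be raised in /etc/neutron/plugins/vmware/nsx.ini; the [nsx] section name is an assumption to verify against the sample configuration shipped with your release:

[nsx]
max_lp_per_bridged_ls = 10000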
In addition to these network types, the NSX plug-in also supports a special l3_ext network type, which maps external networks to specific NSX gateway services as discussed in the next section.
NSX exposes its L3 capabilities through gateway services, which are usually
configured out of band from OpenStack. To use NSX with L3 capabilities, first
create an L3 gateway service in the NSX Manager. Next, in
/etc/neutron/plugins/vmware/nsx.ini, set default_l3_gw_service_uuid to this
value. By default, routers are mapped to this gateway service.
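For example, assuming the option sits in the [DEFAULT] section (verify against your release's sample nsx.ini), the mapping looks like:

[DEFAULT]
default_l3_gw_service_uuid = <L3-Gateway-Service-UUID>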
Create an external network and map it to a specific NSX gateway service:
$ neutron net-create public --router:external=True --provider:network_type l3_ext \
  --provider:physical_network <L3-Gateway-Service-UUID>
Terminate traffic on a specific VLAN from an NSX gateway service:
$ neutron net-create public --router:external=True --provider:network_type l3_ext \
  --provider:physical_network <L3-Gateway-Service-UUID> --provider:segmentation_id <VLAN_ID>
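Once the external network exists, you can attach tenant routers to it in the usual way (the router name is illustrative):

$ neutron router-create router1
$ neutron router-gateway-set router1 public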
Starting with the Havana release, the VMware NSX plug-in provides an
asynchronous mechanism for retrieving the operational status of neutron
resources from the NSX back-end; this applies to network, port, and router
resources. The back-end is polled periodically and the status of every
resource is retrieved; the status in the Networking database is then updated
only for the resources for which a status change occurred. Because
operational status is now retrieved asynchronously, performance of GET
operations is consistently improved.
Data to retrieve from the back-end is divided into chunks to avoid expensive API requests; this is achieved by leveraging the response paging capabilities of the NSX API. The minimum chunk size can be specified using a configuration option; the actual chunk size is then determined dynamically according to the total number of resources to retrieve, the interval between two synchronization task runs, and the minimum delay between two subsequent requests to the NSX back-end.
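For example, with the default state_sync_interval of 120 seconds and min_sync_req_delay of 10 seconds, a synchronization cycle is split into at most 120 / 10 = 12 chunks. If 12,000 resources must be retrieved, each chunk fetches roughly 1,000 resources; if only 3,000 must be retrieved, the computed 250 resources per chunk falls below the default min_chunk_size of 500, so chunks of 500 resources are used instead and fewer requests are issued.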
The operational status synchronization can be tuned or disabled using the configuration options reported in this table. It is, however, worth noting that the default values work well in most cases.
Option name | Group | Default value | Type and constraints | Notes |
---|---|---|---|---|
state_sync_interval | nsx_sync | 120 seconds | Integer; no constraint. | Interval in seconds between two runs of the synchronization task. If the synchronization task takes more than state_sync_interval seconds to execute, a new instance of the task is started as soon as the other is completed. Setting the value for this option to 0 disables the synchronization task. |
max_random_sync_delay | nsx_sync | 0 seconds | Integer. Must not exceed min_sync_req_delay. | When different from zero, a random delay between 0 and max_random_sync_delay is added before processing the next chunk. |
min_sync_req_delay | nsx_sync | 10 seconds | Integer. Must not exceed state_sync_interval. | Minimum delay between two subsequent requests to the NSX back-end. The value of this option can be tuned according to the observed load on the NSX controllers. Lower values result in faster synchronization but might increase the load on the controller cluster. |
min_chunk_size | nsx_sync | 500 resources | Integer; no constraint. | Minimum number of resources to retrieve from the back-end for each synchronization chunk. The expected number of synchronization chunks is given by the ratio between state_sync_interval and min_sync_req_delay. The size of a chunk might increase if the total number of resources is such that more than min_chunk_size resources must be fetched in one chunk with the current number of chunks. |
always_read_status | nsx_sync | False | Boolean; no constraint. | When this option is enabled, the operational status is always retrieved from the NSX back-end at every GET request. In this case it is advisable to disable the synchronization task. |
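Put together, a tuned configuration might look like the following sketch in nsx.ini (the values shown are simply the defaults, not recommendations):

[nsx_sync]
state_sync_interval = 120
max_random_sync_delay = 0
min_sync_req_delay = 10
min_chunk_size = 500
always_read_status = False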
When running multiple OpenStack Networking server instances, the status
synchronization task should not run on every node; doing so sends unnecessary
traffic to the NSX back-end and performs unnecessary DB operations. Set the
state_sync_interval configuration option to a non-zero value exclusively on a
node designated for back-end status synchronization.
The fields=status parameter in Networking API requests always triggers an
explicit query to the NSX back-end, even when you enable asynchronous state
synchronization. For example: GET /v2.0/networks/<net-id>?fields=status&fields=name.
This section explains the Big Switch Neutron plug-in-specific extension.
Big Switch allows router rules to be added to each tenant router. These rules can be used to enforce routing policies such as denying traffic between subnets or traffic to external networks. By enforcing these at the router level, network segmentation policies can be enforced across many VMs that have differing security groups.
Each tenant router has a set of router rules associated with it. Each router rule has the attributes shown in this table. Router rules and their attributes can be set using the neutron router-update command, through the Horizon interface, or through the Neutron API.
Attribute name | Required | Input Type | Description |
---|---|---|---|
source | Yes | A valid CIDR or one of the keywords 'any' or 'external' | The network that a packet's source IP must match for the rule to be applied. |
destination | Yes | A valid CIDR or one of the keywords 'any' or 'external' | The network that a packet's destination IP must match for the rule to be applied. |
action | Yes | 'permit' or 'deny' | Determines whether the matched packets are allowed to cross the router. |
nexthop | No | A plus-separated (+) list of next-hop IP addresses. For example, 1.1.1.1+1.1.1.2. | Overrides the default virtual router used to handle traffic for packets that match the rule. |
The order of router rules has no effect. Overlapping rules are evaluated using longest-prefix matching on the source and destination fields. The source field is matched first, so it always takes precedence over the destination field. In other words, longest-prefix matching is used on the destination field only when there are multiple matching rules with the same source.
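As an illustration, consider the two rules source=any,destination=any,action=permit and source=10.10.10.0/24,destination=any,action=deny: a packet with source IP 10.10.10.5 matches both, but the deny rule wins because 10.10.10.0/24 is a longer, more specific source prefix than any.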
Router rules are configured with a router update operation in OpenStack Networking. The update overrides any previous rules so all rules must be provided at the same time.
Update a router with rules to permit traffic by default but block traffic from external networks to the 10.10.10.0/24 subnet:
$ neutron router-update Router-UUID --router_rules type=dict list=true \
  source=any,destination=any,action=permit \
  source=external,destination=10.10.10.0/24,action=deny
Specify alternate next-hop addresses for a specific subnet:
$ neutron router-update Router-UUID --router_rules type=dict list=true \
  source=any,destination=any,action=permit \
  source=10.10.10.0/24,destination=any,action=permit,nexthops=10.10.10.254+10.10.10.253
Block traffic between two subnets while allowing everything else:
$ neutron router-update Router-UUID --router_rules type=dict list=true \
  source=any,destination=any,action=permit \
  source=10.10.10.0/24,destination=10.20.20.20/24,action=deny