Host aggregates are a mechanism to further partition an availability zone; while availability zones are visible to users, host aggregates are only visible to administrators. Host aggregates started out as a way to use Xen hypervisor resource pools, but have since been generalized into a mechanism that lets administrators assign key-value pairs to groups of machines. Each node can belong to multiple aggregates, each aggregate can have multiple key-value pairs, and the same key-value pair can be assigned to multiple aggregates. This information can be used in the scheduler to enable advanced scheduling, to set up Xen hypervisor resource pools, or to define logical groups for migration.
The nova command-line tool supports the following aggregate-related commands.
- nova aggregate-list
  Print a list of all aggregates.
- nova aggregate-create <name> <availability-zone>
  Create a new aggregate named <name> in availability zone <availability-zone>. Returns the ID of the newly created aggregate.
- nova aggregate-delete <id>
  Delete the aggregate with id <id>.
- nova aggregate-details <id>
  Show details of the aggregate with id <id>.
- nova aggregate-add-host <id> <host>
  Add the host with name <host> to the aggregate with id <id>.
- nova aggregate-remove-host <id> <host>
  Remove the host with name <host> from the aggregate with id <id>.
- nova aggregate-set-metadata <id> <key=value> [<key=value> ...]
  Add or update metadata (key-value pairs) associated with the aggregate with id <id>.
- nova aggregate-update <id> <name> [<availability_zone>]
  Update the aggregate's name and, optionally, its availability zone.
- nova host-list
  List all hosts by service.
- nova host-update --maintenance [enable | disable]
  Put a host into maintenance mode or take it back out of maintenance mode.
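These commands are thin wrappers around the Compute API, so the same operations can be scripted. The following sketch is illustrative only: it assumes python-novaclient is installed, that admin credentials are available in the usual OS_* environment variables, and that the import path matches the client releases contemporary with this guide.

# Illustrative sketch: list aggregates and hosts through python-novaclient.
# Assumes OS_USERNAME, OS_PASSWORD, OS_TENANT_NAME, and OS_AUTH_URL are set
# for an account with the admin role.
import os
from novaclient.v1_1 import client

nova = client.Client(os.environ['OS_USERNAME'],
                     os.environ['OS_PASSWORD'],
                     os.environ['OS_TENANT_NAME'],
                     os.environ['OS_AUTH_URL'])

# Equivalent of "nova aggregate-list".
for aggregate in nova.aggregates.list():
    print("%s %s %s" % (aggregate.id, aggregate.name, aggregate.availability_zone))

# Equivalent of "nova host-list".
for host in nova.hosts.list():
    print("%s %s" % (host.host_name, host.service))

Later sketches in this section reuse this nova client object.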
Note: These commands are only accessible to administrators. If the username and tenant you are using to access the Compute service do not have the admin role, or the policy has not been adjusted to allow them, the service returns one of the following errors when you run these commands:

ERROR: Policy doesn't allow compute_extension:aggregates to be performed. (HTTP 403) (Request-ID: req-299fbff6-6729-4cef-93b2-e7e1f96b4864)

ERROR: Policy doesn't allow compute_extension:hosts to be performed. (HTTP 403) (Request-ID: req-ef2400f6-6776-4ea3-b6f1-7704085c27d1)
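Who may call these admin extensions is governed by the Compute policy file. As a point of reference only (defaults vary between releases and deployments, so treat this as an assumption rather than something to copy verbatim), the relevant entries in /etc/nova/policy.json typically look like this:

    "compute_extension:aggregates": "rule:admin_api",
    "compute_extension:hosts": "rule:admin_api",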
One common use case for host aggregates is when you want to support scheduling instances to a subset of compute hosts because they have a specific capability. For example, you may want to allow users to request compute hosts that have SSD drives if they need access to faster disk I/O, or access to compute hosts that have GPU cards to take advantage of GPU-accelerated code.
To configure the scheduler to support host aggregates, the scheduler_default_filters configuration option must contain the AggregateInstanceExtraSpecsFilter in addition to the other filters used by the scheduler. Add the following line to /etc/nova/nova.conf on the host that runs the nova-scheduler service to enable host aggregates filtering, as well as the other filters that are typically enabled:

scheduler_default_filters=AggregateInstanceExtraSpecsFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter
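Conceptually, AggregateInstanceExtraSpecsFilter passes a host only when every extra_specs entry on the requested flavor is matched by the metadata of an aggregate that the host belongs to. The following simplified sketch illustrates that matching idea; it is not the actual filter code from nova, which also handles scoped keys and multiple values per key.

# Simplified illustration of the AggregateInstanceExtraSpecsFilter idea,
# not the actual nova implementation.
def host_passes(flavor_extra_specs, host_aggregate_metadata):
    # host_aggregate_metadata is a list of metadata dicts, one per aggregate
    # that the candidate host belongs to.
    for key, value in flavor_extra_specs.items():
        if not any(metadata.get(key) == value
                   for metadata in host_aggregate_metadata):
            return False
    return True

# A host inside an aggregate carrying ssd=true passes; a host that belongs
# to no matching aggregate is filtered out.
print(host_passes({'ssd': 'true'}, [{'ssd': 'true'}]))  # True
print(host_passes({'ssd': 'true'}, []))                 # False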
In this example, we configure the Compute service to allow users to request nodes that have solid-state drives (SSDs). We create a new host aggregate called fast-io in the availability zone called nova, add the key-value pair ssd=true to the aggregate, and then add compute nodes node1 and node2 to it.
$ nova aggregate-create fast-io nova
+----+---------+-------------------+-------+----------+
| Id | Name    | Availability Zone | Hosts | Metadata |
+----+---------+-------------------+-------+----------+
| 1  | fast-io | nova              |       |          |
+----+---------+-------------------+-------+----------+

$ nova aggregate-set-metadata 1 ssd=true
+----+---------+-------------------+-------+-------------------+
| Id | Name    | Availability Zone | Hosts | Metadata          |
+----+---------+-------------------+-------+-------------------+
| 1  | fast-io | nova              | []    | {u'ssd': u'true'} |
+----+---------+-------------------+-------+-------------------+

$ nova aggregate-add-host 1 node1
+----+---------+-------------------+------------+-------------------+
| Id | Name    | Availability Zone | Hosts      | Metadata          |
+----+---------+-------------------+------------+-------------------+
| 1  | fast-io | nova              | [u'node1'] | {u'ssd': u'true'} |
+----+---------+-------------------+------------+-------------------+

$ nova aggregate-add-host 1 node2
+----+---------+-------------------+----------------------+-------------------+
| Id | Name    | Availability Zone | Hosts                | Metadata          |
+----+---------+-------------------+----------------------+-------------------+
| 1  | fast-io | nova              | [u'node1', u'node2'] | {u'ssd': u'true'} |
+----+---------+-------------------+----------------------+-------------------+
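If you prefer to script this setup, the same steps can be expressed with the nova client object created earlier. This is a sketch mirroring the commands above, assuming python-novaclient and the node1/node2 host names used in this example.

# Sketch: mirror the CLI walkthrough above with python-novaclient.
aggregate = nova.aggregates.create('fast-io', 'nova')        # aggregate-create
nova.aggregates.set_metadata(aggregate.id, {'ssd': 'true'})  # aggregate-set-metadata
nova.aggregates.add_host(aggregate.id, 'node1')              # aggregate-add-host
nova.aggregates.add_host(aggregate.id, 'node2')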
Next, we use the nova flavor-create command to create a new flavor called ssd.large with an ID of 6, 8 GB of RAM, an 80 GB root disk, and 4 vCPUs.
$ nova flavor-create ssd.large 6 8192 80 4
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+-------------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public | extra_specs |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+-------------+
| 6  | ssd.large | 8192      | 80   | 0         |      | 4     | 1           | True      | {}          |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+-------------+
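The flavor can also be created programmatically; as a sketch, python-novaclient's flavors.create() takes the same values (name, ID, memory, disk, and vCPUs) as the CLI command above.

# Sketch: create the ssd.large flavor through the API instead of the CLI.
flavor = nova.flavors.create(name='ssd.large', ram=8192, vcpus=4,
                             disk=80, flavorid=6)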
Once the flavor has been created, we specify one or more key-value pairs that must match the key-value pairs on the host aggregates. In this case, there is only one key-value pair, ssd=true. Setting a key-value pair on a flavor is done using the nova-manage instance_type set_key command.
# nova-manage instance_type set_key --name=ssd.large --key=ssd --value=true
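The same extra spec can be attached through the API as well; in python-novaclient the flavor object exposes a set_keys() method, so a sketch of the equivalent call (continuing the client session from earlier) looks like this:

# Sketch: attach the ssd=true extra spec to the flavor via the API.
flavor = nova.flavors.find(name='ssd.large')
flavor.set_keys({'ssd': 'true'})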
Once it is set, you should see the extra_specs property of the ssd.large flavor populated with a key of ssd and a corresponding value of true.
$ nova flavor-show ssd.large
+----------------------------+-------------------+
| Property                   | Value             |
+----------------------------+-------------------+
| OS-FLV-DISABLED:disabled   | False             |
| OS-FLV-EXT-DATA:ephemeral  | 0                 |
| disk                       | 80                |
| extra_specs                | {u'ssd': u'true'} |
| id                         | 6                 |
| name                       | ssd.large         |
| os-flavor-access:is_public | True              |
| ram                        | 8192              |
| rxtx_factor                | 1.0               |
| swap                       |                   |
| vcpus                      | 4                 |
+----------------------------+-------------------+
Now, when a user requests an instance with the ssd.large flavor, the scheduler will only consider hosts with the ssd=true key-value pair. In this example, that would only be node1 and node2.
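To see the effect end to end, boot an instance with the new flavor and confirm where it lands. The sketch below uses the same client object; the image ID is a placeholder, so substitute an image from your own deployment.

# Sketch: request an instance with the ssd.large flavor. IMAGE_ID is a
# placeholder for a real image UUID in your deployment.
IMAGE_ID = '11111111-2222-3333-4444-555555555555'
flavor = nova.flavors.find(name='ssd.large')
server = nova.servers.create(name='ssd-test', image=IMAGE_ID, flavor=flavor)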
When using the XenAPI-based hypervisor, the Compute service uses host aggregates to manage XenServer resource pools, which are used to support live migration. See Configuring Migrations for details on how to create these kinds of host aggregates to support live migration.