Nodepool has two components which run as daemons. The nodepool-builder daemon is responsible for building diskimages and uploading them to providers, and the nodepoold daemon is responsible for launching and deleting nodes.
Both daemons frequently re-read their configuration files after starting, so images and providers can be added or removed, or the configuration otherwise altered, without restarting them.
The nodepool-builder daemon builds and uploads images to providers. It may be run on the same host as the main nodepool daemon or on a separate one. Multiple instances of nodepool-builder may be run on the same or separate hosts in order to spread image builds across many machines, or to provide high availability and redundancy. However, since nodepool-builder allows specification of the number of both build and upload threads, it is usually not advantageous to run more than a single instance on one machine. Note that while diskimage-builder (which is responsible for building the underlying images) generally supports executing multiple builds on a single machine simultaneously, some of the elements it uses may not. To be safe, it is recommended to run a single instance of nodepool-builder on a machine and to configure that instance to run only a single build thread (the default).
The main nodepool daemon is named nodepoold and is responsible for launching instances from the images created and uploaded by nodepool-builder.
When a new image is created and uploaded, nodepoold will immediately start using it when launching nodes (Nodepool always uses the most recent image for a given provider in the ready state). Nodepool will delete images if they are not the most recent or second most recent ready images. In other words, Nodepool will always make sure that in addition to the current image, it keeps the previous image around. This way if you find that a newly created image is problematic, you may simply delete it and Nodepool will revert to using the previous image.
To start the main Nodepool daemon, run nodepoold:
usage: nodepoold [-h] [-c CONFIG] [-s SECURE] [-d] [-l LOGCONFIG] [-p PIDFILE]
                 [--no-builder] [--build-workers BUILD_WORKERS]
                 [--upload-workers UPLOAD_WORKERS] [--no-deletes]
                 [--no-launches] [--no-webapp] [--version]

Node pool.

optional arguments:
  -h, --help            show this help message and exit
  -c CONFIG             path to config file
  -s SECURE             path to secure file
  -d                    do not run as a daemon
  -l LOGCONFIG          path to log config file
  -p PIDFILE            path to pid file
  --no-builder
  --build-workers BUILD_WORKERS
                        number of build workers
  --upload-workers UPLOAD_WORKERS
                        number of upload workers
  --no-deletes
  --no-launches
  --no-webapp
  --version             show version
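For example, a minimal invocation might look like the following; the configuration, logging, and pid file paths are placeholders rather than required defaults, and -d can be added to keep the daemon in the foreground for debugging:

  nodepoold -c /etc/nodepool/nodepool.yaml -l /etc/nodepool/logging.conf -p /var/run/nodepool/nodepoold.pid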
To start the nodepool-builder daemon, run nodepool-builder:
usage: nodepool-builder [-h] [-c CONFIG] [-l LOGCONFIG] [-p PIDFILE] [-d]
                        [--build-workers BUILD_WORKERS]
                        [--upload-workers UPLOAD_WORKERS]

NodePool Image Builder.

optional arguments:
  -h, --help            show this help message and exit
  -c CONFIG             path to config file
  -l LOGCONFIG          path to log config file
  -p PIDFILE            path to pid file
  -d                    do not run as a daemon
  --build-workers BUILD_WORKERS
                        number of build workers
  --upload-workers UPLOAD_WORKERS
                        number of upload workers
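For example, the builder might be started with a single build thread (the conservative default discussed above); the file paths and worker counts here are illustrative only:

  nodepool-builder -c /etc/nodepool/nodepool.yaml -l /etc/nodepool/builder-logging.conf --build-workers 1 --upload-workers 4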
To stop a daemon, send SIGINT to the process.
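For example, assuming the daemon was started with a pid file as above (path illustrative):

  kill -INT $(cat /var/run/nodepool/nodepoold.pid)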
When yappi (Yet Another Python Profiler) is available, per-function and per-thread statistics can also be collected via SIGUSR2: the first SIGUSR2 enables yappi, and the second dumps the information collected, resets all yappi state, and stops profiling. This minimizes the impact of yappi on a running system.
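A sketch of that profiling workflow, again assuming the illustrative pid file path used above:

  kill -USR2 $(cat /var/run/nodepool/nodepoold.pid)   # first signal: enable yappi
  # ... let the daemon run under load for a while ...
  kill -USR2 $(cat /var/run/nodepool/nodepoold.pid)   # second signal: dump stats, reset, stop profiling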
When Nodepool creates instances, it will assign the following nova metadata:
- groups: A json-encoded list containing the name of the image and the name of the provider. This may be used by the Ansible OpenStack inventory plugin.
- nodepool: A json-encoded dictionary with the following entries:
  - image_name: The name of the image as a string.
  - provider_name: The name of the provider as a string.
  - node_id: The nodepool id of the node as an integer.
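One way to inspect this metadata on a running instance is with the OpenStack client; the server name below is hypothetical and the exact output formatting depends on your client version:

  openstack server show example-node-0000001234 -c properties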
The general options that apply to all subcommands are:
usage: nodepool [-h] [-c CONFIG] [-s SECURE] [-l LOGCONFIG] [--version]
                [--debug]
                {list,image-list,dib-image-list,image-build,alien-list,alien-image-list,hold,delete,image-delete,dib-image-delete,config-validate,job-list,job-create,job-delete}
                ...

Node pool.

optional arguments:
  -h, --help            show this help message and exit
  -c CONFIG             path to config file
  -s SECURE             path to secure file
  -l LOGCONFIG          path to log config file
  --version             show version
  --debug               show DEBUG level logging

commands:
  valid commands

  {list,image-list,dib-image-list,image-build,alien-list,alien-image-list,hold,delete,image-delete,dib-image-delete,config-validate,job-list,job-create,job-delete}
                        additional help
    list                list nodes
    image-list          list images from providers
    dib-image-list      list images built with diskimage-builder
    image-build         build image using diskimage-builder
    alien-list          list nodes not accounted for by nodepool
    alien-image-list    list images not accounted for by nodepool
    hold                place a node in the HOLD state
    delete              place a node in the DELETE state
    image-delete        delete an image
    dib-image-delete    delete image built with diskimage-builder
    config-validate     Validate configuration file
    job-list            list jobs
    job-create          create job
    job-delete          delete job
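For example, to point the client at a specific configuration file while listing nodes (the path is illustrative):

  nodepool -c /etc/nodepool/nodepool.yaml list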
The following subcommands deal with nodepool images:
usage: nodepool dib-image-list [-h]

optional arguments:
  -h, --help  show this help message and exit

usage: nodepool image-list [-h]

optional arguments:
  -h, --help  show this help message and exit

usage: nodepool image-build [-h] image

positional arguments:
  image       image name

optional arguments:
  -h, --help  show this help message and exit

usage: nodepool dib-image-delete [-h] id

positional arguments:
  id          dib image id

optional arguments:
  -h, --help  show this help message and exit

usage: nodepool image-delete [-h] --provider PROVIDER --image IMAGE
                             --upload-id UPLOAD_ID --build-id BUILD_ID

optional arguments:
  -h, --help            show this help message and exit
  --provider PROVIDER   provider name
  --image IMAGE         image name
  --upload-id UPLOAD_ID
                        image upload id
  --build-id BUILD_ID   image build id
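For example, a typical image workflow might look like the following; the image name, provider name, and ids are placeholders you would normally take from dib-image-list and image-list output:

  nodepool image-build ubuntu-xenial
  nodepool dib-image-list
  nodepool image-list
  nodepool image-delete --provider example-cloud --image ubuntu-xenial --upload-id 0000000001 --build-id 0000000001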
The following subcommands deal with nodepool nodes:
usage: nodepool list [-h]

optional arguments:
  -h, --help  show this help message and exit

usage: nodepool hold [-h] [--reason REASON] id

positional arguments:
  id               node id

optional arguments:
  -h, --help       show this help message and exit
  --reason REASON  Optional reason this node is held

usage: nodepool delete [-h] [--now] id

positional arguments:
  id          node id

optional arguments:
  -h, --help  show this help message and exit
  --now       delete the node in the foreground
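For example, assuming a node with id 1234 appears in nodepool list output (the id and reason are placeholders):

  nodepool hold --reason "investigating flaky job" 1234
  nodepool delete --now 1234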
If Nodepool’s database gets out of sync with reality, the following commands can help identify compute instances or images that are unknown to Nodepool:
usage: nodepool alien-list [-h] [provider]

positional arguments:
  provider    provider name

optional arguments:
  -h, --help  show this help message and exit

usage: nodepool alien-image-list [-h] [provider]

positional arguments:
  provider    provider name

optional arguments:
  -h, --help  show this help message and exit
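For example, to check a single provider for unknown instances and images (the provider name is a placeholder from your configuration):

  nodepool alien-list example-cloud
  nodepool alien-image-list example-cloud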
If a job is failing intermittently for an unknown reason, it may be necessary to instruct nodepool to automatically hold a node on which that job has failed. To do so, use the job-create command to specify the job name and how many failed nodes should be held. When debugging is complete, use job-delete to disable the feature.
usage: nodepool job-create [-h] [--hold-on-failure HOLD_ON_FAILURE] name

positional arguments:
  name                  job name

optional arguments:
  -h, --help            show this help message and exit
  --hold-on-failure HOLD_ON_FAILURE
                        number of nodes to hold when this job fails

usage: nodepool job-list [-h]

optional arguments:
  -h, --help  show this help message and exit

usage: nodepool job-delete [-h] id

positional arguments:
  id          job id

optional arguments:
  -h, --help  show this help message and exit
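As a sketch of the workflow described above, with a hypothetical job name and id:

  nodepool job-create --hold-on-failure 1 gate-example-unit-test
  nodepool job-list
  nodepool job-delete 1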
To remove a provider, remove all of the images from that provider's configuration (and remove all instances of that provider from any labels), then set that provider's max-servers to -1. This instructs Nodepool to delete any images uploaded to that provider, stop uploading new ones, and stop booting new nodes on the provider. You can then let the existing nodes go through their normal lifecycle. Once all nodes have been deleted, you may remove that provider's configuration from nodepool entirely (though leaving it in this state is effectively the same and makes it easy to turn the provider back on).
If urgency is required, you can delete the nodes directly instead of waiting for them to go through their normal lifecycle, but the effect is the same.
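While draining a provider this way, the node and image subcommands shown earlier can be used to watch progress and, if needed, hurry it along; the node id below is a placeholder:

  nodepool list
  nodepool image-list
  nodepool delete --now 5678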