OpenShift Origin provides developers and IT organizations with an open source, auto-scaling cloud application platform for quickly deploying new applications on secure and scalable resources with minimal configuration and management headaches. This means increased developer productivity and a faster pace at which IT can support innovation. This guide covers the basics of administering a private Platform-as-a-Service built with this awesome open source solution.

There are several ways to install and deploy OpenShift.

If you don’t have OpenShift Origin up and running, have a look at these deployment options first.

Platform as a Service

Platform as a Service is changing the way developers approach developing software. Developers typically use a local sandbox with their preferred application server and only deploy locally on that instance: for example, they start JBoss EAP locally with its startup script, drop their .war or .ear file into the deployment directory, and they are done. Developers then have a hard time understanding why deploying to the production infrastructure is such a time-consuming process.

System Administrators understand the complexity of not only deploying the code, but also procuring, provisioning, and maintaining a production-level system. They need to stay up to date on the latest security patches and errata, ensure the firewall is properly configured, maintain a consistent and reliable backup and restore plan, and monitor the applications and servers for CPU load, disk I/O, HTTP requests, and so on.

Managing an OpenShift Origin System

This manual covers some of the most basic things that you will need to do to manage an OpenShift Origin instance. This guide does not cover the management of some necessary support systems, including a messaging service like ActiveMQ and a MongoDB instance.

1. User Resource Management

Server used:

  • node host

  • broker host

Tools used:

  • text editor

  • oo-admin-ctl-user

1.1. Set Default Gear Quotas and Sizes

A user’s default gear size and quota are specified in the /etc/openshift/broker.conf configuration file located on the broker host.

The VALID_GEAR_SIZES setting is not applied to users but specifies the gear sizes that the current OpenShift Origin PaaS installation supports.

The DEFAULT_MAX_GEARS setting specifies the number of gears to assign to all users upon user creation. This is the total number of gears that a user can create by default.

The DEFAULT_GEAR_SIZE setting specifies the size of gear that a newly created user has access to.

Take a look at the /etc/openshift/broker.conf configuration file to determine the current settings for your installation:

Execute the following on the broker host:

# cat /etc/openshift/broker.conf

By default, OpenShift Origin sets the default gear size to small and the number of gears a user can create to 100.
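For reference, the relevant lines in the file look something like this (the values shown here are illustrative defaults; check your own installation):

VALID_GEAR_SIZES="small,medium"
DEFAULT_MAX_GEARS="100"
DEFAULT_GEAR_SIZE="small"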

When changing the /etc/openshift/broker.conf configuration file, keep in mind that the existing settings are cached until you restart the openshift-broker service.
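For example, after saving your changes, restart the service on the broker host so the new settings take effect:

# service openshift-broker restart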

1.2. Set the Number of Gears a Specific User Can Create

There are often times when you will want to increase or decrease the number of gears a particular user can consume without modifying the setting for all existing users. OpenShift Origin provides a command that allows the administrator to configure settings for an individual user. To see all of the operations available for a specific user, enter the following command on the broker host:

# oo-admin-ctl-user

To see how many gears a given user has consumed, as well as how many gears they are allowed to create, pass the -l switch to the oo-admin-ctl-user command:

# oo-admin-ctl-user -l <username>

You should see something similar to this:

User <username>:
    consumed gears: 0
    max gears: 100
    gear sizes: small

In order to change the number of gears that the user has permission to create, pass the --setmaxgears switch to the command. For instance, to allow a user to create only 25 gears, use the following command:

# oo-admin-ctl-user -l <username> --setmaxgears 25

After entering the above command, you should see output like this:

Setting max_gears to 25... Done.
User <username>:
  consumed gears: 0
  max gears: 25
  gear sizes: small

1.3. Set the Type of Gears a Specific User Can Create

In a production environment, a customer will typically have several gear sizes available for developers to consume. In this example, we will only create small gears. However, to add the ability to create medium-sized gears for a given user, pass the --addgearsize switch to the oo-admin-ctl-user command:

# oo-admin-ctl-user -l <username> --addgearsize medium

After entering the above command, you should see output like:

Adding gear size medium for user <username>... Done.
User <username>:
  consumed gears: 0
  max gears: 25
  gear sizes: small, medium

In order to remove the ability for a user to create a specific gear size, you can use the --removegearsize switch:

# oo-admin-ctl-user -l <username> --removegearsize medium
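If the same change needs to be applied to many users, the command can be wrapped in a small shell loop. Here is a minimal sketch, assuming a plain-text file users.txt (one login per line) that you have prepared yourself:

# for user in $(cat users.txt); do oo-admin-ctl-user -l "$user" --addgearsize medium; done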

2. Capacity Planning and Districts

Server used:

  • node host

  • broker host

Tools used:

  • text editor

  • oo-admin-ctl-district

Districts facilitate moving gears between node hosts in order to manage resource usage. They also make it possible to deactivate nodes so that they receive no further gears. Because it is difficult to introduce districts to an installation after it is in use, they should be created from the start, when doing so is quite simple.

2.1. Hierarchy of OpenShift Entities

In order to explain how districts figure into OpenShift, we first need to examine their place in OpenShift’s containment hierarchy.

At the bottom of the hierarchy, gears contain instances of one or more cartridges.

Node hosts contain gears, which are really just Linux users on the host, with storage and processes constrained by various mechanisms.

Districts, if used, contain a set of node hosts and the gears that reside on them.

At the top of the hierarchy is the node profile (a.k.a. "gear profile" or "gear size"), which is not so much a container as a label attached to a set of OpenShift node hosts. Districts also have a node profile, and all the nodes of a district must have that node profile. A node host or district can only contain gears for one profile.

Applications contain one or more gears, which must currently all have one profile. An application’s gears may span multiple nodes in multiple districts; there is no good way to control placement on either.

2.2. The Purpose of Districts

Districts define a set of node hosts within which gears can be reliably moved to manage the resource usage of those nodes. While not strictly required for a basic OpenShift Origin installation, their use is recommended where administrators might ever need to move gears between nodes; that is, just about any installation that will see use outside a test lab.

Gears are allocated resources including an external port range and IP address range, which are calculated according to their numeric Linux user ID (UID) on the node host. A gear can only be moved to a node host where its UID is not already in use. Districts work by reserving a UID for the gear across all of the node hosts in the district, meaning only the node hosting the gear will use its UID. This allows the gear to maintain the same UID (and related resources) when moved to any other node within the district.

In addition, the district pool of UIDs (6000 of them due to the limited range of external ports) is allocated to gears randomly (rather than sequentially), which makes it more likely that even if a gear is moved to a new district, its UID will be available. Without districts, nodes allocate gear UIDs locally and sequentially, making it extremely likely that a gear’s UID will be in use on other nodes.

In the past, it was possible to change a gear’s UID when moving it, which required that it be reconfigured for the related resources in order to continue to function normally. However, this made cartridge maintenance difficult due to the corner cases introduced, and did nothing to help application developers who hard-coded resource settings into their applications (where they couldn’t be updated automatically) rather than using environment variables which could be updated during a move. In the end, disallowing UID changes during a move and using districts to reserve UIDs saves developers and administrators time and trouble.

One other function of districts should be mentioned: a node host can be marked as deactivated, so that the broker gives it no additional gears. The existing gears continue to run until they are destroyed or moved to another node. This enables decommissioning a node with minimal disruption to its gears.

2.3. Enabling Districts on the Broker

To use districts, the broker’s MCollective plugin must be configured to enable districts. Edit the /etc/openshift/plugins.d/openshift-origin-msg-broker-mcollective.conf configuration file and confirm the following parameters are set:

Confirm the following on the broker host:

DISTRICTS_ENABLED=true
NODE_PROFILE_ENABLED=true

These are the default settings in the config file; they ensure that districts will be used if any have been created. There is one more setting that should be changed in this file:

DISTRICTS_REQUIRE_FOR_APP_CREATE=true

The default of "false" allows undistricted nodes to be used when no district in the profile has capacity for gears; this default enables nodes in a trial install to be used immediately without having to understand or implement districts. However, in a production system using districts, it would be undesirable for gears to be placed on a node before it is districted (which could happen if no districted node has capacity), because nodes cannot be placed in a district once they host any gears. So, change this value to "true" to completely prevent the use of undistricted nodes.

2.4. Creating and Populating Districts

To create a district that will support a gear profile of "small", we will use the oo-admin-ctl-district command. After defining the district, we can add our node host (node.example.com) as the only node in that district. Execute the following commands to create a district named small_district which can only hold small gear types:

Execute the following on the broker host:

# oo-admin-ctl-district -c create -n small_district -p small

If the command was successful, you should see output similar to the following:

Successfully created district: 513b50508f9f44aeb90090f19d2fd940

{"name"=>"small_district",
 "active_servers_size"=>0,
 "gear_size"=>"small",
 "max_uid"=>6999,
 "created_at"=>"2013-01-15T17:18:28-05:00",
 "updated_at"=>"2013-01-15T17:18:28-05:00",
 "max_capacity"=>6000,
 "servers"=>{},
 "uuid"=>"513b50508f9f44aeb90090f19d2fd940",
 "available_uids"=>"<6000 uids hidden>",
 "available_capacity"=>6000}

2.4.1. District Representation on the Broker

If you are familiar with JSON, you will recognize the format of this output. What actually happened is that a new document was created in the broker's MongoDB database. To view this document inside the database, execute the following (substituting the MongoDB access parameters from broker.conf if needed):

# mongo -u openshift -p mooo openshift_broker_dev

This will drop you into the MongoDB shell where you can perform commands against the broker database. To list all of the available collections in the openshift_broker_dev database, you can issue the following command:

> db.getCollectionNames()

You should see the following collections returned:

  [ "applications", "auth_user", "cloud_users", "districts", "domains", "locks", "system.indexes", "system.users", "usage", "usage_records" ]

We can now query the districts collection to verify the creation of our small district:

> db.districts.find()

The output should be similar to:

{
	"_id": "513b50508f9f44aeb90090f19d2fd940",
	"name": "small_district",
	"active_servers_size": 0,
	"gear_size": "small",
	"max_uid": 6999,
	"created_at": "2013-01-15T17:18:28-05:00",
	"updated_at": "2013-01-15T17:18:28-05:00",
	"max_capacity": 6000,
	"servers": [],
	"uuid": "513b50508f9f44aeb90090f19d2fd940",
	"available_uids": [1000, .........],
	"available_capacity": 6000
}
The servers array does not contain any data yet.

Exit the MongoDB shell using the exit command:

> exit

2.4.2. Adding a Node Host

Now we can add our node host, node.example.com, to the small_district that we created above:

# oo-admin-ctl-district -c add-node -n small_district -i node.example.com

It is important to note that the server identity (node.example.com here) is the node’s hostname as configured on that node, which could be different from the PUBLIC_HOSTNAME configured in /etc/openshift/node.conf on the node. The PUBLIC_HOSTNAME is used in CNAME records and must resolve to the host via DNS; the hostname could be something completely different and may not resolve in DNS at all.

The hostname is recorded in MongoDB both in the district and with any gears that are hosted on the node, so changing the node’s hostname will disrupt the broker’s ability to use the node. In general, it’s wisest to use the hostname as the DNS name and not change either after install.
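To compare the two values for a node, check its configured hostname against the PUBLIC_HOSTNAME setting; for example, on the node host:

# hostname
# grep PUBLIC_HOSTNAME /etc/openshift/node.conf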

You should see output like the following from the node addition:

Success!

{"available_capacity"=>6000,
 "created_at"=>"2013-01-15T17:18:28-05:00",
 "updated_at"=>"2013-01-15T17:18:28-10:00",
 "available_uids"=>"<6000 uids hidden>",
 "gear_size"=>"small",
 "uuid"=>"513b50508f9f44aeb90090f19d2fd940",
 "servers"=>{"node.example.com"=>{"active"=>true}},
 "name"=>"small_district",
 "max_capacity"=>6000,
 "max_uid"=>6999,
 "active_servers_size"=>1}
If you see an error message indicating that you can’t add this node to the district because the node already has applications on it, consult the Troubleshooting Guide.

Repeat the steps above to query the database for information about districts. Notice that the servers array now contains the following information:

"servers" : [ { "name" : "node.example.com", "active" : true } ]

If you continued to add additional nodes to this district, the servers array would show all the node hosts that are assigned to the district.

This command line tool can be used just to display district information. Simply run the command with no arguments to view the JSON records in the MongoDB database for all districts:

# oo-admin-ctl-district

2.5. Removing a Node From a District

For various reasons, you may want to remove gears from a node host and remove the host from a district. For example, you may find that a lot of gears on a node host become idle over time, and you may want to "compact" the district by decommissioning or re-purposing a node host. For this, you need a combination of oo-admin-ctl-district and oo-admin-move, and the following procedure.

As an example, suppose you had node1.example.com and node2.example.com in a district named "small_district", and wanted to remove node2.

  1. Run oo-admin-chk on a broker host and oo-accept-node on node2.example.com, and fix any problems found with the gears on node2. It’s a better idea to take care of these up front than to try to move potentially broken gears.

  2. Deactivate the node within the district. This keeps the node from accepting any further gear placements, although the existing gears continue running.

    # oo-admin-ctl-district -c deactivate-node -n small_district -i node2.example.com
  3. Move all of the gears off of node2. At this time, there is no automated way to do this; oo-admin-move accepts only a single gear. You can write your own script to look for gears, or just manually list them and run the commands (see the sketch after this list).

  4. Remove node2 from the district:

    # oo-admin-ctl-district -c remove-node -n small_district -i node2.example.com
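As a starting point for step 3, here is a minimal sketch run from the broker host. It assumes root SSH access to the node and that gear home directories live directly under /var/lib/openshift on the node (the default gear base directory); filter the listing if your node keeps non-gear entries there:

# for uuid in $(ssh root@node2.example.com 'ls /var/lib/openshift'); do oo-admin-move --gear_uuid "$uuid" -i node1.example.com; done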

2.6. Removing a District

In order to remove a district, first set its capacity to 0:

# oo-admin-ctl-district -c remove-capacity -n district_name -s 6000

Then, remove all gears and nodes as explained in the previous section.

Finally, remove the district itself:

# oo-admin-ctl-district -c delete -n district_name

2.7. Managing Gear Capacity

Districts and node hosts have two different capacity limits for the number of gears allowed. Districts have a fixed pool of UIDs to allocate, and can only contain 6000 gears, regardless of their state. Node host capacity, however, only constrains the number of active gears on that host.

2.7.1. Node Host

For a node host, the maximum number of active gears allowed per node is specified with the max_active_gears value in /etc/openshift/resource_limits.conf; by default it is 100, but most administrators will need to modify this. Note that stopped or idle gears are not counted toward this limit; it is possible for a node to have any number of inactive gears, bounded only by storage. It is also possible to exceed the limit by starting inactive gears after the limit has been reached - nothing prevents or corrects this; reaching the limit simply exempts the node from future gear placement by the broker.

Determining the max_active_gears limit to use involves a certain amount of prognostication on the part of an administrator. The safest way to calculate the limit is to consider the resource most likely to be exhausted first (typically RAM) and divide the amount of available resource by the resource limit per gear.

So, for example, if a node host has 7.5 GB of RAM available and gears are constrained to .5 GB RAM:

max_active_gears = 7.5GB / .5GB = 15 gears

However, in practice, most gears will not consume their entire resource quota, so this conservative limit would leave a lot of wasted resources. Most administrators will want to overcommit at least some of their systems by allowing more gears than would fit if all used all their resources; and this is where prognostication (or better, experimentation) is required. Based on the types of cartridges and applications expected in the installation and how much RAM (or other scarce resources - CPU, network bandwidth, processes, inodes…) they actually use, administrators should determine an overcommit percent by which to increase their limits.

There is no harm in changing max_active_gears after installation. It may be wisest to begin with conservative limits and adjust them upwards after empirical evidence of usage is available. It is easier to add more active gears than to move them away.
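For example, to allow 30 active gears on a node (the value is purely illustrative), set the following in /etc/openshift/resource_limits.conf on that node host:

max_active_gears=30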

2.7.2. District

Due to current constraints, each district can only contain 6000 gears. It is important not to put too many node hosts in a district, because once a district’s UID pool is exhausted, nodes in that district will not receive any more gears, even if they have plenty of capacity; therefore, resources will be wasted. It is possible to remove excess nodes from a district by deactivating them and moving all of their gears away (known as "compacting" the district); but this should be avoided if possible to minimize disruption to the gears, and because mass moves of gears are slow and failure-prone at this time.

Districts exist to facilitate gear movement; the only advantage to having more than two or three nodes in a district is that there are fewer districts to keep track of. It is easy to add nodes to a district, and difficult to remove them. Therefore, adding nodes to districts very conservatively is wise, and it would be simplest to just plan on districts having two or three nodes.

With perfect knowledge, we could calculate how many node hosts to put in each district. It is a function of the following values:

D = district capacity (6000)
G = total number of gears per node

However, on nodes, we do not limit G; we want to make sure we are filling the capacity for active gears:

C = node capacity (max_active_gears)

For deployments that use the idler to idle inactive gears, or that stop many applications for any other reason, the percentage of active gears in the long run may be very low. It is important to take this into account because the broker will keep filling the nodes to the active limit as gears are stopped or idled, but the district capacity must also contain all those inactive gears. We can project roughly how many gears a "full" node will have in the long run by determining (guessing, at first, then adjusting):

A = percentage of gears that are active

Then our estimate of G is simply C * 100 / A, and thus the number of nodes per district should be:

N = 6000 * A / (100 * C)

For example, if only 10% of gears are active over time, and max_active_gears is 50, then 6000 * 10 / (100 * 50) = 12 (round down if needed) nodes should be added per district.

In performing this calculation with imperfect knowledge, however, it is best to be conservative by guessing a low value for A and a high value for C. Adding nodes later is much better than compacting districts.
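The same arithmetic is easy to sanity-check in a shell; the values below are those from the example above:

# A=10; C=50; echo $(( 6000 * A / (100 * C) ))
12

Shell integer division rounds down, matching the "round down if needed" guidance above.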

2.7.3. Viewing Capacity Statistics

There is a tool for viewing gear usage across nodes and districts; it can be invoked on the broker:

# oo-stats

Consult the man page or the output of oo-stats -h for script arguments. By default, this tool summarizes gear usage by districts and profiles in a human-readable format. It can also produce several computer-readable formats for use by automation or monitoring.

2.8. Moving a Gear From One Node to Another

To move a gear between nodes, use the oo-admin-move tool on the broker.

Moving gears requires that the rsync_id_rsa private key be present in the broker host’s /etc/openshift/ directory and that the corresponding public key be in each node host’s /root/.ssh/authorized_keys, as explained in the deployment guide.
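A quick way to verify that the key-based access is in place (using the paths described above) is to run a trivial command from the broker host against each node:

# ssh -i /etc/openshift/rsync_id_rsa root@node2.example.com 'echo ok'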

To move gears between nodes without districts, use the following command.

Execute the following on the broker host:

# oo-admin-move --gear_uuid 3baf79139b0b449d90303464dfa8dd6f -i node2.example.com
URL: http://app3-demo.example.com
Login: demo
App UUID: 3baf79139b0b449d90303464dfa8dd6f
Gear UUID: 3baf79139b0b449d90303464dfa8dd6f
DEBUG: Source district uuid: NONE
DEBUG: Destination district uuid: NONE
[...]
DEBUG: Starting cartridge 'ruby-1.8' in 'app3' after move on node2.example.com
DEBUG: Fixing DNS and mongo for gear 'app3' after move
DEBUG: Changing server identity of 'app3' from 'node1.example.com' to 'node2.example.com'
DEBUG: The gear's node profile changed from medium to small
DEBUG: Deconfiguring old app 'app3' on node1.example.com after move
Successfully moved 'app3' with gear uuid '3baf79139b0b449d90303464dfa8dd6f' from 'node1.example.com' to 'node2.example.com'

3. Adding Cartridges

Server used:

  • node host

  • broker host

Tools used:

  • yum

By default, OpenShift Origin caches certain values for faster retrieval. Clearing this cache allows the retrieval of updated settings.

For example, the first time MCollective retrieves the list of cartridges available on your nodes, the list is cached so that subsequent requests for this information are processed more quickly. If you install a new cartridge, it is unavailable to users until the cache is cleared and MCollective retrieves a new list of cartridges.
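If your installation provides the oo-admin-broker-cache tool (part of newer broker-util packages), the broker cache can be cleared with the following command on the broker host; otherwise, restarting the openshift-broker service clears it as well:

# oo-admin-broker-cache --clear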

This chapter will focus on installing cartridges to allow OpenShift Origin to create JBoss gears.

3.1. List Available Cartridges

For a complete list of all cartridges that are available to install, you can perform a yum search that outputs all OpenShift Origin cartridge packages.

Run the following command on the node host:

# yum search origin-cartridge

You should see the following cartridges available to install:

  • openshift-origin-cartridge-cron.noarch : Embedded cron support for express

  • openshift-origin-cartridge-diy.noarch : Provides diy support

  • openshift-origin-cartridge-haproxy.noarch : Provides embedded haproxy-1.4 support

  • openshift-origin-cartridge-jbossas : Provides JBossAS functionality, but see the note on JBoss below

  • openshift-origin-cartridge-jbossews : Provides JBossEWS functionality, but see the note on JBoss below

  • openshift-origin-cartridge-jenkins.noarch : Provides jenkins-1 support

  • openshift-origin-cartridge-jenkins-client.noarch : Embedded jenkins client support for express

  • openshift-origin-cartridge-mysql.noarch : Provides embedded mysql support

  • openshift-origin-cartridge-perl.noarch : Provides mod_perl support

  • openshift-origin-cartridge-php.noarch : Provides php-5.3 support

  • openshift-origin-cartridge-postgresql.noarch : Provides embedded PostgreSQL support

  • openshift-origin-cartridge-python.noarch : Provides python-2.6 support

  • openshift-origin-cartridge-ruby.noarch : Provides ruby rack support running on Phusion Passenger

What about JBoss?

JBoss cartridges are distributed with OpenShift. However, they will not work without a Java application server to run against. The JBoss and WildFly application servers are not currently available as RPMs, so unfortunately we cannot include them in our dependencies repo. Refer to the OpenShift Origin M4 Release Notes for a workaround to enable JBoss cartridges.

3.2. Register the New Cartridges

From a Broker host, run the following command to poll a Node for available cartridge information:

# oo-admin-ctl-cartridge -c import-node --activate

This will automatically register the new cartridges with the Broker and make them available to users for new hosted applications.

3.3. Test the New Cartridges

Open up your preferred browser and enter the following URL, using the correct host and domain name for your environment:

http://broker.example.com

You will be prompted to authenticate and then be presented with an application creation screen. After the cache has been cleared, and assuming you have added the new cartridges correctly, you should see a screen similar to the following:

[Image: web console application creation page showing the newly added cartridges]

If you do not see the new cartridges available on the web console, check that the new cartridges are available by viewing the contents of the /usr/libexec/openshift/cartridges directory:

# cd /usr/libexec/openshift/cartridges
# ls

3.4. Install the PostgreSQL and DIY Cartridges

Using the information presented in this chapter, perform the necessary commands to install both the PostgreSQL and DIY cartridges on your node host. Verify the success of the installation by ensuring that the DIY application type is available on the web console:

[Image: web console application creation page with the DIY application type available]
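One possible sequence of commands for this exercise, reusing the steps shown earlier (package names as returned by the yum search above; adjust for your release):

Execute the following on the node host:

# yum install openshift-origin-cartridge-postgresql openshift-origin-cartridge-diy

Then, on the broker host, re-register the cartridges and clear the cache:

# oo-admin-ctl-cartridge -c import-node --activate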

4. Using the Administrative Console

The optional OpenShift Origin administrative console (a.k.a. "admin console") provides OpenShift administrators an at-a-glance view of an OpenShift deployment, in order to search and navigate OpenShift entities and make reasonable inferences about adding new capacity. Consult the Deployment Guide for instructions on enabling the admin console.

Note: in this first iteration, the admin console is read-only and does not enable making any changes to settings or data.

4.1. Configuration

The admin console is configured via the /etc/openshift/plugins.d/openshift-origin-admin-console.conf file (which can be overridden in a development environment with settings in the -dev.conf version of that file). The example file installed with the plugin contains lengthy comments on the available settings which we need not repeat here.

4.1.1. Access control

Notably absent from the config file is any sort of access control; there is no concept of an OpenShift administrative role. Either a visitor can browse to the admin console or they cannot, so the place to control access is in the proxy configuration. Keep in mind that the current admin console is informational only, and any actions to be taken require logging in to an OpenShift host.

4.1.2. Capacity planning

The front page of the admin console provides a visual and numeric summary of the capacity and usage of the entire installation. It can also be configured to provide suggestions for when an administrator should adjust capacity. As no two OpenShift environments are quite alike, the default is not to set any thresholds, and thus to make no capacity suggestions. Configuring the capacity planning settings in the config file enables suggestions that can help draw administrator attention to current or impending capacity problems: for example, where to add nodes to ensure a particular profile can continue to create gears, or where capacity is being wasted.

Please reference the main capacity planning section in this document to understand the information the admin console displays here and the significance of the settings. Suggestions for adding and removing capacity are based on both the settings and the existing data, with a bias toward being conservative about putting nodes in districts. In particular, in making that calculation, if the observed active gear percentage is lower than expected, the observed percentage is used, and if the nodes do not all have the same max_active_gears limit, the largest is used.

Note that the capacity data and suggestions are generated and cached (for one hour unless configured otherwise). If changes you expect to see haven’t shown up, you likely just need to refresh the data by clicking on the refresh icon in any page.

4.1.3. Loading data from a file

The admin console uses the same Admin Stats library used by oo-stats to collect capacity data. In fact, you can record YAML or JSON output from oo-stats and use this directly instead of the actual system data:

# oo-stats -f yaml > /tmp/stats.yaml

Then copy this file to where you have the admin-console loaded, configure it as STATS_FROM_FILE in the configuration file, adjust its context as described below, and restart the broker. Capacity views and suggestions will all be based on the loaded data (although navigation will still only work for entities actually present).
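For example, the setting might look like the following in the /etc/openshift/plugins.d/openshift-origin-admin-console.conf file (the path matches the oo-stats output captured above):

STATS_FROM_FILE=/tmp/stats.yaml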

You need to ensure that the broker can actually read the data file. Because SELinux limits what the broker application can read (for example, it cannot ordinarily read /tmp entries), the file’s context will likely need adjustment as follows:

# chcon system_u:object_r:httpd_sys_content_t:s0 /tmp/stats.yaml

4.2. Exposed data

One of the goals for the admin console is to expose OpenShift system data for use by external tools. As a small step toward that goal, it is possible to retrieve the raw data from some of the application controllers as JSON. Note that this should not be considered the long-term API and is likely to change in future releases. The following URLs can be appended to the appropriate server name; e.g. you could access /admin-console/capacity/profiles.json on the broker with the following command:

# curl http://localhost:8080/admin-console/capacity/profiles.json

  • /admin-console/capacity/profiles.json - this returns all profile summaries from the Admin Stats library (the same library used by oo-stats). Add the ?reload=1 parameter to ensure the data is fresh rather than cached.

  • /admin-console/stats/gears_per_user.json - this returns frequency data for gears owned by a user

  • /admin-console/stats/apps_per_domain.json - this returns frequency data for apps belonging to a domain

  • /admin-console/stats/domains_per_user.json - this returns frequency data for domains owned by a user

This is a guide to the tools and information you need in order to manage your OpenShift deployment once it is installed. For installation instructions, refer to the Deployment Guide.

5. Administrative CLI Tools

This section is a quick reference to some important administrative command-line tools provided as part of OpenShift. Familiarity with these tools will assist in most administrative tasks.

5.1. Broker Host Tools

These tools are installed with the openshift-origin-broker and openshift-origin-broker-util RPMs.

5.1.1. oo-accept-broker

This script checks that broker setup is valid and functional. It is run without options on a broker.

If there are no errors, it simply prints "PASS" and exits with return code 0 (unless the -v option is added, in which case it also prints the checks that it is performing).

If there are errors, they are printed, and the return code is the number of errors.

# oo-accept-broker -v
INFO: SERVICES: DATA: mongo, Auth: mongo, Name bind
INFO: AUTH_MODULE: rubygem-openshift-origin-auth-mongo
INFO: NAME_MODULE: rubygem-openshift-origin-dns-bind
INFO: Broker package is: openshift-origin-broker
INFO: checking packages
INFO: checking package ruby
INFO: checking package rubygems
INFO: checking package rubygem-rails
INFO: checking package rubygem-passenger
INFO: checking package rubygem-openshift-origin-common
INFO: checking package rubygem-openshift-origin-controller
INFO: checking package openshift-origin-broker
INFO: checking ruby requirements
INFO: checking ruby requirements for openshift-origin-controller
INFO: checking ruby requirements for config/application
INFO: checking firewall settings
INFO: checking services
INFO: checking datastore
INFO: checking cloud user authentication
INFO: auth plugin = /var/www/openshift/broker/config/initializers/broker.rb:2: uninitialized constant ApplicationObserver (NameError) from -:6
INFO: checking dynamic dns plugin
INFO: checking messaging configuration
PASS

This is a good monitoring script to make sure nothing has gone wrong with a broker host.

5.1.2. oo-admin-chk

This script checks that application records in the MongoDB datastore are consistent with gear presence on the node hosts. It is a good sanity check for proper system operation.

Typical output:

# oo-admin-chk -v
Checking application gears in respective nodes
Checking node gears in application database
Success

(Without -v you just get the "Success" line.)

If this does not run cleanly, consult the Troubleshooting Guide for hints.

5.1.3. oo-register-dns

A utility for updating DNS A records in BIND, generally for a broker or node host, though it could be used for other infrastructure hosts. Do not use it to change DNS records for apps/gears, as those are CNAME records. It simply wraps an nsupdate command.

# oo-register-dns -?
== Synopsis

oo-register-dns: Register node's DNS name with Bind
  This command must be run as root.

== Usage

oo-register-dns --with-node-hostname node1 \
               --with-node-ip 192.168.0.1 \
               --domain example.com

== List of arguments
 -h|--with-node-hostname   host        Hostname for the node (required)
 -n|--with-node-ip         ip          IP of the node (required)
 -d|--domain               domain      Domain name for this node (optional, default: example.com)
 -k|--key-file             file        Bind key (optional, default: /var/named/<domain name>.key)
 -?|--help                             Print this message

5.1.4. oo-admin-ctl-district

This is a utility for all district operations; see the full explanation in the capacity planning and districts section earlier in this guide. The options are as follows:

# oo-admin-ctl-district -h
== Synopsis

oo-admin-ctl-district: Control districts

== Usage

oo-admin-ctl-district OPTIONS

Options:
-u|--uuid     <district uuid>
   District uuid  (alphanumeric, canonical way to identify the district)
-c|--command <command>
   (add-node|remove-node|deactivate-node|activate-node|add-capacity|remove-capacity|create|destroy)
-n|--name <district name>
   District name (Arbitrary identifier, used on create or in place of uuid on other commands)
-p|--node_profile <gear_size>
   (e.g. small|medium) Specify gear profile when creating a district
-i|--server_identity
   Node server_identity (FQDN, required when operating on a node)
-s|--size
   Capacity to add or remove (positive number) (required for capacity operations)
-b|--bypass
   Ignore warnings
-h|--help
   Show usage info

5.1.5. oo-admin-move

Used to move a gear from one node in a district to another, or even outside its district.

# oo-admin-move -h
== Synopsis

oo-admin-move: Move an app from one node to another

== Usage

oo-admin-move OPTIONS

Options:
--gear_uuid <gear_uuid>
    Gear uuid to move
--destination_district_uuid <district_uuid>
   Destination district uuid
-i|--target_server_identity <server_identity>
   Target server identity
-p|--node_profile <node_profile>
   Node profile
-t|--timeout
   timeout
--allow_change_district
   Allow the move to be between districts
-h|--help
   Show Usage info

5.1.6. oo-admin-ctl-user

This is used to administer what a user is allowed to consume on the system, mainly the number and type of gears. Note that this is not the full usage listing, as subaccounts are not covered until a later installment of the documentation.

 # oo-admin-ctl-user -h

Options:
 -l|--rhlogin <rhlogin>
   OpenShift login  (required)
 --setmaxgears <number>
   Set the maximum number of gears a user is allowed to use
 --setconsumedgears <number>
   Set the number of gears a user has consumed (use carefully to correct occasional off-by-one caused by race condition)
 --addgearsize <gearsize>
   Add gearsize to the capability for this rhlogin user
 --removegearsize <gearsize>
   Remove gearsize from the capability for this rhlogin user
 -h|--help
   Show Usage info

Examples:
 List the current user settings with:
   oo-admin-ctl-user -l user@example.com

 Set the maximum number of gears a user is allowed to use with:
   oo-admin-ctl-user -l user@example.com --setmaxgears 10

5.1.7. oo-admin-ctl-domain

Used to query and control a user’s domain (also known as a namespace). This reports essentially all of the information there is to know about the user’s properties in OpenShift, as well as allowing domain and SSH key updates.

 # oo-admin-ctl-domain -h

 == Synopsis

 oo-admin-ctl-domain: Manage user domains

 == Usage

 oo-admin-ctl-domain OPTIONS

 Options:
 -l|--rhlogin <rhlogin>
    Red Hat login (RHN or OpenShift login with OpenShift access) (required)
 -n|--namespace <Namespace>
    Namespace for application(s) (alphanumeric - max 16 chars) (required)
 -c|--command (create|update|delete|info)
 -s|--ssh_key <ssh key>
    Users SSH key
 -t|--key_type <ssh key type>
    User's SSH key type (e.g. ssh-rsa|ssh-dss)
 -k|--key_name <ssh key name>
    Users SSH key name
 -h|--help:
    Show Usage info

The "info" command is default if no other is provided. The output is very detailed YAML.

5.1.8. oo-admin-ctl-app

Used to administratively run commands against an app.

# oo-admin-ctl-app -h
== Synopsis

oo-admin-ctl-app: Control user applications

== Usage

oo-admin-ctl-app OPTIONS

Options:
-l|--rhlogin <rhlogin>
   Red Hat login (RHN or OpenShift login with OpenShift access) (required)
-a|--app     <application>
   Application name  (alphanumeric) (required)
-c|--command <command>
   (start|stop|force-stop|restart|status|destroy|force-destroy) (required)
-b|--bypass
   Ignore warnings
-h|--help
   Show Usage info

5.2. Node Host Tools

These are installed on node hosts with the openshift-origin-node-util RPM.

This package contains management commands that run on a node. Nodes do not have any access to other nodes or to brokers, so all tools here only affect local operations.

5.2.1. oo-accept-node

This script checks that node setup is valid and functional and its gears are in good condition. It is run without options on a node.

If there are no errors, it simply prints "PASS" and exits with return code 0.

If there are errors, they are printed, and the return code is the number of errors. Here are the items that it checks (can be used with -v to show these details; otherwise you see just errors and end result):

# oo-accept-node -v
INFO: loading node configuration file /etc/openshift/node.conf
INFO: loading resource limit file /etc/openshift/resource_limits.conf
INFO: checking selinux status
INFO: checking selinux origin policy
INFO: checking selinux booleans
INFO: checking package list
INFO: checking services
INFO: checking kernel semaphores >= 512
INFO: checking cgroups configuration
INFO: checking presence of /cgroup
INFO: checking presence of /cgroup/all
INFO: checking presence of /cgroup/all/openshift
INFO: checking filesystem quotas
INFO: checking quota db file selinux label
INFO: checking 54 user accounts
INFO: checking application dirs
INFO: checking system httpd configs
PASS

This is a good monitoring script to make sure nothing has gone wrong with a node host.

5.2.2. oo-idler-stats

Gives a good overview of gear statistics in general (not necessarily related to idling). Run without arguments, it returns a single line of stats about the gears on the node.

# oo-idler-stats -h
Usage: oo-idler-stats [options]

Options:
 -h, --help     show this help message and exit
 -v, --verbose  Print additional details.
 --validate     Perform additional sanity checks.

5.2.3. oo-admin-ctl-gears

Node system script for stopping/starting gears on a node. This is used by the openshift-gears service at boot time to activate existing gears. It can also be used directly by an administrator.

Usage is like a service script:

 oo-admin-ctl-gears {startall|stopall|status|restartall|condrestartall|startgear|stopgear|restartgear|list}
   list: simply lists all gears on the node.
   status: shows status of all gears on the node.
   startall: starts all gears, one by one.
   stopall: stops all gears, one by one.
   restartall: restarts all gears, one by one (NOT the same as stopall/startall)
   condrestartall: like restartall, but uses a lockfile to keep from being run concurrently with another instance of itself.
   startgear X: starts individual gear X
   stopgear X: stops individual gear X

The idler is a tool for shutting down gears that haven’t been used recently in order to reclaim their resources and overcommit the node host’s resources.

oo-idler and oo-restorer

These are the basic tools for idling and restoring a gear.

oo-idler stops the application, forwards the application’s URL to /var/www/html/restorer.php, and records the application’s status as idled.

 Usage: /usr/bin/oo-idler
 -u uuid idles the gear
 -l lists all idled gears on a node
 -n idles a gear without restarting the node's httpd process. This is useful when idling a number of gears (if you build your own auto-idler); make all calls except the last with -n, and then remove -n on the last call to restart httpd.

oo-restorer is what restorer.php calls to start the gear when access is made. It can also be run manually.

# oo-restorer
Usage: /usr/sbin/oo-restorer
 -u UUID  (app to restore UUID)

restorer.php currently relies on oddjob to restart a gear; normally a web request would be in the wrong SELinux context to restart a gear and httpd, so oddjob is used to send a request to oo-restorer so that the restore can be performed from the right context. Restoring will not work if the oddjobd and messagebus services are not running.

oo-last-access, oo-autoidler

These tools enable automatic idling of stale gears.

  • oo-last-access records, in the gear operations directory, how long it has been since each gear was last accessed via the web or git. It should be run regularly from a cron job.

  • oo-autoidler retrieves a list of stale gears and runs oo-idler on all of them to make them idle. It should also be run regularly from a cron job.

An example auto-idler cron script might look like:

# run the last-access compiler hourly
0 * * * * /usr/bin/oo-last-access > /var/lib/openshift/last_access.log 2>&1
# run the auto-idler twice daily and idle anything stale for 5 days
30 7,19 * * * /usr/bin/oo-autoidler 5