Overview
Platform as a Service is changing the way developers approach building software. Developers typically use a local sandbox with their preferred application server and deploy only to that local instance. For example, a developer might start JBoss EAP locally with its startup script, drop a .war or .ear file into the deployment directory, and be done. It can be hard for developers to understand why deploying to the production infrastructure is such a time-consuming process.
System Administrators understand the complexity of not only deploying the code, but also procuring, provisioning, and maintaining a production-level system. They need to stay up to date on the latest security patches and errata, ensure the firewall is properly configured, maintain a consistent and reliable backup and restore plan, and monitor the application and servers for CPU load, disk I/O, HTTP requests, and so on.
OpenShift Origin provides developers and IT organizations an open source, auto-scaling cloud application platform for quickly deploying new applications on secure and scalable resources with minimal configuration and management headaches. This means increased developer productivity and a faster pace at which IT can support innovation.
The Comprehensive Deployment Guide
This guide goes into excruciating detail about deploying OpenShift Origin. You will become wise in the ways of OpenShift if you choose this path. However, if you are looking for a faster way to get up and running, consider the Puppet-based deployment or the pre-built OpenShift Origin virtual machine.
Getting up and Running with OpenShift Origin
OpenShift Origin is "infrastructure agnostic". That means that you can run OpenShift on bare metal, virtualized instances, or on public/private cloud instances. The only thing that is required is Red Hat Enterprise Linux or CentOS as the underlying operating system. We require this in order to take advantage of SELinux so that you can ensure your installation is rock solid and secure.
In practice, this means that you can use any existing resources in your hardware pool today. It doesn’t matter whether your infrastructure is based on EC2, VMware, RHEV, Rackspace, OpenStack, CloudStack, or even bare metal, as long as your CPUs are 64-bit processors.
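If you want to confirm that a host meets the 64-bit requirement, a quick check such as the following is sufficient (illustrative):

uname -m                       # should print x86_64 on a 64-bit installation
grep -qw lm /proc/cpuinfo && echo "CPU is 64-bit capable"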
Many possible configurations
This document covers one possible OpenShift topology, specifically:
-
All necessary services on one host
-
Hosted applications on another host
This is a good reference configuration for a proof of concept. However, many other topologies and combinations of platforms are supported. At a minimum, a High Availability production installation of OpenShift Origin would include four hosts:
-
Broker, MongoDB, ActiveMQ replicated across three hosts
-
Node system for hosted applications
If you are interested in an HA deployment, refer to the High Availability Deployments section for some guidance.
For any other help with your specific setup, you can ask the OpenShift team in the #openshift-dev IRC channel on FreeNode, or check out the OpenShift developers mailing list.
This document assumes that you have a working knowledge of SSH, git, and yum, and are familiar with a Linux-based text editor like vi or emacs. Additionally, you will need to be able to install and/or administer the systems described in the next section.
Installation Prerequisites
Before OpenShift Origin can be installed, the following services must be available in your network:
-
DNS
-
MongoDB
-
ActiveMQ
And the hosts (or nodes) in your system must have the following clients installed:
-
NTP
-
MCollective
This document includes chapters on how to install and configure these services and clients on a single host, along with the OpenShift Origin Broker component. However, in a production environment these services may already be in place, and it may not be necessary to modify them.
Electronic version of this document
This document is available online at http://openshift.github.io/documentation/oo_deployment_guide_comprehensive.html
1. Prerequisite: Preparing the Host Systems
The following steps are required for both Broker and Node hosts.
1.1. OpenShift Origin repositories
Configure the openshift-dependencies RPM repository:
cat <<EOF> /etc/yum.repos.d/openshift-origin-deps.repo
[openshift-origin-deps]
name=openshift-origin-deps
baseurl=https://mirror.openshift.com/pub/origin-server/release/4/rhel-6/dependencies/x86_64/
gpgcheck=0
enabled=1
EOF
Configure the openshift-origin RPM repository:
cat <<EOF> /etc/yum.repos.d/openshift-origin.repo
[openshift-origin]
name=openshift-origin
baseurl=https://mirror.openshift.com/pub/origin-server/release/4/rhel-6/packages/x86_64/
gpgcheck=0
enabled=1
EOF
1.1.1. EPEL Repository
Install the latest epel-release package.
yum install -y --nogpgcheck ${url_of_the_latest_epel-release_rpm}
Update the EPEL repository definition to exclude mcollective and nodejs packages as these are provided in the origin dependencies.
[epel]
name=Extra Packages for Enterprise Linux 6 - $basearch
#baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=$basearch
exclude=*passenger* nodejs*    (1)
failovermethod=priority
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6

(1) Manually add this line.
1.2. Updates and NTP
The hosts should be running the latest "minimal install" packages, and should be configured to use NTP for clock synchronization.
1.2.1. Update the Operating System
Server used:
-
broker host
Tools used:
-
SSH
-
yum
First, you need to update the operating system so that you have all of the latest packages available in your configured yum repositories. This is important to ensure that you have a recent update to the SELinux packages that OpenShift Origin relies on. In order to update your system, issue the following commands:
yum clean all
yum -y update
Depending on your connection and the speed of your broker host, this update may take several minutes.
1.2.2. Configure the Clock to Avoid Time Skew
Server used:
-
broker host
Tools used:
-
SSH
-
ntpdate
OpenShift Origin requires NTP to synchronize the system and hardware clocks. This synchronization is necessary for communication between the broker and node hosts; if the clocks are too far out of synchronization, MCollective will drop messages. Every MCollective request (discussed in a later chapter) includes a time stamp, provided by the sending host’s clock. If a sender’s clock is substantially behind a recipient’s clock, the recipient drops the message. This is often referred to as clock skew and is a common problem that users encounter when they fail to sync all of the system clocks.
yum install -y ntpdate ntp
ntpdate clock.redhat.com
chkconfig ntpd on
service ntpd start
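To confirm that ntpd is actually synchronizing against its time sources, you can query its peers; this check is illustrative and its output will vary by environment:

ntpq -p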
1.2.3. Firewall setup
Uninstall firewalld and install lokkit to manage firewall rules.
yum erase -y firewalld
yum install -y lokkit
1.2.4. Setting up the Ruby Environment (Software Collections)
OpenShift makes use of Software Collections (SCLs) to enable alternate versions of some software packages. The specific packages that OpenShift requires are already expressed as RPM dependencies, so it should not be necessary for you to explicitly install SCLs on your host.
Additionally, most of the oo-* utilities that are part of the OpenShift system use wrappers to invoke these collections without any additional effort on the part of the user. However, if you attempt to run any of the embedded Ruby utilities directly (for example, rake tasks), you will need to explicitly invoke the SCL environment:
scl enable ruby193 v8314 "<your command here>"
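For example, a quick way to confirm that the SCL environment resolves correctly is to run a trivial command under it; the command in quotes is only an illustration and can be anything you need to run:

scl enable ruby193 v8314 "ruby -v"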
2. Preparing the Broker Host System
The broker acts as the central dispatcher in your OpenShift Origin service. Before you can run the broker, you need to prepare the broker host according to these instructions.
2.1. DNS
At a typical network site, a DNS infrastructure will already be in place. However, this section describes a known good DNS configuration that will ensure that name resolution works properly.
Server used:
-
broker host
Tools used:
-
SSH
-
BIND
-
text editor (vi, emacs, nano, etc.)
-
environment variables
-
SELinux
-
Commands:
cat
,echo
,chown
,dnssec-keygen
,rndc-confgen
,restorecon
,chmod
,lokkit
,chkconfig
,service
,nsupdate
,ping
,dig
2.1.1. Install the BIND DNS Server
In order for OpenShift Origin to work correctly, you will need to configure BIND so that you have a DNS server setup.
In OpenShift Origin, name resolution is used primarily for communication between our broker and node hosts. It is additionally used for dynamically updating the DNS server to resolve gear application names when we start creating application gears.
To proceed, ensure that bind and the bind utilities have been installed on the broker host:
yum install -y bind bind-utils
2.1.2. Create DNS environment variables and a DNSSEC key file
We recommend setting an environment variable for the domain name that you will be using; this makes the BIND configuration steps below faster to work through. This section describes the process of setting that up.
First, run this command, replacing "example.com" with your domain name. This sets the bash environment variable named "$domain" to your domain:
domain=example.com
DNSSEC, which stands for DNS Security Extensions, is a method by which DNS servers can verify that DNS data is coming from the correct place. You create a private/public key pair to determine the authenticity of the source domain name server. In order to implement DNSSEC on your new PaaS, you need to create a key file, which will be stored in /var/named. For convenience, set the "$keyfile" variable now to the location of this key file:
keyfile=/var/named/${domain}.key
Now create a DNSSEC key pair and store the private key in a variable named "$KEY" by using the following commands:
pushd /var/named
rm K${domain}*
dnssec-keygen -a HMAC-MD5 -b 512 -n USER -r /dev/urandom ${domain}
KEY="$(grep Key: K${domain}*.private | cut -d ' ' -f 2)"
popd
Verify that the key was created properly by viewing the contents of the $KEY variable:
echo $KEY
You must also create an rndc key, which will be used by the init script to query the status of BIND when you run service named status:
rndc-confgen -a -r /dev/urandom
Configure the ownership, permissions, and SELinux contexts for the keys that you’ve created:
restorecon -v /etc/rndc.* /etc/named.*
chown -v root:named /etc/rndc.key
chmod -v 640 /etc/rndc.key
2.1.3. Create a forwarders.conf file for host name resolution
The DNS forwarding facility of BIND can be used to create a large site-wide cache on a few servers, reducing traffic over links to external name servers. It can also be used to allow queries by servers that do not have direct access to the Internet, but wish to look up exterior names anyway. Forwarding occurs only on those queries for which the server is not authoritative and does not have the answer in its cache.
Create the forwarders.conf file with the following commands:
echo "forwarders { 8.8.8.8; 8.8.4.4; } ;" >> /var/named/forwarders.conf restorecon -v /var/named/forwarders.conf chmod -v 640 /var/named/forwarders.conf
2.1.4. Configure subdomain resolution and create an initial DNS database
To ensure that you are starting with a clean /var/named/dynamic directory, remove this directory if it exists:
rm -rvf /var/named/dynamic
mkdir -vp /var/named/dynamic
Issue the following command to create the ${domain}.db file (before running this command, verify that the $domain variable that you set earlier is still available):
cat <<EOF > /var/named/dynamic/${domain}.db
\$ORIGIN .
\$TTL 1 ; 1 seconds (for testing only)
${domain}    IN SOA  ns1.${domain}. hostmaster.${domain}. (
                     2011112904 ; serial
                     60         ; refresh (1 minute)
                     15         ; retry (15 seconds)
                     1800       ; expire (30 minutes)
                     10         ; minimum (10 seconds)
                     )
             NS      ns1.${domain}.
             MX      10 mail.${domain}.
\$ORIGIN ${domain}.
ns1          A       127.0.0.1
EOF
Once you have entered the above command, cat the contents of the file to ensure that it was created successfully:
cat /var/named/dynamic/${domain}.db
You should see the following output:
$ORIGIN .
$TTL 1 ; 1 second
example.com  IN SOA  ns1.example.com. hostmaster.example.com. (
                     2011112916 ; serial
                     60         ; refresh (1 minute)
                     15         ; retry (15 seconds)
                     1800       ; expire (30 minutes)
                     10         ; minimum (10 seconds)
                     )
             NS      ns1.example.com.
             MX      10 mail.example.com.
$ORIGIN example.com.
ns1          A       127.0.0.1
Now we need to install the DNSSEC key for our domain:
cat <<EOF > /var/named/${domain}.key
key ${domain} {
  algorithm HMAC-MD5;
  secret "${KEY}";
};
EOF
Set the correct permissions and contexts:
chown -Rv named:named /var/named
restorecon -rv /var/named
2.1.5. Create the named configuration file
You will also need to create the named.conf file. Before running the following command, verify that the $domain variable that you set earlier is still available.
cat <<EOF > /etc/named.conf
// named.conf
//
// Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
// server as a caching only nameserver (as a localhost DNS resolver only).
//
// See /usr/share/doc/bind*/sample/ for example named configuration files.
//
options {
    listen-on port 53 { any; };
    directory "/var/named";
    dump-file "/var/named/data/cache_dump.db";
    statistics-file "/var/named/data/named_stats.txt";
    memstatistics-file "/var/named/data/named_mem_stats.txt";
    allow-query { any; };
    recursion yes;

    /* Path to ISC DLV key */
    bindkeys-file "/etc/named.iscdlv.key";

    // set forwarding to the next nearest server (from DHCP response)
    forward only;
    include "forwarders.conf";
};
logging {
    channel default_debug {
        file "data/named.run";
        severity dynamic;
    };
};
// use the default rndc key
include "/etc/rndc.key";
controls {
    inet 127.0.0.1 port 953
    allow { 127.0.0.1; } keys { "rndc-key"; };
};
include "/etc/named.rfc1912.zones";
include "${domain}.key";
zone "${domain}" IN {
    type master;
    file "dynamic/${domain}.db";
    allow-update { key ${domain} ; } ;
};
EOF
Finally, set the permissions for the new configuration file that you just created:
chown -v root:named /etc/named.conf
restorecon /etc/named.conf
2.1.6. Start the named service
Now you are ready to start up your new DNS server and add some updates.
service named start
You should see a confirmation message that the service was started correctly. If you do not see an OK message, run through the above steps again and ensure that the output of each command matches the contents of this document. If you are still having trouble after trying the steps again, refer to the support options mentioned in the introduction (the #openshift-dev IRC channel or the developers mailing list).
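A quick way to confirm the service state at any time is shown below (illustrative); it relies on the rndc key you configured earlier:

service named status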
2.1.7. Configure host name resolution to use the new BIND server
Now you need to update the resolv.conf file to use the local named service that you just installed and configured. Open up your /etc/resolv.conf file and add the following entry as the first nameserver entry in the file:
nameserver 127.0.0.1
We also need to make sure that named starts on boot and that the firewall is configured to pass through DNS traffic:
lokkit --service=dns
chkconfig named on
If you get an unknown locale error when running lokkit, consult the troubleshooting section at the end of this manual.
2.2. Add the Broker Host to DNS
If you configured and started a BIND server per this document, or you are working against a BIND server that was already in place, you now need to add a record for your broker host to BIND’s database. To accomplish this task, you will use the nsupdate command, which opens an interactive shell. Replace "broker.example.com" with your preferred hostname and "10.0.0.1" with the IP address of the broker:
# nsupdate -k $keyfile
server 127.0.0.1
update add broker.example.com 180 A 10.0.0.1
send
Press <ctrl-D> to exit from the interactive session.
In order to verify that you have successfully added your broker node to your DNS server, you can perform:
ping broker.example.com
and it should resolve to the local machine that you are working on. You can also perform a dig request using the following command:
dig @127.0.0.1 broker.example.com
2.3. DHCP Client and Hostname
Server used:
-
broker host
Tools used:
-
text editor
-
Commands: hostname
2.3.1. Create dhclient-eth0.conf
In order to configure your broker host to use a specific DNS server, you will need to edit the /etc/dhcp/dhclient-<network device>.conf file, or create the file if it does not exist. Without this step, the DNS server information in /etc/resolv.conf would revert to the server returned by your DHCP server on the next boot of the server.
For example, if you are using eth0 as your default Ethernet device, you would need to edit the following file:
/etc/dhcp/dhclient-eth0.conf
If you are unsure of which network device that your system is using, you can issue the ifconfig command to list all available network devices for your machine.
The lo device is the loopback device and is not the one you are looking for.
Once you have the correct file opened, add the following information making sure to substitute the IP address of the broker host:
prepend domain-name-servers 10.4.59.x; supersede host-name "broker"; supersede domain-name "example.com";
Ensure that you do not have any typos. Common errors include forgetting a semicolon, putting in the node’s IP address instead of the broker’s, or typing "server" instead of "servers."
2.3.2. Update network configuration
Update your network scripts to use the DNS server. Update /etc/sysconfig/network-scripts/ifcfg-<eth device> file and add the following information making sure to substitute the IP address of the broker host:
PEERDNS="no" DNS1=10.4.59.x
2.3.3. Set the host name for your server
You need to set the hostname of your broker host to reflect the new name that you are applying to this server. For this chapter, we will be using broker.example.com.
In order to accomplish this task, edit the /etc/sysconfig/network file and locate the section labeled HOSTNAME. The line that you want to replace should look like this:
HOSTNAME=localhost.localdomain
Change the /etc/sysconfig/network file to reflect the following change:
HOSTNAME=broker.example.com
Now that we have configured our hostname, we also need to set it for our current session by using the following command:
hostname broker.example.com
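You can confirm the change by running hostname with no arguments; it should now print broker.example.com:

hostname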
3. Prerequisite: MongoDB
Server used:
-
broker host
Tools used:
-
text editor
-
yum
-
mongo
-
chkconfig
-
service
-
lokkit
OpenShift Origin makes heavy use of MongoDB for storing internal information about users, gears, and other necessary items. If you are not familiar with MongoDB, you can read up on it at the official MongoDB site (http://www.mongodb.org). For the purpose of OpenShift Origin, you need to know that MongoDB is a document data storage system that uses JavaScript for the command syntax and stores all documents in a JSON format.
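As a quick illustration of that command syntax and document format, you can try the following once mongod has been installed and started in the sections below; the database and collection names here are arbitrary examples:

mongo test --eval 'db.example.insert({ "component": "broker", "note": "sample document" }); printjson(db.example.findOne())'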
3.1. Install the mongod server
In order to use MongoDB, you will need to install the mongod server:
yum install -y mongodb-server mongodb libmongodb
This will pull in a number of RPM dependencies and install them automatically.
3.2. Configure mongod
MongoDB uses a configuration file for its settings. This file can be found at /etc/mongodb.conf. You will need to make a few changes to this file to ensure that MongoDB handles authentication correctly and that it is enabled to use small files.
3.2.1. Setup MongoDB smallfiles option
To enable small files support, add the following line to /etc/mongodb.conf (if the line is already present but commented out, simply remove the hash mark (#) at the beginning of the line to enable the setting):
smallfiles=true
Setting smallfiles=true configures MongoDB not to pre-allocate a huge database, which wastes a surprising amount of time and disk space and is unnecessary for the comparatively small amount of data that the broker will store in it. It is not absolutely necessary to set smallfiles=true, but for a new installation it saves a minute or two of initialization time and a fair amount of disk space.
3.2.2. Setup MongoDB authentication
To set up MongoDB, first ensure that auth is turned off in the /etc/mongodb.conf file. Edit the file and ensure that auth=true is commented out.
#auth=true
Start the MongoDB server so that we can run commands against the server.
service mongod start
Create the OpenShift broker user.
/usr/bin/mongo localhost/openshift_broker_dev --eval 'db.addUser("openshift", "<choose a password>")'
/usr/bin/mongo localhost/admin --eval 'db.addUser("openshift", "<password chosen above>")'
Stop the MongoDB server so that we can continue with other configuration.
service mongod stop
Edit the configuration file again and ensure that authentication is now enabled (and that the smallfiles setting from the earlier step is still in place):
auth=true
3.3. Firewall setup
If MongoDB is set up on a machine that is not running the broker, you will need to ensure that MongoDB is configured to listen on the external IP address and that the firewall allows MongoDB connections to pass through.
Edit the mongodb.conf file and update the bind_ip setting.
bind_ip=127.0.0.1,10.4.59.x
Enable MongoDB access on the firewall.
lokkit --port=27017:tcp
3.4. Set mongod to Start on Boot
MongoDB is an essential part of the OpenShift Origin platform. Because of this, you must ensure that mongod is configured to start on system boot:
chkconfig mongod on
By default, when you install mongod via the yum command, the service is not started. You can verify this with the following:
service mongod status
This should return "mongod is stopped". In order to start the service, simply issue:
service mongod start
Now verify that mongod was installed and configured correctly. To do this, use the mongo shell client tool. If you are familiar with MySQL or Postgres, this is similar to the mysql client’s interactive SQL shell. However, because MongoDB is a NoSQL database, it does not respond to traditional SQL-style commands.
In order to start the mongo shell, enter the following command:
mongo admin
You should see a confirmation message that you are using MongoDB shell version: x.x.x and that you are connecting to the admin database. Authenticate against the database with the user you created above.
db.auth('openshift',"<password chosen above>")
To verify even further, you can list all of the available databases that the database currently has:
show dbs
You will then be presented with a list of valid databases that are currently available to the mongod service.
admin                 0.0625GB
local                 0.03125GB
openshift_broker_dev  0.0625GB
To exit the Mongo shell, you can simply type exit:
exit
4. Prerequisite: ActiveMQ
ActiveMQ is a fully open source messaging service that is available for use across many different programming languages and environments. OpenShift Origin makes use of this technology to handle communications between the broker host and the node hosts in the deployment. In order to make use of this messaging service, you need to install and configure ActiveMQ on your broker host.
Server used:
-
broker host
Tools used:
-
text editor
-
yum
-
wget
-
lokkit
-
chkconfig
-
service
4.1. Installation
Installing ActiveMQ is a fairly easy process, as the packages are included in the RPM repositories that are already configured on your broker host. You need to install both the server and client packages by using the following command:
yum install -y activemq activemq-client
This will also install all of the dependencies required for the packages if they aren’t already installed. Notably, Java 1.6 and the libraries for use with the Ruby programming language may be installed.
4.2. Configuration
ActiveMQ uses an XML configuration file that is located at /etc/activemq/activemq.xml. This installation guide is accompanied by a template version of activemq.xml that you can use to replace this file. But first, back up the original file:
cd /etc/activemq
mv activemq.xml activemq.orig
Copy the basic configuration template in to /etc/activemq/activemq.xml.
curl -o /etc/activemq/activemq.xml <link to template above>
Copy the jetty template in to /etc/activemq/jetty.xml.
curl -o /etc/activemq/jetty.xml <link to template above>
Copy the jetty auth template in to /etc/activemq/jetty-realm.properties.
curl -o /etc/activemq/jetty-realm.properties <link to template above>
Once you have the configuration template in place, you will need to make a few minor changes to the configuration.
First, replace the hostname provided (activemq.example.com) with the FQDN of your broker host. For example, the following line:
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="activemq.example.com" dataDirectory="${activemq.data}">
Should become:
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="<your broker name>" dataDirectory="${activemq.data}">
The ${activemq.data} text should be entered as written; it does not refer to a shell variable.
The second change is to provide your own credentials for authentication. The authentication information is stored inside the <simpleAuthenticationPlugin> block. Make the changes that you desire to the following code block:
<simpleAuthenticationPlugin>
  <users>
    <authenticationUser username="mcollective" password="marionette" groups="mcollective,everyone"/>
    <authenticationUser username="admin" password="<choose a password>" groups="mcollective,admin,everyone"/>
  </users>
</simpleAuthenticationPlugin>
Next, modify /etc/activemq/jetty-realm.properties and set a password for the admin user:
admin: [password], admin
4.3. Firewall Rules / Start on Boot
The broker host firewall rules must be adjusted to allow MCollective to communicate on port 61613:
lokkit --port=61613:tcp
Finally, you need to enable the ActiveMQ service to start on boot as well as start the service for the first time.
chkconfig activemq on
service activemq start
4.4. Verify that ActiveMQ is Working
Now that ActiveMQ has been installed, configured, and started, verify that the web console is working as expected. The ActiveMQ web console should be running and listening on port 8161. In order to verify that everything worked correctly, load the following URL in a web browser:
http://localhost:8161
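If you are working on the host itself without a graphical browser, checking the HTTP response code is a reasonable stand-in (illustrative); a 200 response, or a redirect to the console page, indicates that the console is listening:

curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8161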
Under the provided configuration instructions, the ActiveMQ console is only available on the localhost. If you want to be able to connect to it via HTTP remotely, you will need to either open the console port in the firewall or tunnel the port over SSH, as described below.
For example, the following command adds a rule to your firewall to allow connections to the ActiveMQ console.
Execute the following on the ActiveMQ host.
lokkit --port=8161:tcp
Alternatively, the following command creates an SSH tunnel, so that if you connect to port 8161 on your local host, the connection will be forwarded to port 8161 on the remote host, where the ActiveMQ console is listening.
Execute the following on your local machine.
# ssh -f -N -L 8161:localhost:8161 [email protected]
Note: Use the username and password from the jetty-realm.properties file to log into the console.
5. Prerequisite: MCollective client
Server used:
-
broker host
Tools used:
-
text editor
-
yum
For communication between the broker host and the gear nodes, OpenShift Origin uses MCollective. You may be wondering how MCollective is different from ActiveMQ. ActiveMQ is the messenger server that provides a queue of transport messages. You can think of MCollective as the client that actually sends and receives those messages. For example, if we want to create a new gear on an OpenShift Origin node, MCollective would receive the "create gear" message from ActiveMQ and perform the operation.
5.1. Installation
In order to use MCollective, first install it via yum:
yum install -y ruby193-mcollective-client
5.2. Configuration
Replace the contents of the /opt/rh/ruby193/root/etc/mcollective/client.cfg with the following information:
cat <<EOF > /opt/rh/ruby193/root/etc/mcollective/client.cfg
main_collective = mcollective
collectives = mcollective
libdir = /opt/rh/ruby193/root/usr/libexec/mcollective
logfile = /var/log/openshift/broker/ruby193-mcollective-client.log
loglevel = debug

# Plugins
securityprovider = psk
plugin.psk = unset
connector = activemq
plugin.activemq.pool.size = 1
plugin.activemq.pool.1.host = broker.example.com
plugin.activemq.pool.1.port = 61613
plugin.activemq.pool.1.user = mcollective
plugin.activemq.pool.1.password = marionette
plugin.activemq.stomp_1_0_fallback = 0
plugin.activemq.heartbeat_interval = 30
plugin.activemq.max_hbread_fails = 2
plugin.activemq.max_hbrlck_fails = 2
EOF
Update the plugin.activemq.pool.1.password setting to match the mcollective user's password that you set in the ActiveMQ configuration.
Now you have configured the MCollective client to connect to ActiveMQ running on the local host. In a typical deployment, you will configure MCollective to connect to ActiveMQ running on a remote server by putting the appropriate hostname for the plugin.activemq.pool.1.host setting.
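Once at least one node host has been configured (covered later in this guide), you can sanity-check the broker-to-node messaging path. The exact invocation can vary with your setup, but running mco ping under the SCL environment, for example, should list each responding node:

scl enable ruby193 "mco ping"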
6. The Broker
Server used:
-
broker host
Tools used:
-
text editor
-
yum
-
sed
-
chkconfig
-
lokkit
-
openssl
-
ssh-keygen
-
fixfiles
-
restorecon
6.1. Install Necessary Packages
In order for users to interact with the OpenShift Origin platform, they will typically use client tools or the web console. These tools communicate with the broker via a REST API that is also accessible for writing third party applications and tools. In order to use the broker application, we need to install several packages from the OpenShift Origin repository.
yum install -y openshift-origin-broker openshift-origin-broker-util \
    rubygem-openshift-origin-auth-remote-user \
    rubygem-openshift-origin-auth-mongo \
    rubygem-openshift-origin-msg-broker-mcollective \
    rubygem-openshift-origin-dns-avahi \
    rubygem-openshift-origin-dns-nsupdate \
    rubygem-openshift-origin-dns-route53 \
    ruby193-rubygem-passenger ruby193-mod_passenger
Depending on your connection and the speed of your broker host, this installation may take several minutes.
6.2. Configure the Firewall and Enable Service at Boot
The broker application requires a number of services to be running in order to function properly. Configure them to start at boot time:
chkconfig network on
chkconfig sshd on
Additionally, modify the firewall rules to ensure that the traffic for these services is accepted:
lokkit --service=ssh
lokkit --service=https
lokkit --service=http
6.3. Generate access keys
Now you will need to generate access keys that will allow some of the services (Jenkins for example) to communicate to the broker.
openssl genrsa -out /etc/openshift/server_priv.pem 2048
openssl rsa -in /etc/openshift/server_priv.pem -pubout > /etc/openshift/server_pub.pem
You will also need to generate an SSH key pair to allow communication between the broker host and any nodes that you have configured. For example, the broker host will use this key to transfer data between nodes when migrating a gear from one node host to another.
Remember, the broker host is the director of communications and the node hosts actually contain all of the application gears that your users create.
In order to generate this SSH keypair, perform the following commands:
ssh-keygen -t rsa -b 2048 -f ~/.ssh/rsync_id_rsa
Press <enter> for the passphrase. This generates a passwordless key which is convenient for machine-to-machine authentication but is inherently less secure than other alternatives. Finally, copy the private and public key files to the openshift directory:
cp ~/.ssh/rsync_id_rsa* /etc/openshift/
Later, during configuration of the node hosts, you will copy this newly created key to each node host.
6.4. Configure SELinux
SELinux has several variables that we want to ensure are set correctly. These variables include the following:
Variable Name | Description
---|---
httpd_unified | Allow the broker to write files in the "http" file context
httpd_can_network_connect | Allow the broker application to access the network
httpd_can_network_relay | Allow the SSL termination Apache instance to access the back-end broker application
httpd_run_stickshift | Enable Passenger-related permissions
named_write_master_zones | Allow the broker application to configure DNS
allow_ypbind | Allow the broker application to use ypbind to communicate directly with the name server
httpd_verify_dns | Allow Apache to query NS records
httpd_enable_homedirs | Allow Apache to access home directories
httpd_execmem | Allow httpd to execute programs that require memory addresses that are both executable and writable
httpd_read_user_content | Allow httpd to read user-generated content
In order to set all of these variables correctly, enter the following:
setsebool -P httpd_unified=on httpd_can_network_connect=on httpd_can_network_relay=on \
    httpd_run_stickshift=on named_write_master_zones=on allow_ypbind=on \
    httpd_verify_dns=on httpd_enable_homedirs=on httpd_execmem=on \
    httpd_read_user_content=on
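You can spot-check that the booleans took effect; each should report "on" (illustrative):

getsebool httpd_run_stickshift httpd_can_network_connect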
You will also need to set several files and directories with the proper SELinux contexts. Issue the following commands:
( echo fcontext -a -t httpd_var_run_t '/var/www/openshift/broker/httpd/run(/.*)?'
  echo fcontext -a -t httpd_tmp_t '/var/www/openshift/broker/tmp(/.*)?'
  echo fcontext -a -t httpd_log_t '/var/log/openshift/broker(/.*)?'
) | semanage -i -

chcon -R -t httpd_log_t /var/log/openshift/broker
chcon -R -t httpd_tmp_t /var/www/openshift/broker/httpd/run
chcon -R -t httpd_var_run_t /var/www/openshift/broker/httpd/run

fixfiles -R ruby193-rubygem-passenger restore
fixfiles -R ruby193-mod_passenger restore
fixfiles -R rubygem-passenger restore
fixfiles -R mod_passenger restore

restorecon -rv /var/run
restorecon -rv /opt
restorecon -rv /var/www/openshift/broker/tmp
restorecon -v '/var/log/openshift/broker/user_action.log'
The fixfiles command updates SELinux’s database that associates pathnames with SELinux contexts. The restorecon command uses this database to update the SELinux contexts of the specified files on the file system itself so that those contexts will be in effect when the kernel enforces policy. See the manual pages of the fixfiles and restorecon commands for further details.
6.5. Understand and Change the Broker Configuration
6.5.1. Gear Sizes
The OpenShift Origin broker uses a configuration file to define several of the attributes for controlling how the platform as a service works. This configuration file is located at /etc/openshift/broker.conf. For instance, the complete set of valid gear types that a user can create are defined using the VALID_GEAR_SIZES variable.
# Comma separated list of valid gear sizes
VALID_GEAR_SIZES="small,medium,large"
The related DEFAULT_GEAR_CAPABILITIES variable dictates which of these is available to a user when the user is created:
# Default gear sizes (comma-separated) allowed to a new user
DEFAULT_GEAR_CAPABILITIES="small,medium"
And finally the DEFAULT_GEAR_SIZE variable determines which size gear a new application will get by default:
# Default gear size for a new gear
DEFAULT_GEAR_SIZE="small"
6.5.2. Cloud Domain
Edit this file and ensure that the CLOUD_DOMAIN variable is set to correctly reflect the domain that you are using to configure this deployment of OpenShift Origin.
# Domain suffix to use for applications (Must match node config)
CLOUD_DOMAIN="example.com"
6.5.3. MongoDB settings
Edit the Mongo variables to connect to the MongoDB server:
# Comma separated list of replica set servers. Eg: "<host-1>:<port-1>,<host-2>:<port-2>,..."
MONGO_HOST_PORT="<mongodb server FQDN>:27017"

# Mongo DB user configured earlier
MONGO_USER="openshift"

# Password for user configured earlier
MONGO_PASSWORD="<password used in MongoDB section>"

# Broker metadata database
MONGO_DB="openshift_broker_dev"
6.5.4. Authentication Salt
Generate some random bits which we will use for the broker auth salt.
openssl rand -base64 64
Output from this command should look like:
ds+R5kYI5Jvr0uanclmkavrXBSl0KQ34y3Uw4HrsiUNaKjYjgN/tVxV5mYPukpFR
radl1SiQ5lmr41zDo4QQww==
Copy this value and set the AUTH_SALT variable in the /etc/openshift/broker.conf file.
AUTH_SALT="ds+R5kYI5Jvr0uanclmkavrXBSl0KQ34y3Uw4HrsiUNaKjYjgN/tVxV5mYPukpFRradl1SiQ5lmr41zDo4QQww=="
Note: If you are setting up a multi-broker infrastructure, the authentication salt must be the same on all brokers.
6.5.5. Session Secret
Generate some random bits which we will use for the broker session secret.
openssl rand -base64 64
Copy this value and set the SESSION_SECRET variable in the /etc/openshift/broker.conf file.
SESSION_SECRET="rFeKpEGI0TlTECvLgBPDjHOS9ED6KpztUubaZFvrOm4tJR8Gv0poVWj77i0hqDj2j1ttWTLiCIPRtuAfxV1ILg=="
Note: If you are setting up a multi-broker infrastructure, the session secret must be the same on all brokers.
While you are in this file, you can change any other settings that need to be configured for your specific installation.
7. Broker Plugins
Server used:
-
broker host
Tools used:
-
text editor
-
cat
-
echo
-
environment variables
-
pushd
-
semodule
-
htpasswd
-
mongo
-
bundler
-
chkconfig
-
service
OpenShift Origin uses a plugin system for core system components such as DNS, authentication, and messaging. In order to make use of these plugins, you need to configure them and provide the correct configuration items to ensure that they work correctly. The plugin configuration files are located in the /etc/openshift/plugins.d directory. Begin by changing to that directory:
cd /etc/openshift/plugins.d
Once you are in this directory, you will see that OpenShift Origin provides several example configuration files for you to use to speed up the process of configuring these plugins. You should see three example files.
-
openshift-origin-auth-remote-user.conf.example
-
openshift-origin-dns-nsupdate.conf.example
-
openshift-origin-msg-broker-mcollective.conf.example
7.1. Create Configuration Files
To begin, copy the .example files to actual configuration files that will be used by OpenShift Origin:
cp openshift-origin-auth-remote-user.conf.example openshift-origin-auth-remote-user.conf
cp openshift-origin-msg-broker-mcollective.conf.example openshift-origin-msg-broker-mcollective.conf
sed -i 's/MCOLLECTIVE_CONFIG.*/MCOLLECTIVE_CONFIG=\/opt\/rh\/ruby193\/root\/etc\/mcollective\/client.cfg/g' /etc/openshift/plugins.d/openshift-origin-msg-broker-mcollective.conf
The broker application will check the plugins.d directory for files ending in .conf. The presence of a .conf file enables the corresponding plug-in. Thus, for example, copying the openshift-origin-auth-remote-user.conf.example file to openshift-origin-auth-remote-user.conf enables the auth-remote-user plug-in.
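You can confirm which plug-ins are currently enabled by listing the configuration files (illustrative):

ls /etc/openshift/plugins.d/*.conf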
7.2. Configure the DNS plugin
If you installed a DNS server on the same host as the broker, you can create a DNS configuration file using the cat command instead of starting with the example DNS configuration file. You can do that by taking advantage of the $domain and $keyfile environment variables that you created during that process. If you no longer have these variables set, you can recreate them with the following commands:
domain=example.com
keyfile=/var/named/${domain}.key
cd /var/named
KEY="$(grep Key: K${domain}*.private | cut -d ' ' -f 2)"
To verify that your variables were recreated correctly, echo the contents of your keyfile and verify your $KEY variable is set correctly:
cat $keyfile
echo $KEY
If you performed the above steps correctly, you should see output similar to this:
key example.com {
  algorithm HMAC-MD5;
  secret "3RH8tLp6fvX4RVV9ny2lm0tZpTjXhB62ieC6CN1Fh/2468Z1+6lX4wpCJ6sfYH6u2+//gbDDStDX+aPMtSiNFw==";
};
and
3RH8tLp6fvX4RVV9ny2lm0tZpTjXhB62ieC6CN1Fh/2468Z1+6lX4wpCJ6sfYH6u2+//gbDDStDX+aPMtSiNFw==
Now that you have your variables set up correctly, you can create the openshift-origin-dns-nsupdate.conf file. Ensure that you are still in the /etc/openshift/plugins.d directory and issue the following command:
cd /etc/openshift/plugins.d
cat << EOF > openshift-origin-dns-nsupdate.conf
BIND_SERVER="127.0.0.1"
BIND_PORT=53
BIND_KEYNAME="${domain}"
BIND_KEYVALUE="${KEY}"
BIND_ZONE="${domain}"
EOF
After running this command, cat the contents of the file and ensure they look similar to the following:
BIND_SERVER="127.0.0.1" BIND_PORT=53 BIND_KEYNAME="example.com" BIND_KEYVALUE="3RH8tLp6fvX4RVV9ny2lm0tZpTjXhB62ieC6CN1Fh/2468Z1+6lX4wpCJ6sfYH6u2+//gbDDStDX+aPMtSiNFw==" BIND_ZONE="example.com"
7.3. Configure an Authentication Plugin
OpenShift Origin supports several different authentication systems for authorizing users. In a production environment, you will probably want to use LDAP, Kerberos, or some other enterprise-class authorization and authentication system. For this reference system we will use Basic Auth, which relies on an htpasswd file to configure authentication. OpenShift Origin provides the following example authentication configuration files in the /var/www/openshift/broker/httpd/conf.d/ directory:
Authentication Type | Configuration File
---|---
Mongo Auth | openshift-origin-auth-mongo.conf.sample
Basic Auth | openshift-origin-auth-remote-user-basic.conf.sample
Kerberos | openshift-origin-auth-remote-user-kerberos.conf.sample
LDAP | openshift-origin-auth-remote-user-ldap.conf.sample
To use Basic Auth, copy the sample configuration file to the actual configuration file:
cp /var/www/openshift/broker/httpd/conf.d/openshift-origin-auth-remote-user-basic.conf.sample \
   /var/www/openshift/broker/httpd/conf.d/openshift-origin-auth-remote-user.conf
This configuration file specifies that the AuthUserFile is located at /etc/openshift/htpasswd. At this point, that file doesn’t exist, so you will need to create it and add a user named demo.
htpasswd -c /etc/openshift/htpasswd demo
The -c option to htpasswd creates a new file, overwriting any existing htpasswd file. If your intention is to add a new user to an existing htpasswd file, drop the -c option.
After entering the above command, you will be prompted for a password for the user demo. Once you have provided that password, view the contents of the htpasswd file to ensure that the user was added correctly. Make a note of the password as you will need it later.
cat /etc/openshift/htpasswd
If the operation was a success, you should see output similar to the following:
demo:$apr1$Q7yO3MF7$rmSZ7SI.vITfEiLtkKSMZ/
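If you later want to add more users to the same file, omit the -c option; for example, to add a hypothetical user named developer1:

htpasswd /etc/openshift/htpasswd developer1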
7.4. Configure the Administrative Console
The optional OpenShift Origin administrative console (a.k.a. "admin console") gives OpenShift administrators an at-a-glance view of an OpenShift deployment so that they can search and navigate OpenShift entities and make reasonable inferences about adding new capacity. Install it with:
yum install -y rubygem-openshift-origin-admin-console
The broker will load the plugin if the gem is installed and its configuration file is placed at /etc/openshift/plugins.d/openshift-origin-admin-console.conf (or ...-dev.conf for specifying development-mode settings), which the RPM does by default. Edit the configuration file as needed; options are commented, and discussed more fully in the Administration Guide.
7.4.1. Adding to an existing deployment
If you are adding this plugin to an existing deployment, as opposed to during the initial install, you may have some extra steps. As above, install the plugin with:
yum install rubygem-openshift-origin-admin-console
7.4.2. Browsing to the Admin Console
Even when the admin console is included in the broker app, standard broker host httpd proxy configuration does not allow external access to its URI (which you can change in the config file; by default it is /admin-console). This is a security feature to avoid publicly exposing the console by accident.
In order to access the console, you can either forward the server’s port for local viewing or modify the proxy configuration.
Port forwarding
You can view the admin console without exposing it externally by forwarding its port to your local host for viewing with a browser. For instance,
$ ssh -f [email protected] -L 8080:localhost:8080 -N
This connects via ssh to [email protected] and attaches your local port 8080 (the first number) to the remote server’s local port 8080, which is where the broker application is listening behind the host proxy. Now just browse to http://localhost:8080/admin-console to view it.
7.5. Set Services to Start on Boot
The last step in configuring our broker application is to ensure that all of the necessary services are started and that they are configured to start upon system boot.
chkconfig openshift-broker on
This will ensure that the broker starts upon next system boot. However, you also need to start the broker application to run now.
service openshift-broker start
7.6. Verify the Broker REST API
In order to verify that the REST API is functioning for the broker host, you can use the following curl command:
curl -u <username>:<password> http://localhost:8080/broker/rest/api.json
You should see the following output:
{ "api_version": 1.5, "data": { "API": { "href": "https://broker.example.com/broker/rest/api", "method": "GET", "optional_params": [], "rel": "API entry point", "required_params": [] }, "GET_ENVIRONMENT": { "href": "https://broker.example.com/broker/rest/environment", "method": "GET", "optional_params": [], "rel": "Get environment information", "required_params": [] }, "GET_USER": { "href": "https://broker.example.com/broker/rest/user", "method": "GET", "optional_params": [], "rel": "Get user information", "required_params": [] }, "LIST_DOMAINS": { "href": "https://broker.example.com/broker/rest/domains", "method": "GET", "optional_params": [], "rel": "List domains", "required_params": [] }, "ADD_DOMAIN": { "href": "https://broker.example.com/broker/rest/domains", "method": "POST", "optional_params": [], "rel": "Create new domain", "required_params": [{ "description": "Name of the domain", "invalid_options": [], "name": "id", "type": "string", "valid_options": [] }] }, "LIST_CARTRIDGES": { "href": "https://broker.example.com/broker/rest/cartridges", "method": "GET", "optional_params": [], "rel": "List cartridges", "required_params": [] }, "LIST_AUTHORIZATIONS": { "href": "https://broker.example.com/broker/rest/user/authorizations", "method": "GET", "optional_params": [], "rel": "List authorizations", "required_params": [] }, "SHOW_AUTHORIZATION": { "href": "https://broker.example.com/broker/rest/user/authorization/:id", "method": "GET", "optional_params": [], "rel": "Retrieve authorization :id", "required_params": [{ "description": "Unique identifier of the authorization", "invalid_options": [], "name": ":id", "type": "string", "valid_options": [] }] }, "ADD_AUTHORIZATION": { "href": "https://broker.example.com/broker/rest/user/authorizations", "method": "POST", "optional_params": [{ "default_value": "userinfo", "description": "Select one or more scopes that this authorization will grant access to:\n\n* session\n Grants a client the authority to perform all API actions against your account. Valid for 1 day.\n* read\n Allows the client to access resources you own without making changes. Does not allow access to view authorization tokens. Valid for 1 day.\n* userinfo\n Allows a client to view your login name, unique id, and your user capabilities. Valid for 1 day.", "name": "scope", "type": "string", "valid_options": ["session", "read", "userinfo"] }, { "default_value": null, "description": "A description to remind you what this authorization is for.", "name": "note", "type": "string", "valid_options": [] }, { "default_value": -1, "description": "The number of seconds before this authorization expires. 
Out of range values will be set to the maximum allowed time.", "name": "expires_in", "type": "integer", "valid_options": [] }, { "default_value": false, "description": "Attempt to locate and reuse an authorization that matches the scope and note and has not yet expired.", "name": "reuse", "type": "boolean", "valid_options": [true, false] }], "rel": "Add new authorization", "required_params": [] }, "LIST_QUICKSTARTS": { "href": "https://broker.example.com/broker/rest/quickstarts", "method": "GET", "optional_params": [], "rel": "List quickstarts", "required_params": [] }, "SHOW_QUICKSTART": { "href": "https://broker.example.com/broker/rest/quickstart/:id", "method": "GET", "optional_params": [], "rel": "Retrieve quickstart with :id", "required_params": [{ "description": "Unique identifier of the quickstart", "invalid_options": [], "name": ":id", "type": "string", "valid_options": [] }] } }, "messages": [], "status": "ok", "supported_api_versions": [1.0, 1.1, 1.2, 1.3, 1.4, 1.5], "type": "links", "version": "1.5" }
7.7. Start apache
Start the Apache server on the broker host to proxy web traffic to the broker application.
chkconfig httpd on
service httpd start
In order to verify that the REST API is functioning for the broker host, you can use the following curl command:
curl -u <username>:<password> -k https://broker.example.com/broker/rest/api.json
At this point you have a fully functional Broker. In order to work with it, proceed through the Web Console installation.
8. The Web Console
Server used:
-
broker host
Tools used:
-
text editor
-
yum
-
service
-
chkconfig
The OpenShift Origin Web Console is written in Ruby and will provide a graphical user interface for users of the system to create and manage application gears that are deployed on the gear hosts.
8.1. Install the Web Console RPMs
The installation of the web console can be performed with a simple yum install command, but note that it will pull in many dependencies from the Ruby programming language. At the time of this writing, executing the following command installed 77 additional packages.
yum install -y openshift-origin-console
Depending on your connection and the speed of your broker host, this installation may take several minutes.
8.2. Understand and Change the Console Configuration
8.2.1. Session Secret
Generate some random bits which we will use for the console session secret.
openssl rand -base64 64
Copy this value and set the SESSION_SECRET variable in the /etc/openshift/console.conf file.
SESSION_SECRET="rFeKpEGI0TlTECvLgBPDjHOS9ED6KpztUubaZFvrOm4tJR8Gv0poVWj77i0hqDj2j1ttWTLiCIPRtuAfxV1ILg=="
Note: If you are setting up a multi-console infrastructure, the session secret must be the same on all console servers.
8.3. Configure Authentication for the Console
If you are building the reference configuration described in this document, then you have configured the broker application for Basic Authentication. What you actually configured was authentication for the Broker REST API. The console application uses a separate authentication scheme for authenticating users to the web console. This will enable you to restrict which users you want to have access to the REST API and keep that authentication separate from the web based user console.
The openshift-console package created some sample authentication files for us. These files are located in the /var/www/openshift/console/httpd/conf.d directory. For this reference configuration, you will use the same htpasswd file that you created when you set up authentication for the Broker application. In order to do this, issue the following commands:
cd /var/www/openshift/console/httpd/conf.d
cp openshift-origin-auth-remote-user-basic.conf.sample openshift-origin-auth-remote-user-basic.conf
8.3.1. SSL Configuration
The broker proxy configuration for apache adds a VirtualHost configuration to listen on port 443.
This may conflict with the default SSL VirtualHost configuration. Check /etc/httpd/conf.d/ssl.conf and completely remove the configuration if you find one there.
The broker proxy configuration itself can be found in /etc/httpd/conf.d/*openshift_origin_broker_proxy.conf:
<VirtualHost *:443>
  # ServerName we will inherit from other config;
  # ServerAlias is to make sure "localhost" traffic goes here regardless.
  ServerAlias localhost
  ServerAdmin root@localhost
  DocumentRoot /var/www/html
  RewriteEngine On
  RewriteRule ^/$ https://%{HTTP_HOST}/console [R,L]

  SSLEngine on
  SSLProxyEngine on
  SSLCertificateFile /etc/pki/tls/certs/localhost.crt
  SSLCertificateKeyFile /etc/pki/tls/private/localhost.key
  RequestHeader set X_FORWARDED_PROTO 'https'
  RequestHeader set Front-End-Https "On"
  ProxyTimeout 300
  ProxyPass /console http://127.0.0.1:8118/console
  ProxyPassReverse /console http://127.0.0.1:8118/console
  ProxyPass /broker http://127.0.0.1:8080/broker
  ProxyPassReverse /broker http://127.0.0.1:8080/broker
  ProxyPass /assets http://127.0.0.1:8118/console/assets           (1)
  ProxyPassReverse /assets http://127.0.0.1:8118/console/assets    (1)
</VirtualHost>
(1) In order for icons to show up correctly on the web console, you should ensure that the VirtualHost definition in that file contains these ProxyPass and ProxyPassReverse lines for /assets.
8.4. Configure SELinux
SELinux has several variables that we want to ensure are set correctly. These variables include the following:
Variable Name | Description
---|---
httpd_unified | Allow the broker to write files in the "http" file context
httpd_can_network_connect | Allow the broker application to access the network
httpd_can_network_relay | Allow the SSL termination Apache instance to access the back-end broker application
httpd_run_stickshift | Enable Passenger-related permissions
named_write_master_zones | Allow the broker application to configure DNS
allow_ypbind | Allow the broker application to use ypbind to communicate directly with the name server
httpd_verify_dns | Allow Apache to query NS records
httpd_enable_homedirs | Allow Apache to access home directories
httpd_execmem | Allow httpd to execute programs that require memory addresses that are both executable and writable
httpd_read_user_content | Allow httpd to read user-generated content
In order to set all of these variables correctly, enter the following:
setsebool -P httpd_unified=on httpd_can_network_connect=on httpd_can_network_relay=on \
    httpd_run_stickshift=on named_write_master_zones=on allow_ypbind=on \
    httpd_verify_dns=on httpd_enable_homedirs=on httpd_execmem=on \
    httpd_read_user_content=on
You will also need to set several files and directories with the proper SELinux contexts. Issue the following commands:
( echo fcontext -a -t httpd_log_t '/var/log/openshift/console(/.*)?'
  echo fcontext -a -t httpd_log_t '/var/log/openshift/console/httpd(/.*)?'
  echo fcontext -a -t httpd_var_run_t '/var/www/openshift/console/httpd/run(/.*)?'
) | semanage -i -

fixfiles -R ruby193-rubygem-passenger restore
fixfiles -R ruby193-mod_passenger restore
fixfiles -R rubygem-passenger restore
fixfiles -R mod_passenger restore

restorecon -rv /var/run
restorecon -rv /opt
restorecon -R /var/log/openshift/console
restorecon -R /var/www/openshift/console
The fixfiles command updates SELinux’s database that associates pathnames with SELinux contexts. The restorecon command uses this database to update the SELinux contexts of the specified files on the file system itself so that those contexts will be in effect when the kernel enforces policy. See the manual pages of the fixfiles and restorecon commands for further details.
8.5. Set Console to Start on Boot
Start the service and ensure it starts on boot:
chkconfig openshift-console on
service openshift-console start
Once completed, the console will prompt the user to provide their login credentials as specified in the /etc/openshift/htpasswd file.
Seeing an error page after authenticating to the console is expected at this point. The web console will not be fully active until you add a node host to the Origin system.
9. The Node Host
Servers used:
-
Node host
-
Broker host
Tools used:
-
text editor
-
yum
-
ntpdate
-
dig
-
oo-register-dns
-
cat
-
scp
-
ssh
9.1. Register a DNS entry for the Node Host
SSH to your broker application host and set a variable that points to your keyfile. The following command should work after you replace "example.com" with the domain that you are going to use.
You can skip this section if you are building an all-in-one environment.
keyfile=/var/named/example.com.key
In order to configure your DNS to resolve your node host, we need to tell our BIND server about the host. Run the following command and replace the IP address with the correct IP address of your node.
Execute the following on the broker host:
oo-register-dns -h node -d example.com -n 10.4.59.y -k ${keyfile}
Now that you have added your node host to the DNS server, the broker application host should be able to resolve the node host by referring to it by name. Let’s test this:
dig @127.0.0.1 node.example.com
This should resolve to the 10.4.59.y IP address that you specified for the node host in the oo-register-dns command.
9.2. Configure SSH Key Authentication
While on the broker application host, you need to copy the SSH key that you previously created over to the node. This will enable operations to work from inside of OpenShift Origin without requiring a password.
If you have not done so already, create a .ssh directory for the root user on the node host:

$ mkdir -m 0700 -p /root/.ssh
Once you connect to the broker host, copy the key with the following command:
Execute the following on the broker host:
scp /etc/openshift/rsync_id_rsa.pub [email protected]:/root/.ssh

Once you enter that command, you will be prompted to authenticate to the node host. If the node host is the same machine as your broker host (an all-in-one environment), copy the key locally instead:

cp -f /etc/openshift/rsync_id_rsa.pub /root/.ssh/
At this point, you need to login to your node host to add the newly copied key to our authorized_keys. SSH into your node host and run the following:
Execute the following on the node host:
cat /root/.ssh/rsync_id_rsa.pub >> /root/.ssh/authorized_keys
Now that your key has been copied from your broker application host to your node host, let’s verify that it was copied correctly and added to the authorized_keys file. Once you issue the following command, you should be authenticated to the node host without having to specify the root user password.
Verify the key by executing the following on the broker host:
ssh -i /root/.ssh/rsync_id_rsa [email protected]
9.3. Configure DNS Resolution on the Node
Now you need to configure the node host to use the BIND server that was installed and configured on the broker application host. This is a fairly straightforward process of adding the IP address of the DNS server to the /etc/resolv.conf on the node host.
You can skip this section if you are building an all-in-one environment. |
Edit this file and add the following line, making sure to use the correct IP address of your broker host:
Perform this change on the node host:
nameserver 10.4.59.x
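To verify that the node host is now using the broker's BIND server, resolve a hostname from the node; both forms below should return the broker's 10.4.59.x address (the second queries the broker's BIND server explicitly):

dig +short broker.example.com
dig +short @10.4.59.x broker.example.com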
9.4. Configure the DHCP Client and Hostname
You can skip this section if you are building an all-in-one environment.
On the node host, configure your system settings to prepend the DNS server to the resolv.conf file on system boot. This will allow the node host to resolve references to broker.example.com to ensure that all pieces of OpenShift Origin can communicate with one another. This process is similar to setting up the dhclient-eth0.conf configuration file for the broker application.
This step assumes that your node host is using the eth0 device for network connectivity. If that is not the case, replace eth0 with the correct Ethernet device for your host. |
Edit the /etc/dhcp/dhclient-eth0.conf file (create it if it doesn’t exist) and add the following information, replacing the IP address with the correct IP of your broker application host:
prepend domain-name-servers 10.4.59.x;
supersede host-name "node";
supersede domain-name "example.com";
Next, update your network scripts to use the DNS server. Edit the /etc/sysconfig/network-scripts/ifcfg-<eth device> file and add the following, substituting the IP address of the broker host:
PEERDNS="no"
DNS1=10.4.59.x
Now set the hostname for the node host to correctly reflect node.example.com.
Edit the /etc/sysconfig/network file and change the HOSTNAME entry to the following:
HOSTNAME=node.example.com
Finally, set the hostname for your current session by issuing the hostname command at the command prompt.
# hostname node.example.com
Verify that the hostname was set correctly by running the hostname command. If it was set correctly, you should see node.example.com as the output.
# hostname
9.5. MCollective on the Node Host
Server used:
-
node host
Tools used:
-
text editor
-
yum
-
chkconfig
-
service
-
mco ping
MCollective is the tool that OpenShift Origin uses to send and receive messages via the ActiveMQ messaging server. In order for the node host to send and receive messages with the broker application, you need to install and configure MCollective on the node host to communicate with the broker application.
9.5.1. Install MCollective
In order to install MCollective on the node host, you will need to install the openshift-origin-msg-node-mcollective package that is provided by the OpenShift Origin repository:
yum install -y openshift-origin-msg-node-mcollective
Depending on your connection and the speed of your node host, this installation may take several minutes. |
9.6. Configure MCollective
Configure the MCollective server to communicate with the broker application service. In order to accomplish this, replace the contents of the MCollective server.cfg configuration file to point to your correct stomp host. Edit the /opt/rh/ruby193/root/etc/mcollective/server.cfg file and add the following information. If you used a different hostname for your broker application host, ensure that you provide the correct stomp host. You also need to ensure that you use the same username and password that you specified in your ActiveMQ configuration.
cat <<EOF > /opt/rh/ruby193/root/etc/mcollective/server.cfg
main_collective = mcollective
collectives = mcollective
libdir = /opt/rh/ruby193/root/usr/libexec/mcollective
logfile = /var/log/openshift/node/ruby193-mcollective.log
loglevel = debug
daemonize = 1
direct_addressing = 0
registerinterval = 30

# Plugins
securityprovider = psk
plugin.psk = unset
connector = activemq
plugin.activemq.pool.size = 1
plugin.activemq.pool.1.host = broker.example.com
plugin.activemq.pool.1.port = 61613
plugin.activemq.pool.1.user = mcollective
plugin.activemq.pool.1.password = marionette
plugin.activemq.stomp_1_0_fallback = 0
plugin.activemq.heartbeat_interval = 30
plugin.activemq.max_hbread_fails = 2
plugin.activemq.max_hbrlck_fails = 2

# Facts
factsource = yaml
plugin.yaml = /opt/rh/ruby193/root/etc/mcollective/facts.yaml
EOF
Update the plugin.activemq.pool.1.password value to match the password you set in the ActiveMQ configuration.
Now ensure that MCollective is set to start on boot and also start the service for our current session.
chkconfig ruby193-mcollective on
service ruby193-mcollective start
At this point, MCollective on the node host should be able to communicate with the broker application host. You can verify this by running the oo-mco ping command on the broker.example.com host.
oo-mco ping
If MCollective was installed and configured correctly, you should see node.example.com in the output from the previous command.
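If the node does not show up in the oo-mco ping output, a common cause is that the node cannot reach the ActiveMQ stomp port on the broker. A quick bash-only connectivity check from the node host (no extra tools assumed) is:

(echo > /dev/tcp/broker.example.com/61613) && echo "stomp port reachable" || echo "stomp port blocked"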
9.7. Node Host Packages
Server used:
-
node host
Tools used:
-
text editor
-
yum
-
lokkit
-
chkconfig
Just as we installed specific packages that provide the source code and functionality for the broker application to work correctly, the node host also has a set of packages that need to be installed to properly identify the host as a node that will contain application gears.
9.7.1. Install the Core Packages
The following packages are required for your node host to work correctly:
-
rubygem-openshift-origin-node
-
rubygem-passenger-native
-
openshift-origin-port-proxy
-
openshift-origin-node-util
-
rubygem-openshift-origin-container-selinux
Installing these packages can be performed in one yum install command.
yum install -y rubygem-openshift-origin-node \
    rubygem-passenger-native \
    openshift-origin-port-proxy \
    openshift-origin-node-util \
    rubygem-openshift-origin-container-selinux
Depending on your connection and the speed of your node host, this installation may take several minutes. |
9.7.2. Install front-end plugins
Select the front-end plugins you would like your node host to run. These plugins provide various ways to route HTTP(S) and web-socket traffic to the gears running on the node.
apache-mod-rewrite |
Provides HTTP and HTTPS mappings for gears and is based on mod_rewrite. Conflicts with apache-vhost plugin. |
apache-vhost |
Provides HTTP and HTTPS mappings for gears and is based on apache vhosts. Conflicts with apache-mod-rewrite plugin. |
nodejs-websocket |
Provides web-socket mappings for gears. |
Set up Apache
This step is only required if you are planning to install the apache-mod-rewrite or apache-vhost plugin.
yum install -y httpd
Set the server hostname. If you are building an all-in-one environment where the broker and node share a host, use the broker's hostname:
echo "ServerName broker.example.com" > /etc/httpd/conf.d/000001_openshift_origin_node_servername.conf
On a dedicated node host, use the node's hostname instead:
echo "ServerName node.example.com" > /etc/httpd/conf.d/000001_openshift_origin_node_servername.conf
apache-mod-rewrite Plugin
yum install -y rubygem-openshift-origin-frontend-apache-mod-rewrite
This plugin conflicts with the apache-vhost plugin. |
If setting up an all-in-one machine, create broker and console routes in the node re-write map.
cat <<EOF > /tmp/nodes.broker_routes.txt
__default__ REDIRECT:/console
__default__/console TOHTTPS:127.0.0.1:8118/console
__default__/broker TOHTTPS:127.0.0.1:8080/broker
EOF
mkdir -p /etc/httpd/conf.d/openshift
cat /etc/httpd/conf.d/openshift/nodes.txt /tmp/nodes.broker_routes.txt > /etc/httpd/conf.d/openshift/nodes.txt.new
mv -f /etc/httpd/conf.d/openshift/nodes.txt.new /etc/httpd/conf.d/openshift/nodes.txt
httxt2dbm -f DB -i /etc/httpd/conf.d/openshift/nodes.txt -o /etc/httpd/conf.d/openshift/nodes.db.new
chown root:apache /etc/httpd/conf.d/openshift/nodes.txt /etc/httpd/conf.d/openshift/nodes.db.new
chmod 750 /etc/httpd/conf.d/openshift/nodes.txt /etc/httpd/conf.d/openshift/nodes.db.new
mv -f /etc/httpd/conf.d/openshift/nodes.db.new /etc/httpd/conf.d/openshift/nodes.db
Skip this section if you are setting up broker and node on separate machines. |
9.7.3. apache-vhost Plugin
yum install -y rubygem-openshift-origin-frontend-apache-vhost
This plugin conflicts with the apache-mod-rewrite plugin. |
9.7.4. nodejs-websocket Plugin
yum install -y openshift-origin-node-proxy rubygem-openshift-origin-frontend-nodejs-websocket facter
Start the node-proxy service
chkconfig openshift-node-web-proxy on
service openshift-node-web-proxy start
9.7.5. Setup firewall rules for gear port proxy
iptables -N rhc-app-comm
iptables -I INPUT 4 -m tcp -p tcp --dport 35531:65535 -m state --state NEW -j ACCEPT
iptables -I INPUT 5 -j rhc-app-comm
iptables -I OUTPUT 1 -j rhc-app-comm
/sbin/service iptables save
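To confirm the chain and rules were added as expected, list them; the INPUT chain should show the 35531:65535 port range and a jump to rhc-app-comm:

iptables -nL rhc-app-comm
iptables -nL INPUT --line-numbers | head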
9.7.6. Select and Install Built-In Cartridges to be Supported
Cartridges provide the functionality that a consumer of the PaaS can use to create specific application types, databases, or other functionality. OpenShift Origin provides a number of built-in cartridges as well as an extensive cartridge API that will allow you to create your own custom cartridge types for your specific deployment needs.
At the time of this writing, the following optional application cartridges are available for consumption on the node host.
-
openshift-origin-cartridge-python: Python cartridge
-
openshift-origin-cartridge-ruby: Ruby cartridge
-
openshift-origin-cartridge-nodejs: Provides Node.js
-
openshift-origin-cartridge-perl: Perl cartridge
-
openshift-origin-cartridge-php: PHP cartridge
-
openshift-origin-cartridge-diy: DIY cartridge
-
openshift-origin-cartridge-jbossas: Provides JBossAS7 support
-
openshift-origin-cartridge-jenkins: Provides Jenkins-1.4 support
If you want to provide scalable PHP applications for your consumers, you would want to install the openshift-origin-cartridge-haproxy and the openshift-origin-cartridge-php cartridges.
For database and other system related functionality, OpenShift Origin provides the following:
-
openshift-origin-cartridge-cron: Embedded cron support for OpenShift
-
openshift-origin-cartridge-jenkins-client: Embedded jenkins client support for OpenShift
-
openshift-origin-cartridge-mongodb: Embedded MongoDB support for OpenShift
-
openshift-origin-cartridge-10gen-mms-agent: Embedded 10gen MMS agent for performance monitoring of MongoDB
-
openshift-origin-cartridge-postgresql: Provides embedded PostgreSQL support
-
openshift-origin-cartridge-mysql: Provides embedded MySQL support
-
openshift-origin-cartridge-phpmyadmin: phpMyAdmin support for OpenShift
The only required cartridge is the openshift-origin-cartridge-cron package.
If you are installing a multi-node configuration, it is important to remember that each node host must have the same cartridges installed. |
Start by installing the cron package, which is required for all OpenShift Origin deployments.
yum install -y openshift-origin-cartridge-cron
If you are planning to install the openshift-origin-cartridge-jenkins* packages, you will first need to configure and install Jenkins:
curl -o /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat/jenkins.repo
rpm --import http://pkg.jenkins-ci.org/redhat/jenkins-ci.org.key
yum install -y jenkins-1.510
chkconfig jenkins off
The OpenShift Jenkins plugin currently requires Jenkins version 1.6 or higher. |
As an example, this additional command will install the cartridges needed for scalable PHP applications that can connect to MySQL:
yum install -y openshift-origin-cartridge-haproxy openshift-origin-cartridge-php openshift-origin-cartridge-mysql
For a complete list of all cartridges that you are entitled to install, you can perform a search using the yum command that will output all OpenShift Origin cartridges.
# yum search origin-cartridge
To install all cartridge RPMs, run:
yum install -y openshift-origin-cartridge-\*
In order to complete the process of registering cartridges, you will need to perform some post-install tasks. This is because the final step requires you to run a command from a running Broker to poll the running Nodes. More on that in the Post-Install Tasks section.
9.8. Start Required Services
The node host will need to allow HTTP, HTTPS, and SSH traffic to flow through the firewall. We also want to ensure that the httpd, network, and sshd services are set to start on boot.
lokkit --service=ssh
lokkit --service=https
lokkit --service=http
lokkit --port=8000:tcp
lokkit --port=8443:tcp
chkconfig httpd on
chkconfig network on
chkconfig sshd on
chkconfig oddjobd on
chkconfig openshift-node-web-proxy on
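A quick way to confirm the firewall and boot settings took effect is to list the open ports and the service runlevels; the grep patterns below are just examples:

iptables -nL INPUT | grep -E 'dpt:(22|80|443|8000|8443)'
chkconfig --list | grep -E 'httpd|sshd|openshift-node-web-proxy'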
10. Configuring Multi-Tenancy on the Node Host
Server used:
-
node host
Tools used:
-
text editor
-
sed
-
restorecon
-
chkconfig
-
service
-
mount
-
quotacheck
-
augtool
This section describes how to configure the node host for multi-tenant gears.
It may be a little surprising that the parameters of a node profile (a.k.a "gear profile" or "gear size") are not actually defined centrally, but rather on each individual node host. The broker knows profiles only as labels (e.g. "small"); a node host must present a profile in order for the broker to place gears for that profile on it. By convention, we expect node hosts to specify resource constraints (on RAM, CPU, etc.) uniformly across the profile, but there is nothing to actually enforce that (other than good sense). It is also perfectly reasonable to partition nodes via multiple profiles with identical resource constraints but different names.
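For illustration, the profile label itself is just a setting in /etc/openshift/resource_limits.conf on each node host. The line below is a sketch; the key name and the surrounding resource limits can differ between releases, so compare against the comments in the file your packages installed:

node_profile=small    # the gear profile label this node host advertises to the broker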
10.1. Install augeas tools
Augeas is a very useful toolset to perform scripted updates to configuration files. Run the following to install it:
yum install -y augeas
10.2. Configure PAM Modules
The pam_namespace PAM module sets up a private namespace for a session with polyinstantiated directories. A polyinstantiated directory provides a different instance of itself based on user name, or when using SELinux, user name, security context or both. OpenShift Origin ships with its own PAM configuration and we need to configure the node to use the configuration.
cat <<EOF | augtool
set /files/etc/pam.d/sshd/#comment[.='pam_selinux.so close should be the first session rule'] 'pam_openshift.so close should be the first session rule'
ins 01 before /files/etc/pam.d/sshd/*[argument='close']
set /files/etc/pam.d/sshd/01/type session
set /files/etc/pam.d/sshd/01/control required
set /files/etc/pam.d/sshd/01/module pam_openshift.so
set /files/etc/pam.d/sshd/01/argument close
set /files/etc/pam.d/sshd/01/#comment 'Managed by openshift_origin'
set /files/etc/pam.d/sshd/#comment[.='pam_selinux.so open should only be followed by sessions to be executed in the user context'] 'pam_openshift.so open should only be followed by sessions to be executed in the user context'
ins 02 before /files/etc/pam.d/sshd/*[argument='open']
set /files/etc/pam.d/sshd/02/type session
set /files/etc/pam.d/sshd/02/control required
set /files/etc/pam.d/sshd/02/module pam_openshift.so
set /files/etc/pam.d/sshd/02/argument[1] open
set /files/etc/pam.d/sshd/02/argument[2] env_params
set /files/etc/pam.d/sshd/02/#comment 'Managed by openshift_origin'
rm /files/etc/pam.d/sshd/*[module='pam_selinux.so']
set /files/etc/pam.d/sshd/03/type session
set /files/etc/pam.d/sshd/03/control required
set /files/etc/pam.d/sshd/03/module pam_namespace.so
set /files/etc/pam.d/sshd/03/argument[1] no_unmount_on_close
set /files/etc/pam.d/sshd/03/#comment 'Managed by openshift_origin'
set /files/etc/pam.d/sshd/04/type session
set /files/etc/pam.d/sshd/04/control optional
set /files/etc/pam.d/sshd/04/module pam_cgroup.so
set /files/etc/pam.d/sshd/04/#comment 'Managed by openshift_origin'
set /files/etc/pam.d/runuser/01/type session
set /files/etc/pam.d/runuser/01/control required
set /files/etc/pam.d/runuser/01/module pam_namespace.so
set /files/etc/pam.d/runuser/01/argument[1] no_unmount_on_close
set /files/etc/pam.d/runuser/01/#comment 'Managed by openshift_origin'
set /files/etc/pam.d/runuser-l/01/type session
set /files/etc/pam.d/runuser-l/01/control required
set /files/etc/pam.d/runuser-l/01/module pam_namespace.so
set /files/etc/pam.d/runuser-l/01/argument[1] no_unmount_on_close
set /files/etc/pam.d/runuser-l/01/#comment 'Managed by openshift_origin'
set /files/etc/pam.d/su/01/type session
set /files/etc/pam.d/su/01/control required
set /files/etc/pam.d/su/01/module pam_namespace.so
set /files/etc/pam.d/su/01/argument[1] no_unmount_on_close
set /files/etc/pam.d/su/01/#comment 'Managed by openshift_origin'
set /files/etc/pam.d/system-auth-ac/01/type session
set /files/etc/pam.d/system-auth-ac/01/control required
set /files/etc/pam.d/system-auth-ac/01/module pam_namespace.so
set /files/etc/pam.d/system-auth-ac/01/argument[1] no_unmount_on_close
set /files/etc/pam.d/system-auth-ac/01/#comment 'Managed by openshift_origin'
save
EOF
cat <<EOF > /etc/security/namespace.d/sandbox.conf
# /sandbox \$HOME/.sandbox/ user:iscript=/usr/sbin/oo-namespace-init root,adm,apache
EOF
cat <<EOF > /etc/security/namespace.d/tmp.conf
/tmp \$HOME/.tmp/ user:iscript=/usr/sbin/oo-namespace-init root,adm,apache
EOF
cat <<EOF > /etc/security/namespace.d/vartmp.conf
/var/tmp \$HOME/.tmp/ user:iscript=/usr/sbin/oo-namespace-init root,adm,apache
EOF
10.3. Enable Control Groups (cgroups)
Cgroups enable you to allocate resources—such as CPU time, system memory, network bandwidth, or combinations of these resources—among user-defined groups of tasks (processes) running on a system. You can monitor the cgroups you configure, deny cgroups access to certain resources, and even reconfigure your cgroups dynamically on a running system.
chkconfig cgconfig on
chkconfig cgred on
service cgconfig restart
service cgred restart
In order for cgroups to work correctly, you need to ensure that services are started in the correct order.
10.4. Configure Disk Quotas
When a consumer of OpenShift Origin creates an application gear, you will need to be able to control and set the amount of disk space that the gear can consume. This configuration is located in the /etc/openshift/resource_limits.conf file. The two settings of interest are quota_files and quota_blocks. The quota_files setting specifies the total number of files that a gear / user is allowed to own. The quota_blocks setting is the actual amount of disk storage that the gear is allowed to consume, where 1 block is equal to 1024 bytes.
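As a worked example of the block arithmetic, allowing each gear 1 GiB of storage at 1024 bytes per block works out as follows (the values are illustrative, not recommendations):

# 1 GiB = 1024 * 1024 * 1024 bytes = 1073741824 bytes
# 1073741824 bytes / 1024 bytes per block = 1048576 blocks
quota_blocks=1048576
quota_files=80000      # example file-count limit for the same gear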
In order to enable usrquota on the filesystem, you will need to add the usrquota option in the /etc/fstab for the mount of /var/lib/openshift. In this chapter, the /var/lib/openshift directory is mounted as part of the root filesystem. The corresponding line in the /etc/fstab file looks like
/dev/mapper/VolGroup-lv_root / ext4 defaults 1 1
In order to add the usrquota option to this mount point, change the entry to the following:
/dev/mapper/VolGroup-lv_root / ext4 defaults,usrquota 1 1
For the usrquota option to take effect, you can reboot the node host or simply remount the filesystem:
mount -o remount /
And then generate user quota info for the mount point:
quotacheck -cmug /
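Once the quota files exist, repquota can confirm that quotas are actually being tracked on the root filesystem; gear users will appear in this report as they are created:

repquota / | head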
10.5. Configure SELinux and System Control Settings
Server used:
-
node host
Tools used:
-
text editor
-
setsebool
-
fixfiles
-
restorecon
-
sysctl
10.5.1. Configuring SELinux
The OpenShift Origin node requires several SELinux Boolean values to be set in order to operate correctly.
Variable Name | Description |
---|---|
httpd_run_stickshift |
Enable passenger-related permissions |
httpd_execmem |
Allow httpd to execute programs that require memory addresses that are both executable and writable |
httpd_unified |
Allow the broker to write files in the "http" file context |
httpd_can_network_connect |
Allow the broker application to access the network |
httpd_can_network_relay |
Allow the SSL termination Apache instance to access the backend Broker application |
httpd_read_user_content |
Allow the node to read application data |
httpd_enable_homedirs |
Allow the node to read application data |
allow_polyinstantiation |
Allow polyinstantiation for gear containment |
To set these values and then relabel files to the correct context, issue the following commands:
setsebool -P httpd_unified=on httpd_can_network_connect=on httpd_can_network_relay=on \
    httpd_read_user_content=on httpd_enable_homedirs=on httpd_run_stickshift=on \
    allow_polyinstantiation=on httpd_execmem=on
restorecon -rv /var/run
restorecon -rv /var/lib/openshift /etc/openshift/node.conf /etc/httpd/conf.d/openshift
10.5.2. Configuring System Control Settings
You will need to modify the /etc/sysctl.conf configuration file to increase the number of kernel semaphores (to allow many httpd processes), increase the number of ephemeral ports, and increase the connection tracking table size. You can either edit the file in a text editor and add the settings to the bottom of the file, or apply them with the following augtool commands:
cat <<EOF | augtool
set /files/etc/sysctl.conf/kernel.sem "250 32000 32 4096"
set /files/etc/sysctl.conf/net.ipv4.ip_local_port_range "15000 35530"
set /files/etc/sysctl.conf/net.netfilter.nf_conntrack_max "1048576"
save
EOF
Once the changes have been made, reload the configuration file.
sysctl -p /etc/sysctl.conf
You may see error messages about unknown keys. Check that these error messages did not result from typos in the settings you have added just now. If they result from settings that were already present in /etc/sysctl.conf, you can ignore them.
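To spot-check that the new values are active, query them directly:

sysctl kernel.sem net.ipv4.ip_local_port_range net.netfilter.nf_conntrack_max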
10.6. Configure SSH, OpenShift Port Proxy, and Node Configuration
Server used:
-
node host
Tools used:
-
text editor
-
perl
-
lokkit
-
chkconfig
-
service
-
openshift-facts
10.6.1. Configuring SSH to Pass Through the GIT_SSH Environment Variable
Edit the /etc/ssh/sshd_config file and add the following line:
cat <<EOF >> /etc/ssh/sshd_config
AcceptEnv GIT_SSH
EOF
When a developer pushes a change up to their OpenShift Origin gear, an SSH connection is created. Because this may result in a high number of connections, you need to increase the limit of the number of connections allowed to the node host.
cat <<EOF | augtool
set /files/etc/ssh/sshd_config/MaxSessions 40
save
EOF
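Note that sshd only reads its configuration at start-up, so you will likely need to restart it for the AcceptEnv and MaxSessions changes to take effect; the second command below dumps the effective configuration so you can confirm both settings:

service sshd restart
sshd -T | grep -iE 'acceptenv|maxsessions'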
10.7. Initialize Traffic Control
Configure traffic control to measure and control the amount of outgoing and incoming traffic used by each gear:
chkconfig openshift-tc on
10.7.1. Configuring the Port Proxy
Multiple application gears can and will reside on the same node host. In order for these applications to receive HTTP requests to the node, you need to configure a proxy that will pass traffic to the gear application that is listening for connections on the loopback address. To do this, you need to open up a range of ports that the node can accept traffic on as well as ensure the port-proxy is started on boot.
lokkit --port=35531-65535:tcp chkconfig openshift-port-proxy on service openshift-port-proxy start
If a node is restarted, you want to ensure that the gear applications are also restarted. OpenShift Origin provides a script to accomplish this task, but you need to configure the service to start on boot.
chkconfig openshift-gears on
10.7.2. Configuring Node Settings for Domain Name
Edit the /etc/openshift/node.conf file and specify the correct settings for your CLOUD_DOMAIN, PUBLIC_HOSTNAME, and BROKER_HOST IP address. For example, on a dedicated node host:
PUBLIC_HOSTNAME="node.example.com"     # The node host's public hostname
PUBLIC_IP="10.4.59.y"                  # The node host's public IP address
BROKER_HOST="broker.example.com"       # IP or DNS name of broker host for REST API
EXTERNAL_ETH_DEV='enp0s5'              # Update to match name of external network device
On an all-in-one host, where the broker and node are the same machine, the values look like this instead:
PUBLIC_HOSTNAME="broker.example.com"   # The node host's public hostname
PUBLIC_IP="10.4.59.x"                  # The node host's public IP address
BROKER_HOST="broker.example.com"       # IP or DNS name of broker host for REST API
EXTERNAL_ETH_DEV='enp0s5'              # Update to match name of external network device
Ensure that EXTERNAL_ETH_DEV and PUBLIC_IP have accurate values, or the node will be unable to create gears. |
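The openshift-origin-node-util package installed earlier provides the oo-accept-node utility, which can catch many node configuration mistakes, including bad node.conf values. Running it here is a cheap sanity check, though the exact checks it performs vary by release:

oo-accept-node -v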
10.8. Update login.defs
Update the minimum UID and GID for the machine to match GEAR_MIN_UID from node.conf. This value is 500 by default.
cat <<EOF | augtool set /files/etc/login.defs/UID_MIN 500 set /files/etc/login.defs/GID_MIN 500 save EOF
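You can confirm the change with a quick grep:

grep -E '^(UID_MIN|GID_MIN)' /etc/login.defs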
11. Testing the Configuration
If everything to this point has been completed successfully, you can now test your deployment of OpenShift Origin. To run a test, first set up an SSH tunnel to enable communication with the broker and node hosts. This will allow you to connect to localhost on your desktop machine and forward all traffic to your OpenShift Origin installation. In a later section, you will update your local machine to point directly to your DNS server, but for now an SSH tunnel will suffice.
You can also just use the IP address of your broker node instead of using port forwarding. |
On your local machine, issue the following command, replacing the IP address with the IP address of your broker node:
sudo ssh -f -N -L 80:broker.example.com:80 -L 8161:broker.example.com:8161 -L 443:broker.example.com:443 [email protected]
We have to use the sudo command in order to allow forwarding of low-numbered (privileged) ports. Once you have entered the above command and authenticated correctly, you should be able to view the web console by pointing your local browser to:
http://127.0.0.1
Depending on your browser settings, you may have to accept the SSL certificate before the console loads.
Once you have accepted and added the SSL certificate, you will be prompted to authenticate to the OpenShift console. Use the credentials that we created in a previous chapter, which should be:
-
Username: demo
-
Password: demo
After you have authenticated, you should be presented with the OpenShift web console.
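If you prefer a command-line check over the browser, the broker's REST API answers over the same tunnel; the example below assumes the demo/demo account created earlier in this guide:

curl -k -u demo:demo https://127.0.0.1/broker/rest/api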
12. Post-Install Tasks
Now that you have a correctly configured installation, there are two important final steps to be done. These steps assume that your Broker and Node(s) are running and able to communicate with each other as described in this guide.
12.1. Add Nodes to Districts
As of OpenShift Origin v4, Nodes must belong to a district in order to work properly. Adding a Node to a district after the Node already has hosted apps running on it is very difficult, so this is a very important pre-deployment task. For a discussion of what districts are, refer to the Administration Guide.
Define one or more districts
From a Broker host, run the following command to define a new district:
$ oo-admin-ctl-district -c create -n <district_name> -p <district_gear_size>
Add one or more Nodes to the district(s)
To perform a blanket assignment of all Nodes to a district, run:
$ oo-admin-ctl-district -c add-node -n <district_name> -a
Otherwise add Nodes one at a time with:
$ oo-admin-ctl-district -c add-node -n <district_name> -i <node_server_name>
12.2. Register Cartridges with the Broker
From a Broker host, run the following command to poll a Node for available cartridge information:
$ oo-admin-ctl-cartridge -c import-node --activate
This will automatically register the cartridges with the Broker and make them available to users for new hosted applications.
13. Configuring local machine for DNS resolution
Server used:
-
local machine
Tools used:
-
text editor
-
networking tools
At this point, you should have a complete and correctly functioning OpenShift Origin installation. To make administering and using the OpenShift Origin PaaS easier, it is suggested that you add the DNS server created earlier in this guide as the first nameserver that your local machine uses to resolve hostnames. The process for this varies depending on the operating system. This guide covers the configuration for both Linux and OS X; if you are using a Microsoft Windows operating system, consult your operating system's documentation for changing DNS servers.
13.1. Configure example.com resolution for Linux
If you are using Linux, the process for updating your name server is straightforward. Simply edit the /etc/resolv.conf configuration file and add the IP address of your broker node as the first entry. For example, add the following at the top of the file, replacing the 10.4.59.x IP address with the correct address of your broker node:
nameserver 10.4.59.x
Once you have added the above nameserver, you should be able to communicate with your OpenShift Origin PaaS by using the server hostname. To test this out, ping the broker and node hosts from your local machine:
$ ping broker.example.com
$ ping node.example.com
13.2. Configure example.com resolution for OS X
If you are using OS X, you will notice that the operating system has an /etc/resolv.conf configuration file. However, the operating system does not respect this file and requires users to edit the DNS servers via the System Preferences tool.
Open up the System Preferences tool and select the Network utility.
On the bottom left hand corner of the Network utility, ensure that the lock button is unlocked to enable user modifications to the DNS configuration. Once you have unlocked the system for changes, locate the Ethernet device that is providing connectivity for your machine and click the Advanced button.
Select the DNS tab at the top of the window.
Make a list of the current DNS servers that you have configured for your operating system. When you add a new one, OS X removes the existing servers forcing you to add them back. |
Click the + button to add a new DNS server and enter the 10.4.59.x IP address of your broker host.
Add your existing nameservers back that you made a note of above. |
After you have applied the changes, test that name resolution is working correctly. To do so, ping the broker and node hosts from your local machine:
$ ping broker.example.com
$ ping node.example.com
14. Appendix: High Availability Deployments
There are a number of ways to deploy OpenShift components in High Availability configurations. The OpenShift Origin Puppet module, in concert with the oo-install utility, uses the approach described here. For more hands-on information, look to the documentation of these components for specific deployment and configuration steps.
14.1. Broker Load-Balancing
Broker clustering is accomplished in the Puppet module by using HAProxy and a virtual host / IP address as a front for an arbitrary number of Broker instances. One Broker host also serves as the HAProxy host, and calls to the "virtual Broker" are resolved here and directed via HAProxy to one of the real Brokers. Be aware, however, that the HAProxy instance in this scenario represents a single point of failure. As a failsafe, individual Broker instances can still be addressed using their own hostnames and IP addresses.
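As a rough sketch of the idea (not the configuration the Puppet module actually generates), an HAProxy front end for three brokers could look like the following, with the virtual Broker hostname resolving to the HAProxy host; the hostnames here are placeholders:

defaults
    mode    tcp
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend broker_ssl
    bind 0.0.0.0:443
    default_backend brokers

backend brokers
    balance roundrobin
    server broker1 broker1.example.com:443 check
    server broker2 broker2.example.com:443 check
    server broker3 broker3.example.com:443 check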
14.2. Datastore Replicasets
The backend MongoDB instance that the Broker uses to manage hosted applications can be replicated using MongoDB’s native support. One MongoDB service is identified as the master instance and the others must be registered to it as slave instances.
In the Broker config file, located on each Broker host at /etc/openshift/broker.conf, the following two settings must be modified to make OpenShift aware of the replication:
MONGO_REPLICA_SETS=true (1)
MONGO_HOST_PORT="db1.domain.com:27017,db2.domain.com:27017,db3.domain.com:27017" (2)
1 | Change this from false to true |
2 | Change this from a single host:port value to a comma-delimited list |
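To confirm the replica set is healthy from MongoDB's side, you can query its status from any Broker host; this assumes the member hostnames shown above and that your deployment allows the connection (add credentials if your MongoDB requires authentication):

mongo --host db1.domain.com --eval 'printjson(rs.status())'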
14.3. Message Server Failover
For message server redundancy, OpenShift makes use of ActiveMQ’s native clustering capability. This does not require any special configuration in OpenShift. Refer to ActiveMQ’s documentation for details on setting this up.