This guide walks you through using Puppet to install and configure OpenShift Origin.
Configuring Puppet for a complex deployment?
OpenShift is, by its nature, a multi-host system. While Puppet works well on a host-by-host basis, synchronizing the deployment of OpenShift across even two hosts requires some extra work that Puppet alone cannot do. If you are interested in a multi-host deployment, be aware that oo-install can help you. Using the oo-install utility, you can either:
- Have oo-install generate your per-host Puppet config files for you, or
- Automatically perform the whole deployment
So even if you ultimately want to run Puppet by hand, consider using oo-install to help you generate your Puppet config files. You can learn more about it at install.openshift.com.
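If you want to try oo-install, here is a rough sketch of launching the interactive installer; the exact invocation may change between releases, so treat this as an assumption and check install.openshift.com for the current instructions:
$ sh <(curl -s https://install.openshift.com/)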
Supported Operating Systems
OpenShift Origin M4, and this Puppet module by extension, are only supported on Red Hat Enterprise Linux 6 (and compatible derivatives) and Fedora 19.
If you need to use different or more current RPM-based Linux distributions, have a look at OpenShift 3.
1. System Prerequisites
Before you can successfully run a Puppet deployment, you will need to install the puppet RPM, and possibly also the bind RPM, on your target system. The PuppetLabs site has more information on installing puppet from the PuppetLabs repo, and bind can be installed using the following command:
$ yum install bind -y
The bind installation is only necessary if you want the OpenShift host to act as the nameserver for OpenShift-hosted applications.
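For the puppet RPM itself, a minimal sketch of installing it from the PuppetLabs repository on an EL6 host; the release-RPM URL is an assumption based on PuppetLabs' naming at the time, so consult the PuppetLabs site for the current package:
$ rpm -ivh https://yum.puppetlabs.com/puppetlabs-release-el-6.noarch.rpm
$ yum install -y puppet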
1.1. "Optional" Repository for RHEL
If you are deploying to a Red Hat Enterprise Linux host, you will also need to enable the Optional repository for some necessary RPMs:
- Via yum-config-manager:
  yum-config-manager --enable rhel-6-server-optional-rpms
- Via RHN Classic: enable the rhel-x86_64-server-optional-6 channel
1.2. Installing the OpenShift Origin Puppet Module
Once puppet has been installed on the target system, you can install the OpenShift Origin module by running this command:
$ puppet module install openshift/openshift_origin
If you would like to work from the puppet module source instead, you can clone the puppet-openshift_origin repository into the target system as follows:
$ git clone https://github.com/openshift/puppet-openshift_origin.git /etc/puppet/modules/openshift_origin
1.3. Generating a BIND TSIG Key
If you want OpenShift to manage DNS for hosted applications, you will need to generate a TSIG key for the OpenShift bind instance. This key will be used to update DNS records in the BIND server that will be installed, both for managing application DNS and (by default) for creating host DNS records:
# Using example.com as the cloud domain
$ /usr/sbin/dnssec-keygen -a HMAC-MD5 -b 512 -n USER -r /dev/urandom -K /var/named example.com
$ cat /var/named/Kexample.com.*.key | awk '{print $8}'
The TSIG key should look like CNk+wjszKi9da9nL/1gkMY7H+GuUng==. We will use this in the following steps.
1.4. Update the Hostname
You may also want to change the hostname of the target system before you deploy; here’s how.
Choose a hostname and substitute it where needed for "broker.example.com" below. This sets the host’s name locally, not in DNS. For nodes, this is used as the server identity. Generally it is best to use a name that matches how the host will resolve in DNS.
$ cat <<EOF > /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=broker.example.com
EOF
$ hostname broker.example.com
2. Puppet Configurations
With Puppet installed, you can now create a file containing Puppet's installation parameters for this host. This file will define one class (openshift_origin) that tells Puppet which OpenShift components to install and configure on the host. If you are new to Puppet, you can learn more about how this works in the PuppetLabs documentation.
For all of these examples, the indicated configuration is written to a file called configure_origin.pp. The puppet utility is then run as follows:
puppet apply --verbose configure_origin.pp
The deployment process may take up to an hour. After it is completed, refer to the Post-Install Steps below for information on how to finish the OpenShift Origin setup.
2.1. All-in-One
This configuration will deploy the entire OpenShift system on a single host. This configuration includes an OpenShift-hosted DNS server that handles both the hosted application domain and a separate zone for the OpenShift host.
class { 'openshift_origin' :
  roles => ['msgserver','datastore','nameserver','broker','node'],
  # Hostname values (all identical)
  broker_hostname     => 'broker.openshift.local',
  datastore_hostname  => 'broker.openshift.local',
  msgserver_hostname  => 'broker.openshift.local',
  nameserver_hostname => 'broker.openshift.local',
  node_hostname       => 'broker.openshift.local',
  # IP address values (all identical)
  broker_ip_addr      => '10.10.10.24',
  nameserver_ip_addr  => '10.10.10.24',
  node_ip_addr        => '10.10.10.24',
  conf_node_external_eth_dev => 'eth0',
  # RPM sources
  install_method      => 'yum',
  repos_base          => 'https://mirror.openshift.com/pub/origin-server/release/4/rhel-6',
  jenkins_repo_base   => 'http://pkg.jenkins-ci.org/redhat',
  optional_repo       => 'http://download.fedoraproject.org/pub/epel/6/$basearch',
  # OpenShift Config
  domain              => 'example.com',
  openshift_user1     => 'demo',
  openshift_password1 => 'sMwNUIUqRkV9he1zRfFiAA',
  conf_valid_gear_sizes          => 'small,medium,large',
  conf_default_gear_capabilities => 'small,medium',
  conf_default_gear_size         => 'small',
  # Datastore config
  mongodb_port            => 27017,
  mongodb_replicasets     => false,
  mongodb_broker_user     => 'openshift',
  mongodb_broker_password => '9Km0vPS9U9v0h5IowgCyw',
  mongodb_admin_user      => 'admin',
  mongodb_admin_password  => 'NnZqfvTetXoSqfEWaYNzw',
  # MsgServer config
  msgserver_cluster    => false,
  mcollective_user     => 'mcollective',
  mcollective_password => 'pv5bDYXFDkYSLRdI5ywQ',
  # DNS config
  dns_infrastructure_zone  => 'openshift.local',
  dns_infrastructure_names => [
    { hostname => 'broker.openshift.local', ipaddr => '10.10.10.24' },
  ],
  dns_infrastructure_key => 'UjCNCJgnqJPx6dFaQcWVwDjpEAGQY4Sc2H/llwJ6Rt+0iN8CP0Bm5j5pZsvvhZq7mxx7/MdTBBMWJIA9/yLQYg==',
  bind_key => 'SgUfFVngIN3M2MfmYpfybJGr0VJ8ldBxY3/xtEQLwBSnJZjCmAeudf0cfmPVPSPYgV8657mDFDOg9KPIyyztzw==',
}
2.2. Two Hosts: "Basic" Broker and Node
This configuration puts most of the OpenShift components on one host, but configures a second host as a dedicated node. This is a good template for a basic production-capable OpenShift deployment as you can add Node hosts as needed to increase capacity.
2.2.1. Broker Host Configuration
class { 'openshift_origin' :
  roles => ['msgserver','datastore','nameserver','broker'],
  # Hostname values
  broker_hostname     => 'broker.openshift.local',
  datastore_hostname  => 'broker.openshift.local',
  nameserver_hostname => 'broker.openshift.local',
  msgserver_hostname  => 'broker.openshift.local',
  node_hostname       => 'node.openshift.local',
  # IP address values
  broker_ip_addr      => '10.10.10.24',
  nameserver_ip_addr  => '10.10.10.24',
  node_ip_addr        => '10.10.10.27',
  conf_node_external_eth_dev => 'eth0',
  # RPM Sources
  install_method      => 'yum',
  repos_base          => 'https://mirror.openshift.com/pub/origin-server/release/4/rhel-6',
  jenkins_repo_base   => 'http://pkg.jenkins-ci.org/redhat',
  optional_repo       => 'http://download.fedoraproject.org/pub/epel/6/$basearch',
  # OpenShift Config
  domain              => 'example.com',
  conf_valid_gear_sizes          => 'small,medium,large',
  conf_default_gear_capabilities => 'small,medium',
  conf_default_gear_size         => 'small',
  openshift_user1     => 'demo',
  openshift_password1 => 'IZPmHrdxOgqjqB0TMNDGQ',
  # Datastore Config
  mongodb_port            => 27017,
  mongodb_replicasets     => false,
  mongodb_broker_user     => 'openshift',
  mongodb_broker_password => 'brFZGRCiOlmAqrMbj0OYgg',
  mongodb_admin_user      => 'admin',
  mongodb_admin_password  => 'BbMsrtPxsmSi5SY1zerN5A',
  # MsgServer config
  msgserver_cluster    => false,
  mcollective_user     => 'mcollective',
  mcollective_password => 'eLMRLsAcytKAJmuYOPE6Q',
  # DNS Config
  dns_infrastructure_zone  => 'openshift.local',
  dns_infrastructure_names => [
    { hostname => 'broker.openshift.local', ipaddr => '10.10.10.24' },
    { hostname => 'node.openshift.local',   ipaddr => '10.10.10.27' },
  ],
  bind_key => 'yV9qIn/KuCqvnu7SNtRKU3oZQMMxF1ET/GjkXt5pf5JBcHSKY8tqRagiocCbUX56GOM/iuP//D0TteLc3f1N2g==',
  dns_infrastructure_key => 'UjCNCJgnqJPx6dFaQcWVwDjpEAGQY4Sc2H/llwJ6Rt+0iN8CP0Bm5j5pZsvvhZq7mxx7/MdTBBMWJIA9/yLQYg==',
}
2.2.2. Node Host Configuration
class { 'openshift_origin' :
  roles => ['node'],
  # Hostname values
  broker_hostname     => 'broker.openshift.local',
  datastore_hostname  => 'broker.openshift.local',
  msgserver_hostname  => 'broker.openshift.local',
  nameserver_hostname => 'broker.openshift.local',
  node_hostname       => 'node.openshift.local',
  # IP Address values
  broker_ip_addr      => '10.10.10.24',
  nameserver_ip_addr  => '10.10.10.24',
  node_ip_addr        => '10.10.10.27',
  conf_node_external_eth_dev => 'eth0',
  # RPM Sources
  install_method      => 'yum',
  repos_base          => 'https://mirror.openshift.com/pub/origin-server/release/4/rhel-6',
  jenkins_repo_base   => 'http://pkg.jenkins-ci.org/redhat',
  optional_repo       => 'http://download.fedoraproject.org/pub/epel/6/$basearch',
  # OpenShift Config
  domain              => 'example.com',
  openshift_user1     => 'demo',
  openshift_password1 => 'IZPmHrdxOgqjqB0TMNDGQ',
  conf_valid_gear_sizes          => 'small,medium,large',
  conf_default_gear_capabilities => 'small,medium',
  conf_default_gear_size         => 'small',
  # Datastore config
  mongodb_port            => 27017,
  mongodb_replicasets     => false,
  mongodb_broker_user     => 'openshift',
  mongodb_broker_password => 'brFZGRCiOlmAqrMbj0OYgg',
  mongodb_admin_user      => 'admin',
  mongodb_admin_password  => 'BbMsrtPxsmSi5SY1zerN5A',
  # MsgServer Config
  mcollective_user     => 'mcollective',
  mcollective_password => 'eLMRLsAcytKAJmuYOPE6Q',
  # DNS Config
  bind_key => 'yV9qIn/KuCqvnu7SNtRKU3oZQMMxF1ET/GjkXt5pf5JBcHSKY8tqRagiocCbUX56GOM/iuP//D0TteLc3f1N2g==',
  dns_infrastructure_key => 'UjCNCJgnqJPx6dFaQcWVwDjpEAGQY4Sc2H/llwJ6Rt+0iN8CP0Bm5j5pZsvvhZq7mxx7/MdTBBMWJIA9/yLQYg==',
}
2.3. Different plugins: Kerberos auth and DNS
This example uses Kerberos for user authentication, and a Kerberos keytab for making authenticated updates to a remote nameserver.
class { 'openshift_origin' :
  # Other settings as appropriate per above examples
  ...
  # broker authenticates updates to BIND server with keytab
  broker_dns_plugin   => 'nsupdate',
  named_ip_addr       => '<BIND server IP address>',
  bind_krb_principal  => $hostname,
  bind_krb_keytab     => '/etc/dns.keytab',
  register_host_with_nameserver => true,
  # authenticate OpenShift users with kerberos
  broker_auth_plugin       => 'kerberos',
  broker_krb_keytab        => '/etc/http.keytab',
  broker_krb_auth_realms   => 'EXAMPLE.COM',
  broker_krb_service_name  => $hostname,
}
Please note:
- The Broker needs to be enrolled in the KDC as a host (host/node_fqdn) as well as a service (HTTP/node_fqdn).
- The keytab should be generated and placed on the Broker machine, and Apache should be able to access it (chown apache <kerberos_keytab>).
- As in the example config above:
  - set broker_auth_plugin to 'kerberos'
  - set broker_krb_keytab and bind_krb_keytab to the absolute file location of the keytab
  - set broker_krb_auth_realms to the Kerberos realm that the Broker host is enrolled with
  - set broker_krb_service_name to the FQDN of the enrolled Kerberos service, e.g. $hostname
- After setup, to test:
  - authentication: kinit <user>, then curl -Ik --negotiate -u : <node_fqdn>
  - GSS-TSIG (should return nil): use the Rails console on the broker to access the DNS plugin and test that it creates application records.
# cd /var/www/openshift/broker
# scl enable ruby193 bash   # (needed for Enterprise Linux only)
# bundle --local
# rails console
# d = OpenShift::DnsService.instance
# d.register_application "appname", "namespace", "node_fqdn"
=> nil
For any errors, on the Broker, check /var/log/openshift/broker/httpd/error_log.
2.4. High Availability Deployments
The broker, msgserver and datastore roles can be deployed in high availability (HA) configurations.
2.4.1. HA Broker
Broker clustering is accomplished by using HAProxy and a virtual host / IP address as a front for an arbitrary number of Broker instances. This means that you will assign an additional IP address to OpenShift to serve as the virtual IP address and that all OpenShift hosts will use this virtual broker hostname and IP address to communicate with the Broker cluster.
In addition to selecting a virtual broker hostname and IP address, you must elect a Broker host to additionally serve the "load_balancer" role. This Broker will host the HAProxy service that front-ends the cluster.
Here are the variations that you will need to make to your basic configurations:
class { 'openshift_origin' :
  # Other settings as appropriate per above examples
  ...
  # Use the *virtual* broker info for these values on hosts that are not Brokers
  broker_hostname => 'virtbroker.openshift.local',
  broker_ip_addr  => '10.10.20.250',
  ...
}
class { 'openshift_origin' :
  # Other settings as appropriate per above examples
  ...
  # One and only one of the brokers must include the load_balancer role
  roles => [...,'load_balancer'],
  # Use the actual target host info for these values on hosts that are Brokers
  broker_hostname => <target_hostname>,
  broker_ip_addr  => <target_ip_addr>,
  # Provide the cluster info
  broker_cluster_members      => ['broker1.openshift.local','broker2.openshift.local','broker3.openshift.local'],
  broker_cluster_ip_addresses => ['10.10.20.24','10.10.20.25','10.10.20.26'],
  broker_virtual_hostname     => 'virtbroker.openshift.local',
  broker_virtual_ip_address   => '10.10.20.250',
  # Indicate if this Broker is also the load balancer; if this host includes
  # the load_balancer role, then set this to 'true'
  load_balancer_master => true|false,
  ...
}
…And if OpenShift is also handling DNS for you, use this info on the host where you are deploying the nameserver role:
class { 'openshift_origin' :
  # Other settings as appropriate per above examples
  ...
  # Use the *virtual* broker info for these values
  broker_virtual_hostname   => 'virtbroker.openshift.local',
  broker_virtual_ip_address => '10.10.20.250',
  # Additionally if you are using this nameserver to serve the domain for
  # OpenShift host systems, include the virtual host info in the infrastructure
  # list:
  dns_infrastructure_names => [
    ...
    { hostname => 'virtbroker.openshift.local', ipaddr => '10.10.20.250' },
  ],
  ...
}
2.4.2. HA Datastore
The OpenShift Origin puppet module will configure multiple datastore instances into a MongoDB replica set.
Hosts that will include the Broker role should have this additional information:
class { 'openshift_origin' :
  # Other settings as appropriate per above examples
  ...
  # Include the MongoDB replica set information for Brokers
  mongodb_replicasets         => true,
  mongodb_replicasets_members => ['10.10.20.30:27017','10.10.20.31:27017','10.10.20.32:27017'],
}
Hosts that include the datastore role should have this information:
class { 'openshift_origin' :
  # Other settings as appropriate per above examples
  ...
  # Set the datastore_hostname value to the current datastore host's name
  datastore_hostname => <this_datastore_hostname>,
  # Include the MongoDB replica set information
  mongodb_replicasets         => true,
  mongodb_replicasets_members => ['10.10.20.30:27017','10.10.20.31:27017','10.10.20.32:27017'],
  # One and only one of the datastore hosts will be the primary
  mongodb_replica_primary => true|false,
  # All datastore hosts will know the primary's IP address and a
  # common replica key value
  mongodb_replica_primary_ip_addr => <primary_datastore_ip_addr>,
  mongodb_key                     => <replica_key_value>,
}
2.4.3. HA MsgServer
For message server redundancy, OpenShift makes use of ActiveMQ’s native clustering capability.
class { 'openshift_origin' :
  # Other settings as appropriate per above examples
  ...
  # Set the msgserver_hostname to the current msgserver host
  msgserver_hostname => <this_msgserver_hostname>,
  # Set the shared password that the cluster members will use
  msgserver_password => <shared_cluster_password>,
  # Specify the hostnames of all of the cluster members.
  msgserver_cluster         => true,
  msgserver_cluster_members => ['msgserver1.openshift.local','msgserver2.openshift.local','msgserver3.openshift.local'],
}
Brokers and Nodes need some of this information as well, even if they are not MsgServer hosts:
class { 'openshift_origin' :
  # Other settings as appropriate per above examples
  ...
  # Specify the hostnames of the msgserver cluster members.
  msgserver_cluster         => true,
  msgserver_cluster_members => ['msgserver1.openshift.local','msgserver2.openshift.local','msgserver3.openshift.local'],
}
3. Manual Tasks
This script attempts to automate as many tasks as it reasonably can, but it is constrained to setting up only a single host at a time. In a multi-host setup, you will need to do the following after the script has completed.
- Set up DNS entries for hosts.
  If you installed BIND with the script, then any other components installed with the script on the same host received DNS entries. Other hosts must all be defined manually, including at least your node hosts. oo-register-dns may prove useful for this (see the sketch after this list).
- Copy the public rsync key to enable moving gears.
  The broker rsync public key needs to go on nodes, but there is no good way to script that generically. Nodes should not have password-less access to brokers to copy the .pub key, so this must be performed manually on each node host:
  # scp root@broker:/etc/openshift/rsync_id_rsa.pub /root/.ssh/
  (the above step will ask for the root password of the broker machine)
  # cat /root/.ssh/rsync_id_rsa.pub >> /root/.ssh/authorized_keys
  # rm /root/.ssh/rsync_id_rsa.pub
If you skip this, each gear move will require typing root passwords for each of the node hosts involved.
- Copy ssh host keys between the node hosts.
  All node hosts should identify with the same host keys, so that when gears are moved between hosts, ssh and git don’t give developers spurious warnings about the host keys changing. So, copy /etc/ssh/ssh_* from one node host to all the rest (or, if using the same image for all hosts, just keep the keys from the image).
- Perform the Post-Install tasks.
  The Comprehensive Deployment guide includes information on the steps needed to complete a new OpenShift Origin deployment. These steps include creating districts and registering gear types, and they are necessary for the proper operation of the system.
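As referenced in the DNS entry task above, here is a minimal sketch of registering a node host with oo-register-dns. The key file path is an assumption carried over from the TSIG key step earlier in this guide; verify the options against oo-register-dns --help on your release:
# oo-register-dns -h node -d example.com -n 10.10.10.27 -k /var/named/example.com.key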
4. Puppet Parameters
An exhaustive list of the parameters you can specify in the Puppet configuration follows.
Passwords are used to secure various services. You are advised to specify only alphanumeric values in this configuration, as other characters may cause syntax errors depending on context. If non-alphanumeric values are required, update them separately after installation.
4.1. roles
Choose from the following roles to be configured on this node.
- broker - Installs the broker and console.
- node - Installs the node and cartridges.
- msgserver - Installs ActiveMQ message broker.
- datastore - Installs MongoDB (not sharded/replicated).
- nameserver - Installs a BIND DNS server configured with a TSIG key for updates.
- load_balancer - Installs HAProxy and Keepalived for Broker API high-availability.
Default: [broker,node,msgserver,datastore,nameserver]
Multiple servers are required when using the load_balancer role.
4.2. install_method
Choose from the following ways to provide packages:
- none - install sources are already set up when the script executes
- yum - set up yum repos manually, using the repository parameters below:
  - repos_base
  - os_repo
  - os_updates_repo
  - jboss_repo_base
  - jenkins_repo_base
  - optional_repo
Default: yum
4.3. parallel_deployment
This flag is used to control some module behaviors when an outside utility (like oo-install) is managing the deployment of OpenShift across multiple hosts simultaneously. Some configuration tasks can't be performed during a multi-host parallel installation, and this Boolean enables the user to indicate whether or not those tasks should be attempted.
Default: false
4.4. repos_base
Base path to the yum repository for OpenShift Origin.
Nightlies: https://mirror.openshift.com/pub/origin-server/nightly/rhel-6
Release (currently v4): https://mirror.openshift.com/pub/origin-server/release/4/rhel-6
Default: Nightlies
4.5. architecture
CPU architecture to use in the definition of the OpenShift Origin yum repositories.
Default: $::architecture fact
Currently only the x86_64 architecture is supported.
4.6. override_install_repo
Repository path override. Uses dependencies from repos_base but uses override_install_repo path for OpenShift RPMs. Used when doing local builds.
Default: none
4.7. os_repo
The URL for a Fedora 19/RHEL 6 yum repository used with the "yum" install method. Should end in x86_64/os/.
Default: no change
4.8. os_updates_repo
The URL for a Fedora 19/RHEL 6 yum updates repository used with the "yum" install method. Should end in x86_64/.
Default: no change
4.9. jboss_repo_base
The URL for the JBoss repositories used with the "yum" install method. The repository is not installed if this is not specified.
4.10. jenkins_repo_base
The URL for the Jenkins repository used with the "yum" install method. The repository is not installed if this is not specified.
4.11. optional_repo
The URL for an EPEL or optional repository used with the "yum" install method. The repository is not installed if this is not specified.
4.17. datastore_hostname
These supply the FQDN of the hosts containing these components. They are used for configuring the host’s name at install time, and also for configuring the broker application to reach the services it needs.
Default: the component's short name plus the domain, e.g. broker.example.com (except the nameserver, which defaults to ns1.example.com)
If installing a nameserver, the script will create DNS entries for the hostnames of the other components being installed on this host as well. If you are using a nameserver set up separately, you are responsible for all necessary DNS entries.
4.18. datastore1_ip_addr|datastore2_ip_addr|datastore3_ip_addr
Default: undef
IP addresses of the first 3 MongoDB servers in a replica set. Add datastoreX_ip_addr parameters for larger clusters.
4.19. nameserver_ip_addr
IP of a nameserver instance or current IP if installing on this node. This is used by every node to configure its primary name server.
Default: the current IP (at install)
4.20. bind_key
When the nameserver is remote, use this to specify the key for updates. This is the "Key:" field from the .private key file generated by dnssec-keygen. This field is required on all nodes.
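As a minimal sketch (assuming the key file location from the TSIG key step earlier in this guide), you can pull the value directly from the .private file:
$ awk '/^Key:/ {print $2}' /var/named/Kexample.com.*.private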
4.21. bind_key_algorithm
When using a BIND key, use this algorithm for the BIND key.
Default: HMAC-MD5
4.22. bind_krb_keytab
When the nameserver is remote, Kerberos keytab together with principal can be used instead of the dnssec key for updates.
4.23. bind_krb_principal
When the nameserver is remote, this Kerberos principal together with Kerberos keytab can be used instead of the dnssec key for updates.
Example: DNS/[email protected]
4.24. aws_access_key_id
This and the next value are Amazon AWS security credentials. The aws_access_key_id is a string which identifies an access credential.
4.25. aws_secret_key
This is the secret portion of AWS Access Credentials indicated by the aws_access_key_id
4.26. aws_zone_id
This is the ID string for an AWS Hosted zone which will contain the OpenShift application records.
4.27. conf_nameserver_upstream_dns
List of upstream DNS servers to use when installing a nameserver on this node.
Default: [8.8.8.8]
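A minimal sketch of overriding the upstream servers in configure_origin.pp (the addresses are placeholders):
class { 'openshift_origin' :
  ...
  conf_nameserver_upstream_dns => ['8.8.8.8', '8.8.4.4'],
}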
4.28. broker_ip_addr
This is used by the node to record its broker. It is also the default for the nameserver IP if none is given.
Default: the current IP (at install)
4.29. broker_cluster_members
An array of broker hostnames that will be load-balanced for high-availability.
Default: undef
4.30. broker_cluster_ip_addresses
An array of Broker IP addresses within the load-balanced cluster.
Default: undef
4.31. broker_virtual_ip_address
The virtual IP address that will front-end the Broker cluster.
Default: undef
4.32. broker_virtual_hostname
The hostname that represents the Broker API cluster. This name is associated with broker_virtual_ip_address and added to named for DNS resolution.
Default: "broker.${domain}"
4.33. load_balancer_master
Sets the state of the load-balancer. Valid options are true or false. true sets the load-balancer as the active listener for the Broker cluster Virtual IP address. Only 1 load_balancer_master is allowed within a Broker cluster.
Default: false
4.34. load_balancer_auth_password
The password used to secure communication between the load-balancers within a Broker cluster.
Default: changeme
4.35. node_ip_addr
This is used to give the node a public IP, if it is different from the one on its NIC.
Default: the current IP (at install)
4.36. Node Resource Limits
The following resource limits must be the same within a given district.
4.36.3. node_quota_blocks
The max storage capacity allowed in each gear (1 block = 1024 bytes)
Default: 1048576
4.36.4. node_max_active_gears
max_active_gears is used for limiting/guiding gear placement. For no over-commit, this should be (Total System Memory - 1G) / memory_limit_in_bytes.
Default: 100
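As an illustrative calculation (the figures are assumptions, not recommendations): on a node with 9 GiB of RAM and gears limited to 512 MiB each, (9 GiB - 1 GiB) / 512 MiB = 16, so max_active_gears would be set to 16 to avoid over-committing memory.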
4.36.5. node_no_overcommit_active
no_overcommit_active enforces max_active_gears in a more stringent manner than normal; however, it also adds overhead to gear creation, so it should only be set to true when needed, such as when enforcing single tenancy on a node.
Default: false
4.36.7. node_tc_max_bandwidth
Total bandwidth allowed for OpenShift, in Mbit/sec.
If TRAFFIC_CONTROL_DEVS includes the loopback interface (lo), this setting will be ignored only for that interface. For example, with a setting of TRAFFIC_CONTROL_DEVS="eth0 lo", the node_tc_max_bandwidth setting would apply to eth0 but not to lo.
Default: 800
4.37. configure_ntp
Enabling this configures NTP. It is important that the time be synchronized across hosts because MCollective messages have a TTL of 60 seconds and may be dropped if the clocks are too far out of synch. However, NTP is not necessary if the clock will be kept in synch by some other means.
Default: true
4.38. ntp_servers
If configure_ntp is set to true (default), ntp_servers allows users to specify an array of NTP servers used for clock synchronization.
Default: [time.apple.com iburst, pool.ntp.org iburst, clock.redhat.com iburst]
Use iburst after every ntp server definition to speed up the initial synchronization.
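A minimal sketch of overriding the NTP servers in configure_origin.pp (the hostnames are placeholders):
class { 'openshift_origin' :
  ...
  configure_ntp => true,
  ntp_servers   => ['ntp1.example.com iburst', 'ntp2.example.com iburst'],
}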
4.39. msgserver_cluster
Set to true to cluster ActiveMQ for high-availability and scalability of OpenShift message queues.
Default: false
4.40. msgserver_cluster_members
An array of ActiveMQ server hostnames. Required when parameter msgserver_cluster is set to true.
Default: undef
4.41. mcollective_cluster_members
An array of ActiveMQ server hostnames. Required when parameter msgserver_cluster is set to true.
Default: $msgserver_cluster_members
4.42. msgserver_password
Password used by ActiveMQ’s amquser. The amquser is used to authenticate ActiveMQ inter-cluster communication. Only used when msgserver_cluster is true.
Default "changeme"
4.43. msgserver_admin_password
This is the admin password for the ActiveMQ admin console, which is not needed by OpenShift but might be useful in troubleshooting.
Default: scrambled
4.45. mcollective_password
This is the user and password shared between broker and node for communicating over the mcollective topic channels in ActiveMQ. Must be the same on all broker and node hosts.
Default: mcollective/marionette
4.47. mongodb_admin_password
These are the username and password of the administrative user that will be created in the MongoDB datastore. These credentials are not used by this script or by OpenShift, but an administrative user must be added to MongoDB in order for it to enforce authentication.
Default: admin/mongopass
The administrative user will not be created if CONF_NO_DATASTORE_AUTH_FOR_LOCALHOST is enabled.
4.49. mongodb_broker_password
These are the username and password of the normal user that will be created for the broker to connect to the MongoDB datastore. The broker application’s MongoDB plugin is also configured with these values.
Default: openshift/mongopass
4.50. mongodb_name
This is the name of the database in MongoDB in which the broker will store data.
Default: openshift_broker
4.52. mongodb_replicasets
Enable/disable MongoDB replica sets for database high-availability.
Default: false
4.53. mongodb_replica_name
The MongoDB replica set name when $mongodb_replicasets is true.
Default: openshift
4.54. mongodb_replica_primary
Set the host as the primary with true or secondary with false. Must be set on one and only one host within the mongodb_replicasets_members array.
Default: undef
4.55. mongodb_replica_primary_ip_addr
The IP address of the Primary host within the MongoDB replica set.
Default: undef
4.56. mongodb_replicasets_members
An array of [host:port] of replica set hosts. Example: [10.10.10.10:27017, 10.10.10.11:27017, 10.10.10.12:27017]
Default: undef
4.57. mongodb_keyfile
The file containing the $mongodb_key used to authenticate MongoDB replica set members.
Default: /etc/mongodb.keyfile
4.58. mongodb_key
The key used by members of a MongoDB replica set to authenticate one another.
Default: changeme
4.60. openshift_password1
This user and password are entered in the /etc/openshift/htpasswd file as a demo/test user. You will likely want to remove it after installation (or just use a different auth method).
Default: demo/changeme
4.62. conf_broker_auth_private_key
Salt and private keys used when generating secure authentication tokens for Application to Broker communication. Requests like scale up/down and jenkins builds use these authentication tokens. This value must be the same on all broker nodes.
Default: self-signed keys are generated. This will not work with a multi-broker setup.
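For a multi-broker setup you will need to generate a keypair once and supply the same values to every broker. A minimal sketch using openssl (the file names are illustrative assumptions):
$ openssl genrsa -out server_priv.pem 2048
$ openssl rsa -in server_priv.pem -pubout -out server_pub.pem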
4.63. conf_console_product_logo
Relative path to product logo URL
Default:
- if ose_version == undef: /assets/logo-origin.svg
- if ose_version != undef: /assets/logo-enterprise-horizontal.svg
4.64. conf_console_product_title
OpenShift Instance Name
Default:
- if ose_version == undef: OpenShift Origin
- if ose_version != undef: OpenShift Enterprise
4.65. conf_broker_multi_haproxy_per_node
This setting is applied on a per-scalable-application basis. When set to true, OpenShift will allow multiple instances of the HAProxy gear for a given scalable app to be established on the same node. Otherwise, on a per-scalable-application basis, a maximum of one HAProxy gear can be created for every node in the deployment (this is the default behavior, which protects scalable apps from single points of failure at the Node level).
Default: false
4.67. conf_console_session_secret
Session secrets used to encode cookies used by console and broker. This value must be the same on all broker nodes.
Default: undef
4.68. conf_valid_gear_sizes
List of all gear sizes that will be used in this OpenShift installation.
Default: [small]
4.70. conf_default_gear_capabilities
List of the gear sizes that newly created users will be allowed to create gears with.
Default: [small]
4.73. broker_dns_plugin
DNS plugin used by the broker to register application DNS entries. Options:
- nsupdate - nsupdate-based plugin. Supports TSIG and GSS-TSIG based authentication. Uses bind_key for TSIG and bind_krb_keytab, bind_krb_principal for GSS-TSIG auth.
- avahi - sets up mDNS-based DNS resolution. Works only for all-in-one installations.
- route53 - uses AWS Route53 for dynamic DNS service. Requires an AWS key ID and secret and a delegated zone ID.
Default: nsupdate
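As an illustrative sketch of the route53 option (the credential values and zone ID are placeholders), combining this parameter with the AWS parameters described above:
class { 'openshift_origin' :
  ...
  broker_dns_plugin => 'route53',
  aws_access_key_id => '<AWS access key ID>',
  aws_secret_key    => '<AWS secret access key>',
  aws_zone_id       => '<Route53 hosted zone ID>',
}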
4.74. broker_auth_plugin
Authentication setup for users of the OpenShift service. Options:
- mongo - Stores username and password in MongoDB.
- kerberos - Kerberos-based authentication. Uses the broker_krb_service_name, broker_krb_auth_realms, and broker_krb_keytab values.
- htpasswd - Stores username/password in an htpasswd file.
- ldap - LDAP-based authentication. Uses broker_ldap_uri.
Default: htpasswd
4.77. broker_krb_keytab
The Krb5KeyTab value of mod_auth_kerb is not configurable; the keytab is expected at /var/www/openshift/broker/httpd/conf.d/http.keytab.
4.78. broker_ldap_uri
URI to the LDAP server (e.g. ldap://ldap.example.com:389/ou=People,dc=my-domain,dc=com?uid?sub?(objectClass=*)). Set broker_auth_plugin to ldap to enable this feature.
4.79. broker_ldap_bind_dn
LDAP DN (distinguished name) of the user to bind to the directory with (e.g. cn=administrator,cn=Users,dc=domain,dc=com). The default is an anonymous bind.
4.80. broker_ldap_bind_password
Password of bind user set in broker_ldap_bind_dn. Default is anonymous bind with a blank password.
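A minimal sketch of enabling LDAP authentication (the URI and bind credentials are placeholders):
class { 'openshift_origin' :
  ...
  broker_auth_plugin        => 'ldap',
  broker_ldap_uri           => 'ldap://ldap.example.com:389/ou=People,dc=my-domain,dc=com?uid?sub?(objectClass=*)',
  broker_ldap_bind_dn       => 'cn=administrator,cn=Users,dc=my-domain,dc=com',
  broker_ldap_bind_password => '<bind password>',
}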
4.81. node_shmmax
kernel.shmmax sysctl setting for /etc/sysctl.conf
This setting should work for most deployments, but if you need to tune it higher, the general recommendations are as follows:
shmmax = shmall * PAGE_SIZE, where:
- PAGE_SIZE = getconf PAGE_SIZE
- shmall = cat /proc/sys/kernel/shmall
shmmax is not recommended to be a value higher than 80% of total available RAM on the system (expressed in BYTES).
Default: kernel.shmmax = 68719476736
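A minimal shell sketch of computing that recommendation on the target host:
$ PAGE_SIZE=$(getconf PAGE_SIZE)
$ SHMALL=$(cat /proc/sys/kernel/shmall)
$ echo $((SHMALL * PAGE_SIZE))   # candidate kernel.shmmax value, in bytes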
4.82. node_shmall
kernel.shmall sysctl setting for /etc/sysctl.conf. The kernel default is 2097152 (measured in pages).
This parameter sets the total amount of shared memory pages that can be used system wide. Hence, SHMALL should always be at least ceil(shmmax/PAGE_SIZE).
Default: kernel.shmall = 4294967296
4.83. node_container_plugin
Specify the container type to use on the node.
- selinux - This is the default OpenShift Origin container type. At this time there are no other supported plugins.
Default: selinux
4.84. node_frontend_plugins
Specify one or more plugins to use to register HTTP and web-socket connections for applications. Options:
- apache-mod-rewrite - Mod-Rewrite based plugin for HTTP and HTTPS requests. Well suited for installations with a lot of create/delete/scale actions. Deprecated in OSE 2.2.
- apache-vhost - VHost based plugin for HTTP and HTTPS. Suited for installations with less app create/delete activity. Easier to customize. If apache-mod-rewrite is also selected, apache-vhost will be ignored.
- nodejs-websocket - Web-socket proxy listening on ports 8000/8444.
- haproxy-sni-proxy - TLS proxy using SNI routing on ports 2303 through 2308. Requires /usr/sbin/haproxy15 (haproxy-1.5-dev19 or later).
Default: [apache-vhost,nodejs-websocket]
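For instance, a minimal sketch of also enabling the SNI TLS proxy (assuming the haproxy15 requirement above is met):
class { 'openshift_origin' :
  ...
  node_frontend_plugins => ['apache-vhost','nodejs-websocket','haproxy-sni-proxy'],
}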
4.85. node_unmanaged_users
List of user names who have UIDs in the range of OpenShift gears but must be excluded from OpenShift gear setups.
Default: []
4.86. conf_node_external_eth_dev
External facing network device. Used for routing and traffic control setup.
Default: eth0
4.88. conf_node_private_key
Public and private keys used for gears on the default domain. Both values must be defined or default self signed keys will be generated.
Default: Self signed keys are generated.
4.91. conf_node_watchman_gearretries
Number of restarts to attempt before waiting RETRY_PERIOD
Default: 3
4.92. conf_node_watchman_retrydelay
Number of seconds to wait before accepting another gear restart
Default: 300
4.93. conf_node_watchman_retryperiod
Number of seconds to wait before resetting retries
Default: 28800
4.94. conf_node_watchman_statechangedelay
Number of seconds a gear must remain inconsistent with its state before Watchman attempts to reset the state.
Default: 900
4.95. conf_node_watchman_statecheckperiod
Wait at least this number of seconds since last check before checking gear state on the Node. Use this to reduce Watchman’s GearStatePlugin’s impact on the system.
Default: 0
4.96. conf_node_custom_motd
Define a custom MOTD to be displayed to users who connect to their gears directly. If undef, uses the default MOTD included with the node package.
Default: undef
4.98. install_login_shell
Install a Getty shell which displays DNS, IP and login information. Used for all-in-one VM installation.
4.99. register_host_with_nameserver
Set up DNS entries for this host in a locally installed BIND DNS instance.
Default: false
4.100. dns_infrastructure_zone
The name of a zone to create which will contain OpenShift infrastructure. If this is unset then no infrastructure zone or other artifacts will be created.
Default: ""
4.101. dns_infrastructure_key
A dnssec symmetric key which will grant update access to the infrastructure zone resource records.
This is ignored unless dns_infrastructure_zone is set.
Default: ""
4.102. dns_infrastructure_key_algorithm
When using a BIND key, use this algorithm for the infrastructure BIND key.
This is ignored unless dns_infrastructure_zone is set.
Default: HMAC-MD5
4.103. dns_infrastructure_names
An array of hashes containing hostname and IP Address pairs to populate the infrastructure zone.
This value is ignored unless dns_infrastructure_zone is set.
Hostnames can be simple names or fully qualified domain name (FQDN).
Simple names will be placed in the dns_infrastructure_zone. FQDNs that match the dns_infrastructure_zone will be placed in that zone. Hostnames anchored with a dot (.) will be added verbatim.
Default: []
$dns_infrastructure_names = [
  {hostname => "broker1",  ipaddr => "10.0.0.1"},
  {hostname => "data1",    ipaddr => "10.0.0.2"},
  {hostname => "message1", ipaddr => "10.0.0.3"},
  {hostname => "node1",    ipaddr => "10.0.0.11"},
  {hostname => "node2",    ipaddr => "10.0.0.12"},
  {hostname => "node3",    ipaddr => "10.0.0.13"},
]
4.105. install_cartridges
List of cartridges to be installed on the node. Options:
- 10gen-mms-agent (not available in OpenShift Enterprise)
- cron
- diy
- haproxy
- mongodb
- nodejs
- perl
- php
- phpmyadmin (not available in OpenShift Enterprise)
- postgresql
- python
- ruby
- jenkins
- jenkins-client
- mariadb (for Fedora deployments)
- mysql (for CentOS / RHEL deployments)
- jbosseap (requires the OpenShift Enterprise JBoss EAP add-on)
- jbossas (not available in OpenShift Enterprise)
- jbossews
Default: [10gen-mms-agent,cron,diy,haproxy,mongodb,nodejs,perl,php,phpmyadmin,postgresql,python,ruby,jenkins,jenkins-client,mysql]
Default in OpenShift Enterprise: [cron,diy,haproxy,mongodb,nodejs,perl,php,postgresql,python,ruby,jenkins,jenkins-client,jbossews,mysql]
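For instance, a minimal sketch of limiting a node to a smaller cartridge set (the selection is purely illustrative):
class { 'openshift_origin' :
  ...
  install_cartridges => ['cron','diy','haproxy','php','mysql'],
}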
4.105.1. install_cartridges_recommended_deps
List of cartridge recommended dependencies to be installed on the node. Options:
- all (not available in OpenShift Enterprise)
- diy (not available in OpenShift Enterprise)
- jbossas (not available in OpenShift Enterprise)
- jbosseap (requires the OpenShift Enterprise JBoss EAP add-on)
- jbossews
- nodejs
- perl
- php
- python
- ruby
Default: [all]
Default in OpenShift Enterprise: [jbossews,nodejs,perl,php,python,ruby]
4.105.2. install_cartridges_optional_deps
List of cartridge optional dependencies to be installed on the node. Options:
- all (not available in OpenShift Enterprise)
- diy (not available in OpenShift Enterprise)
- jbossas (not available in OpenShift Enterprise)
- jbosseap (requires the OpenShift Enterprise JBoss EAP add-on)
- jbossews
- nodejs
- perl
- php
- python
- ruby
Default: undef
4.106. update_network_conf_files
Indicate whether or not this module will configure resolv.conf and network for you.
Default: true