Cluster Setup

This process begins with an installed micro cloud, which must then be cloned across several nodes. You connect to each node in turn and assign the roles it will serve, distributing the processing load across the cluster.

Roles

A Stackato node can take on one or more roles (e.g. controller, router, DEA, stager, or one of the data services), as described in the sections below.

You can see a list of the available roles at the command line by running the kato info command.

Assigning Roles with Kato

The command line tool used to configure Stackato servers is called kato. Cluster nodes are set up primarily using the kato node setup, attach, and remove sub-commands, together with kato role add.

The kato info command will show:

  • assigned roles: roles currently configured to run on the node
  • available roles: roles which can be added with kato role add
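
For example, to inspect a node and then enable one of its available roles in place (ROLE is a placeholder for any role name reported by kato info):

$ kato info
$ kato role add ROLE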

Preparing the Core Node

In a Stackato cluster, one node is dedicated as the Core node. This node runs the controller, primary, base, and router roles, but can also include additional roles.

Choose one VM running the default Stackato micro cloud as the Core node. Set up the Core node first. Then you can proceed to add other node types.

Note

A static IP address is necessary to provide a consistent network interface for other nodes to connect to. A real DNS record is also recommended for production use. You must set these values and reboot the VM before proceeding.

CORE_IP

First, take note of the IP address of the Core node. It will be required when configuring additional nodes in the following steps so that they can attach to the Core node. Make sure the eth0 interface is reporting the correct address; it may not be if you have set a static IP but have not yet rebooted or restarted networking. To check the IP address, run:

$ ifconfig eth0

To set the static IP address, run:

$ kato op static_ip

Note

If the IP address of the Core node changes, the kato node migrate command must be run on all nodes in the cluster (starting with the Core node) to set the new CORE_IP.
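
A minimal sketch of that procedure; it assumes kato node migrate picks up the new CORE_IP from the node's configuration and needs no arguments, so check the command's usage on your release before running it:

# Run on the Core node first, then repeat on every other node in the cluster:
$ kato node migrate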

HOSTNAME

Next, set the fully qualified hostname of the Core node. This is required so that Stackato's internal configuration matches the DNS record created for this system.

To set the hostname, run:

$ kato node rename hostname.example.com

Core Node

On the Core node, execute the following command:

$ kato node setup core api.hostname.example.com

This sets up the Core node with just the implicit controller, primary, and router roles.

If you intend to set up the rest of the cluster immediately, carry on by enabling the other roles you ultimately intend to run on the Core node. For example, to set up a Core node with the controller, primary, router, and stager roles:

$ kato node setup core api.hostname.example.com
$ kato role add stager

Then proceed to configure the other VMs by attaching them to the Core node and assigning their particular roles.

Router Nodes

In smaller clusters, the Router role can be run on the Core node. To run the Router on its own separate node instead, run the following on that node:

$ kato node attach -e router CORE_IP

Note that the public DNS entry for the Stackato cluster's API endpoint must resolve to the Router if it is separate from the Core Node. For clusters requiring multiple Routers, see the Load Balancer and Multiple Routers section below.

Data Services Nodes

Data services can share a single node (small clusters) or run on separate nodes (recommended for production clusters). To set up all available data services on a single node and attach it to the Core node, run the following command on the data services node:

$ kato node attach -e data-services CORE_IP
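
For production clusters where each data service runs on its own node, the same attach pattern can be repeated per role. A sketch, assuming the role identifiers follow the lowercase service names (run kato info to confirm the exact role names available on your release):

$ kato node attach -e mysql CORE_IP       # on the MySQL node
$ kato node attach -e postgresql CORE_IP  # on the PostgreSQL node
$ kato node attach -e redis CORE_IP       # on the Redis node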

Note

The Harbor port service needs a publicly routable IP and exposed port range if you want to provide externally accessible TCP and UDP ports for user applications. See the Harbor Requirements & Setup documentation for details.

Stager Nodes

In smaller clusters, a single Stager role running on the Core node (or a DEA node) should be sufficient.

On systems where there are likely to be several applications staged concurrently, multiple stagers can be configured on separate nodes (running just the Stager role or sharing a node with the DEA role). Stackato will distribute staging tasks between the Stagers.

To turn a generic Stackato VM into a Stager and connect it to the Core node:

$ kato node attach -e stager CORE_IP
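
To share a node between the DEA and Stager roles as mentioned above, one approach (composing commands shown elsewhere in this guide) is to attach the node as a DEA and then add the Stager role on it:

$ kato node attach -e dea CORE_IP
$ kato role add stager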

DEA Nodes

An application "worker" node is called a Droplet Execution Agent (DEA). Once the Core node is running, you can begin to add DEA nodes with the kato node attach command. To turn a generic Stackato VM into a DEA and connect it to the Core node:

$ kato node attach -e dea CORE_IP

Continue this process until you have added all the desired DEA nodes.
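
If you have many DEA VMs to add, the attach step can be scripted over SSH. A sketch only, assuming key-based access as the default stackato user and placeholder node IPs:

$ for ip in 10.0.0.21 10.0.0.22 10.0.0.23; do ssh stackato@$ip "kato node attach -e dea CORE_IP"; done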

Verification

To verify that all the cluster nodes are configured as expected, run the following command on the Core node:

$ kato status --all

Role Configuration using the Management Console

Once cluster nodes are connected to the Core node, roles can be enabled or disabled using the Cluster Admin interface in the Management Console.

One-Node Cluster Example

This is a trivial case which you would not deploy in production, but it helps to illustrate the role architecture in Stackato, and can be useful diagnostically.

Technically, it is a cluster, even though it consists of a single node. A node in this configuration will function much like a micro cloud and yet integrate as a cluster with your virtualization environment.

All that is required here is to enable all roles except for mdns (not used in a clustered or cloud-hosted environment):

$ kato node setup core api.hostname.example.com
$ kato role add --all-but mdns

Three-Node Cluster Example

This is exactly the configuration detailed above, being the smallest practical cluster deployment. To review:

  • 1 Core node consisting of primary, controller, router, and stager (and supporting processes)
  • 1 data-services node running the database, messaging and filesystem services
  • 1 DEA (Droplet Execution Agent) node
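
Pulling the commands above together, such a cluster could be configured as follows (the hostname and CORE_IP are placeholders):

# On the Core node:
$ kato node setup core api.hostname.example.com
$ kato role add stager

# On the data-services node:
$ kato node attach -e data-services CORE_IP

# On the DEA node:
$ kato node attach -e dea CORE_IP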

Five-Node Cluster Example

A typical small Stackato cluster deployment might look like this:

  • 1 Core node consisting of primary, controller, router, and stager (and supporting processes)
  • 1 data-services node running the database, messaging and filesystem services
  • 3 DEA (Droplet Execution Agent) nodes

Roles Requiring Persistent or Shared Storage

Though all roles can run using the VM's default filesystem, in production clusters some roles should always be backed by a persistent filesystem (block storage/EBS volumes) to provide scalable storage space and easy snapshotting. Nodes with the following roles should have their /var/vcap/services directory on persistent storage:

  • Data Services: MySQL, PostgreSQL, MongoDB, Redis
  • Filesystem Service
  • Memcache
  • RabbitMQ
  • Harbor

Note

Though Memcache and Redis are in-memory data stores, system service info data is stored on disk, so backing them with a persistent filesystem is recommended.

In clusters with multiple Cloud Controllers, the nodes must share a common /var/vcap/shared mount point as described below in order to work together properly.

Optionally, DEA and Stager nodes can be backed with a persistent filesystem. If the DEAs and Stagers share the same /var/vcap/shared directory, the DEAs will be able to use 'local' copies of the droplets rather than downloading them from the Stager via HTTP, which should speed up application deployment.

See the Persistent Storage documentation for instructions on relocating service data, application droplets, and containers.
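
For orientation only (the Persistent Storage documentation describes the supported procedure), relocating /var/vcap/services onto a block volume follows the same mount-and-symlink pattern used for the controllers below. This sketch assumes an already-formatted volume attached as /dev/vdb, which is a placeholder device name:

# Stop the affected Stackato roles on this node first (e.g. kato stop <role>).
$ sudo mkdir -p /mnt/services
$ sudo mount /dev/vdb /mnt/services
$ sudo mv /var/vcap/services /var/vcap/services.old
$ sudo cp -a /var/vcap/services.old/. /mnt/services/
$ sudo ln -s /mnt/services /var/vcap/services
# Restart the roles (kato start <role>) once the data has been copied over.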

Removing Nodes from Cluster

Use the kato node remove command to remove a node from the cluster. Run the following command on the Core node:

$ kato node remove NODE_IP

Port Configuration

The Stackato micro cloud runs with the following ports exposed:

Port   Type  Service
22     tcp   ssh
25     tcp   smtp
80     tcp   http
111    tcp   portmapper
111    udp   portmapper
443    tcp   https
3306   tcp   mysql
5432   tcp   postgresql
9001   tcp   supervisord
12345  tcp   netbus

On a production cluster, or a micro cloud running on a cloud hosting provider, only ports 22 (SSH), 80 (HTTP), and 443 (HTTPS) need to be exposed externally (e.g. on the Router / Core node).

Within the cluster (i.e. behind the firewall), it is advisable to allow communication between the cluster nodes on all ports. This can be done safely by using the security group / security policy tools provided by your hypervisor or cloud provider.

If you wish to restrict ports between some nodes (recommended only if you do not have the option to use security groups), the following summary describes which ports are used by which services:

Port Range       Type  Config on   Inbound from  Required by
0 - 65535 (all)  tcp   dea         controller    stackato ssh
4222             tcp   controller  all nodes     NATS
5454             tcp   controller  all nodes     redis
7000 - 7999      tcp   all nodes   all nodes     kato log tail
8046             tcp   controller  all nodes     doozerd
9001             tcp   all nodes   controller    supervisord
9022             tcp   controller  dea           droplets
9022             tcp   dea         controller    droplets

If you subscribe to the principle of "defense in depth", each node can also be internally firewalled using iptables to apply the above rules.
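
For example, a DEA node could be locked down roughly as follows. This is a sketch only, assuming CORE_IP is replaced with the Core/controller address and that rules are persisted with whatever mechanism your distribution provides:

$ sudo iptables -A INPUT -i lo -j ACCEPT
$ sudo iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
$ sudo iptables -A INPUT -p tcp -s CORE_IP -j ACCEPT    # Cloud Controller may reach all ports (stackato ssh)
$ sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT    # SSH (restrict the source subnet if desired)
$ sudo iptables -A INPUT -j DROP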

Comments:

  • Port 22 can be restricted if necessary to the subnet you expect to connect from.
  • Ports 80 and 443 need only be open to the world on router nodes.
  • Port 4222 should be open for NATS, and port 9022 should also be open to allow transfer of droplets to and from the Cloud Controller.
  • Port 7845 is only required if you plan to stream logs from all nodes in a cluster using the kato log tail command.
  • The DEA nodes are a special case where stackato ssh functionality requires the DEA to allow the Cloud Controller on all ports.
  • If you are providing the stackato ssh feature to your users, you might find it useful to define a distinct security group for the public-facing Cloud Controller node that is the same as a generic Stackato group, but has the additional policy of allowing SSH (Port 22) from hosts external to the cluster.

Note

Harbor (Port Service) Node Configuration

The optional Harbor TCP/UDP port service must be set up on a node with a public network interface if you wish to enable port forwarding for user applications. The security group or firewall settings for this node should make the configured port range accessible publicly. See Harbor Setup for full configuration instructions.

Multiple Controllers

A Stackato cluster can have multiple controller nodes running on separate VMs to improve redundancy. The key element in designing this redundancy is to have all controller nodes share a /var/vcap/shared directory stored on a high-availability filesystem server. For example:

  • Create a shared filesystem on a Network Attached Storage device. [1]

  • Stop the controller process on the Core node before proceeding further:

    $ kato stop controller
  • On the Core node and each additional controller node:

    • Create a mount point:

      $ sudo mkdir /mnt/controller
    • Mount the shared filesystem on the mount point (one possible NFS-based approach is sketched after this list). [1]

    • Set aside the original /var/vcap/shared:

      $ mv /var/vcap/shared /var/vcap/shared.old
    • Create a symlink from /var/vcap/shared to the mount point.

      $ ln -s /mnt/controller /var/vcap/shared
  • On the Core node, start the controller process:

    $ kato start controller
  • Run the following command on the additional Controller nodes to enable only the controller process:

    $ kato node attach -e controller CORE_IP
    
[1] The type of filesystem, storage server, and network mount method are left to the discretion of the Stackato administrator.
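
For illustration of the mount step above: if the shared filesystem were an NFS export, the mount on each controller node might look like the following. The server name and export path are placeholders, and per the footnote any high-availability filesystem and mount method will do:

$ sudo apt-get install nfs-common
$ sudo mount -t nfs nas.example.com:/export/stackato-controller /mnt/controller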

Load Balancer and Multiple Routers

For large scale deployments requiring multiple Router nodes, a Load Balancer must be configured to distribute connections between the Routers. The Stackato VM can be configured to take on this role.

Note

A node configured as a Load Balancer cannot have any other roles enabled.

The kato node setup load_balancer command retrieves the IP addresses of all Routers in the cluster and configures an nginx process to distribute load (round-robin) among the pool of Routers and to handle SSL termination.

For example, to set up a cluster with a Load Balancer and multiple Routers:

Rename the Load Balancer

The Load Balancer is the primary point of entry to the cluster. It must have a public-facing IP address and take on the primary hostname for the system as configured in DNS. Run the following on the Load Balancer node:

$ kato node rename hostname.example.com

Set up the Core Node

The Core node will need to temporarily take on the primary hostname of the Stackato system (i.e. the same name as the Load Balancer above). Run the following on the Core node:

$ kato node rename hostname.example.com

If it is not already configured as the Core node, do so now:

$ kato node setup core api.hostname.example.com

The kato node rename command above is used to set internal Stackato parameters, but all hosts on a network should ultimately have unique hostnames. After setup, rename the Core node manually by editing /etc/hostname and restarting.
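
For example, to give the Core node a distinct internal hostname after setup (core-internal.example.com is a placeholder; adjust /etc/hosts to match if you maintain entries there):

$ echo "core-internal.example.com" | sudo tee /etc/hostname
$ sudo reboot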

Set up Supplemental Routers

As with the Core node, you will need to run kato node rename on each router with the same primary hostname. Run the following on each Router:

$ kato node rename hostname.example.com

Then enable the 'router' role and attach the node to the cluster:

$ kato node attach -e router <MBUS_IP>

As above, rename each host manually after configuration to give it a unique hostname. MBUS_IP is the IP address of the Core node's network interface (usually eth0).

Configure the Load Balancer

Attach the Load Balancer to the Core node and enable the 'router' role. On the Load Balancer node, run:

$ kato node attach -e router <MBUS_IP>

Then set up the node as a Load Balancer:

$ kato node setup load_balancer

This command will fetch the IP addresses for all configured routers in the cluster. It will prompt you to remove the IP address of the local Load Balancer from the pool of Routers (recommended).

Note

If you are using the AOK authentication service, see also AOK with a Load Balancer.

SSL Certificates

Since the Load Balancer terminates SSL connections, SSL certificates must be set up and maintained on this node. See the Using your own SSL certificate and CA Certificate Chaining sections for instructions.