This process begins with an installed micro cloud, which must then be cloned across several nodes. You connect to each node in turn and tell it which roles it is to serve, thereby distributing the processing load for maximum performance.
A Stackato node can take on one or more roles.
The command line tool used to configure Stackato servers is called kato. Setup of cluster nodes is done primarily using the kato node setup, add, attach, and remove sub-commands.
To see the list of roles available on a node, run the kato info command.
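For example, running it on any node lists the roles known to that node (the exact output varies between Stackato releases):
$ kato info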
In a Stackato cluster, one node is dedicated as the Core node. This node always has the controller, primary, base, and router roles, but can also take on additional roles.
Choose one VM running the default Stackato micro cloud as the Core node. Set up the Core node first. Then you can proceed to add other node types.
Note
A static IP address is necessary to provide a consistent network interface for other nodes to connect to. A real DNS record is also recommended for production use. You must set these values and reboot the VM before proceeding.
First, take note of the IP address of the Core node. It will be required when configuring additional nodes in the following steps, so that they can attach to the Core node. Make sure that the eth0 interface is reporting the correct address; it may not be if you have set a static IP but have not yet rebooted or restarted networking. To check the IP address, run:
$ ifconfig eth0
To set the static IP address, run:
$ kato op static_ip
Note
If the IP address of the Core node changes, the kato node migrate command must be run on all nodes in the cluster (starting with the Core node) to set the new CORE_IP.
Next, set the fully qualified hostname of the Core node. This is required so that Stackato's internal configuration matches the DNS record created for this system.
To set the hostname, run:
$ kato node rename hostname.example.com
On the Core node, execute the following command:
$ kato node setup core api.hostname.example.com
This sets up the Core node with just the implicit controller, primary, and router roles.
If you intend to set up the rest of the cluster immediately, continue by enabling the roles you ultimately intend to run on the Core node. For example, to set up a Core node with the controller, primary, router, and stager roles:
$ kato node setup core api.hostname.example.com
$ kato role add stager
Then proceed to configure the other VMs by attaching them to the Core node and assigning their particular roles.
In smaller clusters, the Router role can be run on the Core node. To run it on a separate node instead:
$ kato node attach -e router CORE_IP
Note that the public DNS entry for the Stackato cluster's API endpoint must resolve to the Router if it is separate from the Core node. For clusters requiring multiple Routers, see the Load Balancer and Multiple Routers section below.
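As a quick check of that DNS setup, you can query the API hostname from any workstation (this assumes the dig utility is available; api.hostname.example.com is the placeholder endpoint used throughout this guide) and confirm it returns the Router's public IP:
$ dig +short api.hostname.example.com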
Data services can share a single node (small clusters) or run on separate nodes (recommended for production clusters). To set up all available data services on a single node and attach it to the Core node, run the following command on the data services node:
$ kato node attach -e data-services CORE_IP
Note
The Harbor port service needs a publicly routable IP and exposed port range if you want to provide externally accessible TCP and UDP ports for user applications. See the Harbor Requirements & Setup documentation for details.
In smaller clusters, a single Stager role running on the Core node (or a DEA node) should be sufficient.
On systems where there are likely to be several applications staged concurrently, multiple stagers can be configured on separate nodes (running just the Stager role or sharing a node with the DEA role). Stackato will distribute staging tasks between the Stagers.
To turn a generic Stackato VM into a Stager and connect it to the Core node:
$ kato node attach -e stager CORE_IP
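If you prefer a node that shares the Stager and DEA roles, one possible approach, sketched here using only commands already shown in this section, is to attach the node with one role and then add the other:
$ kato node attach -e stager CORE_IP
$ kato role add dea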
An application "worker" node is called a Droplet Execution Agent (DEA). Once the controller node is running, you can begin to add some of these nodes with the kato node attach command. To turn a generic Stackato VM into a DEA and connect it to the Core node:
$ kato node attach -e dea CORE_IP
Continue this process until you have added all the desired DEA nodes.
To verify that all the cluster nodes are configured as expected, run the following command on the Core node:
$ kato status --all
Once cluster nodes are connected to the Core node, roles can be enabled or disabled using the Cluster Admin interface in the Management Console.
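Roles can also be toggled from the command line on the node itself. The kato role add sub-command appears earlier in this section; the matching kato role remove sub-command is assumed here for disabling a role:
$ kato role add stager
$ kato role remove stager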
This is a trivial case which you would not deploy in production, but it helps to illustrate the role architecture in Stackato, and can be useful diagnostically.
Technically, it is a cluster, even though it consists of a single node. A node in this configuration will function much like a micro cloud and yet integrate as a cluster with your virtualization environment.
All that is required here is to enable all roles except for mdns (not used in a clustered or cloud-hosted environment):
$ kato node setup core api.hostname.example.com
$ kato role add --all-but mdns
This is exactly the configuration detailed above, being the smallest practical cluster deployment. To review:
A typical small Stackato cluster deployment might look like this:
Use the kato node remove command to remove a node from the cluster. Run the following command on the Core node:
$ kato node remove NODE_IP
The Stackato micro cloud runs with the following ports exposed:
Port    Type    Service
22      tcp     ssh
25      tcp     smtp
80      tcp     http
111     tcp     portmapper
111     udp     portmapper
443     tcp     https
3306    tcp     mysql
5432    tcp     postgresql
9001    tcp     supervisord
12345   tcp     netbus
On a production cluster, or a micro cloud running on a cloud hosting provider, only ports 22 (SSH), 80 (HTTP), and 443 (HTTPS) need to be exposed externally (e.g. for the Router / Core node).
Within the cluster (i.e. behind the firewall), it is advisable to allow communication between the cluster nodes on all ports. This can be done safely by using the security group / security policy tools provided by your hypervisor or cloud provider.
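For example, on EC2 one way to do this (a sketch only; the group ID below is a placeholder, and other hypervisors provide equivalent tools) is to allow all TCP and UDP traffic between members of the cluster's own security group:
$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 0-65535 --source-group sg-0123456789abcdef0
$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
    --protocol udp --port 0-65535 --source-group sg-0123456789abcdef0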
If you wish to restrict ports between some nodes (recommended only if you do not have the option to use security groups), the following summary describes which ports are used by which services:
Port Range        Type    Config on     Inbound       Required by
0 - 65535 (all)   tcp     dea           controller    stackato ssh
4222              tcp     controller    all nodes     NATS
5454              tcp     controller    all nodes     redis
7000 - 7999       tcp     all nodes     all nodes     kato log tail
8046              tcp     controller    all nodes     doozerd
9001              tcp     all nodes     controller    supervisord
9022              tcp     controller    dea           droplets
9022              tcp     dea           controller    droplets
If you subscribe to the principle of "defense in depth", each node can also be internally firewalled using iptables to apply the above rules.
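As an illustrative sketch only (10.0.0.0/24 is a placeholder for your internal network), the following rules on a controller node would accept NATS and droplet traffic from cluster nodes while dropping it from anywhere else:
# Accept NATS (4222/tcp) and droplet transfers (9022/tcp) from the cluster subnet
$ sudo iptables -A INPUT -p tcp -s 10.0.0.0/24 --dport 4222 -j ACCEPT
$ sudo iptables -A INPUT -p tcp -s 10.0.0.0/24 --dport 9022 -j ACCEPT
# Drop those ports for all other sources
$ sudo iptables -A INPUT -p tcp --dport 4222 -j DROP
$ sudo iptables -A INPUT -p tcp --dport 9022 -j DROP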
Note
Harbor (Port Service) Node Configuration
The optional Harbor TCP/UDP port service must be set up on a node with a public network interface if you wish to enable port forwarding for user applications. The security group or firewall settings for this node should make the configured port range accessible publicly. See Harbor Setup for full configuration instructions.
A Stackato cluster can have multiple controller nodes running on separate VMs to improve redundancy. The key element in designing this redundancy is to have all controller nodes share a /var/vcap/shared directory stored on a high-availability filesystem server. For example:
Create a shared filesystem on a Network Attached Storage device. [1]
Stop the controller process on the Core node before proceeding further:
$ kato stop controller
On the Core node and each additional controller node:
Create a mount point:
$ sudo mkdir /mnt/controller
Mount the shared filesystem on the mount point. [1]
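For example, if the shared filesystem is exported over NFS (the server name and export path below are placeholders; see the footnote on filesystem choice):
$ sudo mount -t nfs nas.example.com:/export/stackato-controller /mnt/controller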
Set aside the original /var/vcap/shared:
$ mv /var/vcap/shared /var/vcap/shared.old
Create a symlink from /var/vcap/shared to the mount point.
$ ln -s /mnt/controller /var/vcap/shared
On the Core node, start the controller process:
$ kato start controller
Run the following command on the additional Controller nodes to enable only the controller process:
$ kato node attach -e controller CORE_IP
[1] The type of filesystem, storage server, and network mount method are left to the discretion of the Stackato administrator.
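Once the additional Controller nodes are attached, the status command shown earlier can be run on the Core node to confirm that a controller process is reported for each of them:
$ kato status --all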
For large scale deployments requiring multiple Router nodes, a Load Balancer must be configured to distribute connections between the Routers. The Stackato VM can be configured to take on this role.
Note
A node configured as a Load Balancer cannot have any other roles enabled.
The kato node setup load_balancer command retrieves the IP addresses of every Router in the cluster and configures an nginx process to distribute load (via round robin) among the pool of Routers and to handle SSL termination.
For example, to set up a cluster with a Load Balancer and multiple Routers:
The Load Balancer is the primary point of entry to the cluster. It must have a public-facing IP address and take on the primary hostname for the system as configured in DNS. Run the following on the Load Balancer node:
$ kato node rename hostname.example.com
The Core node will need to temporarily take on the primary hostname of the Stackato system (i.e. the same name as the Load Balancer above). Run the following on the Core node:
$ kato node rename hostname.example.com
If it is not already configured as the Core node, do so now:
$ kato node setup core api.hostname.example.com
The kato node rename command above is being used to set internal Stackato parameters, but all hosts on a network should ultimately have unique hostnames. After setup, rename the Core node manually by editing /etc/hostname and restarting.
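For example, to give the Core node a unique machine hostname after setup (the internal name below is a placeholder; any method of editing /etc/hostname and rebooting will do):
$ echo "core-internal.example.com" | sudo tee /etc/hostname
$ sudo reboot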
As with the Core node, you will need to run kato node rename on each router with the same primary hostname. Run the following on each Router:
$ kato node rename hostname.example.com
Then enable the 'router' role and attach the node to the cluster:
$ kato node attach -e router <MBUS_IP>
As above, rename each host manually after configuration to give it a unique hostname. The MBUS_IP is the IP address of the Core node's network interface (usually eth0).
Attach the Load Balancer to the Core node and enable the 'router' role. On the Load Balancer node, run:
$ kato node attach -e router <MBUS_IP>
Then set up the node as a Load Balancer:
$ kato node setup load_balancer
This command will fetch the IP addresses for all configured routers in the cluster. It will prompt you to remove the IP address of the local Load Balancer from the pool of Routers (recommended).
Note
If you are using the AOK authentication service, see also AOK with a Load Balancer.
Since the Load Balancer terminates SSL connections, SSL certificates must be set up and maintained on this node. See the Using your own SSL certificate and CA Certificate Chaining sections for instructions.
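Once the certificates are installed, one way to confirm which certificate the Load Balancer is actually serving (assuming the openssl client is available; the hostname is the placeholder used throughout this guide) is:
$ openssl s_client -connect api.hostname.example.com:443 < /dev/null 2>/dev/null | openssl x509 -noout -subject -dates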