Major version upgrades of Stackato require a data migration to a new VM or cluster, but patch releases can be applied in place using the kato patch command.
To see a list of updates available from ActiveState, run the following command on any Stackato VM:
$ kato patch status
The command will list the updates available. For example:
2 updates available to be installed.
Known updates for Stackato 2.10.4:
dea-memory-usage-reporting: Fix the reporting of stackato stats usage on the DEA end.
severity: required
roles affected: dea
vsphere-autoscaling-fix: Fix VSphere autoscaling behavior.
severity: required
roles affected: controller, primary
To apply all patches to all relevant cluster nodes:
$ kato patch install
To apply a particular patch, specify it by name:
$ kato patch install dea-memory-usage-reporting
Applying patches will automatically restart all patched roles. To prevent this, use the --no-restart option.
To apply a patch only to the local Stackato VM (not the whole cluster), use the --only-this-node option.
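For example, the options above can be combined (this combination is an assumption; the flags are described individually above). The following would apply a single patch to the local node only, without restarting the affected roles:
$ kato patch install dea-memory-usage-reporting --only-this-node --no-restart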
This section describes backing up Stackato data and importing it into a new Stackato system. The export/import cycle is required for major version upgrades and for migrating a Stackato system to a new VM or cluster.
Before deciding on a backup, upgrade or migration strategy, it's important to understand what data the Stackato system can save, and what may have to be reset, redeployed, or reconfigured. This is especially important when migrating to a new cluster.
Stackato can export and import data from built-in data services running on Stackato nodes, but it has no mechanism to handle data in external databases (unless kato export|import has also been modified to recognize the custom service).
Backing up or moving such databases should be handled separately, and user applications should be reconfigured and/or redeployed to connect properly to the new database host if the database is not implemented as a Stackato data service.
Applications which write database connection details during staging, rather than taking them from environment variables at run time, must be re-staged (e.g. redeployed or updated) to pick up the new service location and credentials. Restarting the application will not automatically force restaging.
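For example, one way to force restaging is to push the application again from its source directory (the application name here is hypothetical):
$ stackato push customertracker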
Droplet Execution Agent (DEA) nodes are not migrated directly from old nodes to new nodes. Instead, the application droplets (zip files containing staged applications) are re-deployed to new DEA nodes from the Controller.
Data export is done with the kato data export command. The command can export controller data, apps, droplets, and service data.
Start by logging into the VM via ssh:
$ ssh [email protected]
A single-node micro cloud VM can be backed up with a single command:
$ kato data export --only-this-node
A clustered setup can be backed up with a single command:
$ kato data export --cluster
Once the export completes, you can use scp or another utility (e.g. sftp, rsync) to move the .tgz file to another system, or save the file directly to a mounted external filesystem by specifying the full path and filename during export (see backup example below).
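For example, to copy the export archive to another host with scp (the remote user, hostname, and destination path are placeholders):
$ scp stackato-export-xxxxxxxxxx.tgz user@backup-host:/path/to/backups/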
Regular backup of controller data, apps, droplets, and service data is recommended for any production system. Implementation of a regular backup routine is left to the discretion of the Stackato administrator, but using cron/crontab is one simple way to automate this. For example, you could create an entry like the following in the root user's crontab on the filesystem node:
0 3 * * * su - stackato /bin/bash -c '/home/stackato/bin/kato data export --cluster /mnt/nas/stackato-backup.tgz'
This runs kato data export --cluster every morning at 3 AM from the root crontab, using the stackato user's login environment (required), and saves a .tgz file to a mounted external filesystem.
Scheduled (non-interactive) backups using the kato data export command will need to be run by root, as some shell operations performed in the export require sudo when run interactively. For clusters, passwordless SSH key authentication between the Core node and all other nodes will also need to be set up. The command should be run on the node hosting the 'filesystem' role, as some shell commands need to be run locally for that service.
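A minimal sketch of setting up passwordless SSH key authentication from the Core node to another cluster node (the target node address is a placeholder, and the stackato account is assumed):
$ ssh-keygen -t rsa
$ ssh-copy-id stackato@<other-node-ip>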
To import a Stackato dump, transfer the file to the target VM.
Log in to the Stackato VM and run kato data import with the relevant options. For example, to import all data into a new cluster:
$ kato data import --cluster stackato-export-xxxxxxxxxx.tgz
The command can also import data remotely from a running Stackato system. For example:
$ kato data import --cluster stackato-host.example.com
New releases of Stackato will often include upgrades to the bundled database engines. This can cause problems for existing applications which reference versioned database names in the VCAP_SERVICES environment variable to connect to provisioned data services.
Note
As of Stackato 2.2, VCAP_SERVICES will no longer include the version numbers in the service name string.
There are two application-level fixes for this issue:
Update references to VCAP_SERVICES in the application code to exclude version numbers. For example:
MySQL: 'mysql-5.x' -> 'mysql'
PostgreSQL: 'postgresql-x.x' -> 'postgresql'
Redis: 'redis-2.x' -> 'redis'
Update the application code to use the DATABASE_URL environment variable. See Using Database Services for general information and the corresponding language-specific documentation.
Changes to the sample applications show this modification; a minimal sketch is given below.
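This hypothetical Perl snippet (not taken from the sample applications) shows connecting via DATABASE_URL rather than a versioned service name in VCAP_SERVICES; it assumes a MySQL-style URL of the form mysql://user:password@host:port/dbname:
use strict;
use warnings;
use DBI;

# DATABASE_URL has the form: mysql://user:password@host:port/dbname
my $url = $ENV{DATABASE_URL}
    or die "DATABASE_URL is not set\n";
my ($user, $password, $host, $port, $dbname) =
    $url =~ m{^\w+://([^:]+):([^\@]+)\@([^:/]+):(\d+)/(.+)$}
    or die "Unexpected DATABASE_URL format\n";
my $dbh = DBI->connect("DBI:mysql:database=$dbname;host=$host;port=$port",
                       $user, $password)
    or die "Unable to connect: $DBI::errstr\n";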
Updating an app can create downtime while the new code is being staged. URL mapping can be used to reduce this downtime by switching between two running versions of an app.
For example, we have the customertracker app:
$ stackato apps
+-----------------+---+---------+-------------------------------------+------------+
| Application | # | Health | URLS | Services |
+-----------------+---+---------+-------------------------------------+------------+
| customertracker | 1 | RUNNING | customertracker.stackato-xxxx.local | customerdb |
+-----------------+---+---------+-------------------------------------+------------+
The first time you do this, map a new URL to the existing app to ensure it continues to run once the main URL has been remapped (for future updates you will already have two):
$ stackato map customertracker customertracker1.stackato-xxxx.local
Push the updated code with a new application name:
$ stackato push customertracker2
...
$ stackato apps
+------------------+---+---------+--------------------------------------+------------+
| Application | # | Health | URLS | Services |
+------------------+---+---------+--------------------------------------+------------+
| customertracker | 1 | RUNNING | customertracker.stackato-xxxx.local | customerdb |
| | | | customertracker1.stackato-xxxx.local | |
| customertracker2 | 1 | RUNNING | customertracker2.stackato-xxxx.local | customerdb |
+------------------+---+---------+--------------------------------------+------------+
Note that the configured service(s) should have the same name(s) as before, so the new app is automatically connected to the existing service(s).
Next, unmap the URL from the current app:
$ stackato unmap customertracker customertracker.stackato-xxxx.local
And immediately map it to the new app:
$ stackato map customertracker2 customertracker.stackato-xxxx.local
$ stackato apps
+------------------+---+---------+--------------------------------------+------------+
| Application | # | Health | URLS | Services |
+------------------+---+---------+--------------------------------------+------------+
| customertracker | 1 | RUNNING | customertracker1.stackato-xxxx.local | customerdb |
| customertracker2 | 1 | RUNNING | customertracker.stackato-xxxx.local | customerdb |
| | | | customertracker2.stackato-xxxx.local | |
+------------------+---+---------+--------------------------------------+------------+
Lastly, delete the old app:
$ stackato delete customertracker
$ stackato apps
+------------------+---+---------+--------------------------------------+------------+
| Application | # | Health | URLS | Services |
+------------------+---+---------+--------------------------------------+------------+
| customertracker2 | 1 | RUNNING | customertracker.stackato-xxxx.local | customerdb |
| | | | customertracker2.stackato-xxxx.local | |
+------------------+---+---------+--------------------------------------+------------+
Though Stackato has an internal mechanism for supervising processes on a server or cluster (Supervisor), it is advisable to add some external monitoring for production systems. Nagios is a free, open source system monitoring tool that can provide this external monitoring.
Below is an example Nagios config for a small cluster running on Amazon EC2 which monitors system load, free disk space and SSH connectivity.
define host {
use important-host
host_name ec2-xxx.us-west-2.compute.amazonaws.com
}
define host {
use important-host
host_name ec2-xxx.us-west-2.compute.amazonaws.com
}
define host {
use important-host
host_name ec2-xxx.us-west-2.compute.amazonaws.com
}
define host {
name important-host ; The name of this host template
notifications_enabled 1 ; Host notifications are enabled
event_handler_enabled 1 ; Host event handler is enabled
flap_detection_enabled 1 ; Flap detection is enabled
failure_prediction_enabled 1 ; Failure prediction is enabled
process_perf_data 1 ; Process performance data
retain_status_information 1 ; Retain status information across program restarts
retain_nonstatus_information 1 ; Retain non-status information across program restarts
register 0 ; DONT REGISTER THIS DEFINITION - ITS NOT A REAL HOST, JUST A TEMPLATE!
check_command check-host-alive
max_check_attempts 10
notification_interval 120
notification_period 24x7
notification_options d,r
contact_groups admins
}
define service {
use generic-service
host_name ec2-xxx.us-west-2.compute.amazonaws.com, ec2-xxx.us-west-2.compute.amazonaws.com, ec2-xxx.us-west-2.compute.amazonaws.com
service_description disk_free
is_volatile 0
check_period 24x7
max_check_attempts 4
normal_check_interval 5
retry_check_interval 1
contact_groups sandbox
notification_options w,u,c,r
notification_interval 960
notification_period 24x7
check_command check_remote_disks
}
define service {
use generic-service
host_name ec2-xxx.us-west-2.compute.amazonaws.com, ec2-xxx.us-west-2.compute.amazonaws.com, ec2-xxx.us-west-2.compute.amazonaws.com
service_description LOAD
is_volatile 0
check_period 24x7
max_check_attempts 4
normal_check_interval 5
retry_check_interval 1
contact_groups sandbox
notification_options w,u,c,r
notification_interval 960
notification_period 24x7
check_command check_remote_load
}
define service {
use generic-service
host_name ec2-xxx.us-west-2.compute.amazonaws.com, ec2-xxx.us-west-2.compute.amazonaws.com, ec2-xxx.us-west-2.compute.amazonaws.com
service_description SSH
is_volatile 0
check_period 24x7
max_check_attempts 4
normal_check_interval 5
retry_check_interval 1
contact_groups sandbox
notification_options w,u,c,r
notification_interval 960
notification_period 24x7
check_command check_ssh
}
Detailed instructions on installing and configuring Nagios can be found in the Nagios Core documentation.
Cloud hosting providers have different default partition sizes and configurations. The default root volumes on cloud-hosted VM instances are often fairly small and usually ephemeral. Data service and filesystem nodes should always be backed by some kind of persistent storage, with enough free filesystem space to accommodate the projected use of the services.
The Persistent storage section in the EC2 AMI guide provides an example of how to relocate services data to an EBS volume. The general case is covered below.
To move database services, application droplets, and application containers to larger partitions, stop Stackato processes with kato stop, then relocate each data type with the kato relocate command. For example:
$ kato stop
...
$ kato relocate services /mnt/ebs/services
...
$ kato relocate droplets /mnt/ebs/droplets
...
$ kato relocate containers /mnt/containers
...
Note
For performance reasons, Stackato containers should not be relocated to EBS volumes.
Stackato filesystem quotas cannot be enforced by the system unless the relocated data is on a partition that supports Linux quotas. Quota support may need to be specified explicitly when running the mount command; the kato relocate command will warn if this is necessary.
For the example above, the mount step might look like this:
$ sudo mount -o remount,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0 /mnt/containers
$ sudo quotacheck -vgumb /mnt/containers
$ sudo quotaon -v /mnt/containers
To ensure the quotas are preserved after reboot, edit /etc/init.d/setup_stackato_lxc to include mount commands for each partition. The example above would require a block such as this:
# enable quotas for Stackato containers
if [[ -f "/mnt/containers/aquota.user" ]]; then
mount -o remount,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0 /mnt/containers
quotaon -v /mnt/containers
fi
When a database needs to be populated with data the first time an application runs, this can be done using a hook during the staging process. It takes two steps.
First, create a script file in the app's root directory (preload.pl in this example) that uses the same data source variables from STACKATO_SERVICES as the app itself. The script opens a connection to the database, creates tables, and inserts records as necessary, as in this Perl example:
use strict;
use warnings;
use DBI;
use DBD::mysql;
use JSON "decode_json";
my $services = decode_json($ENV{STACKATO_SERVICES});
my $credentials = $services->{customerdb};
my $dbh = DBI->connect("DBI:mysql:database=$credentials->{name};hostname=$credentials->{hostname};port=$credentials->{port};",
$credentials->{'user'}, $credentials->{'password'})
or die "Unable to connect: $DBI::errstr\n";
my $sql_init =
'CREATE TABLE customers (
id INT(11) AUTO_INCREMENT PRIMARY KEY,
customername TEXT,
created DATETIME
);
';
$dbh->do($sql_init);
$sql_init =
'INSERT INTO customers
(customername, created)
VALUES
("John Doe", now()),
("Sarah Smith", now());
';
$dbh->do($sql_init);
$dbh->disconnect;
Next, modify your stackato.yml file to use the post-staging hook, which executes a command to run the script:
name: customertracker
services:
    mysql: customerdb
hooks:
    post-staging: perl preload.pl
With those changes, the script will run after the staging process is complete but before the app starts.
To export a MySQL database, use the stackato run command to remotely execute the dbexport tool:
$ stackato run [application-name] dbexport service-name > dumpfile.sql
This will run a dbexport of the named data service remotely and direct the output to a local file. If run from a directory containing the stackato.yml file, the application name may be omitted.
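For example, using the hypothetical application and service names from earlier in this document:
$ stackato run customertracker dbexport customerdb > customerdb-dump.sql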
Note
The following method of database backup, using the tunnel command, is available for compatibility with Cloud Foundry. It tends to be slower than using stackato run dbexport as described above.
To back up a MySQL database, use the tunnel command to make a connection to the server and export the data using mysqldump.
Use the tunnel command to access the service (in this example a MySQL database named customerdb):
$ stackato tunnel customerdb
Password: ********
Getting tunnel url: OK, at https://tunnel-xxxxx.stackato-xxxx.local
Getting tunnel connection info: OK
Service connection info:
+----------+-----------------------------------+
| Key | Value |
+----------+-----------------------------------+
| username | uT9efVVFCk |
| password | pHFitpIU1z |
| name | d5eb2468f70ef4997b1514da1972 |
+----------+-----------------------------------+
1. none
2. mysql
3. mysqldump
Which client would you like to start?
Select option 3 (mysqldump). You will be prompted for a path where the dump will be saved.
See the tunnel command documentation for other ways of accessing a MySQL database. See Importing a MySQL database for details on importing a file created by mysqldump into an existing MySQL database service.
To import a MySQL database, use the stackato dbshell command:
$ stackato dbshell [application name] [service name] < dumpfile.sql
This command redirects the contents of a local database dump file to the appropriate database client running in the application instance (i.e. equivalent to stackato run dbshell ...). If run from a directory containing the stackato.yml file, the application and service names may be omitted.
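For example, using the same hypothetical application and service names as above:
$ stackato dbshell customertracker customerdb < dumpfile.sql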
Note
The following method of database import, using the tunnel command, is available for compatibility with Cloud Foundry. It tends to be slower than using stackato dbshell as described above.
To import data from a mysqldump into an existing MySQL database service, use the tunnel command:
$ stackato tunnel <servicename>
Password: ********
Getting tunnel url: OK, at https://tunnel-xxxxx.stackato-xxxx.local
Getting tunnel connection info: OK
Service connection info:
+----------+-----------------------------------+
| Key | Value |
+----------+-----------------------------------+
| username | uT9efVVFCk |
| password | pHFitpIU1z |
| name | d5eb2468f70ef4997b1514da1972 |
+----------+-----------------------------------+
1. none
2. mysql
3. mysqldump
Which client would you like to start?
Choose option 1 (none), which keeps the tunnel open for command-line access to the database. The tunneled MySQL service is exposed on port 10000 of localhost, so open a new terminal window to enter commands.
Then, import an SQL file with the following command:
$ mysql --protocol=TCP --host=localhost --port=10000 --user=<user> --password=<pass> <name> < mydatabase.sql
See the tunnel command documentation for other ways of accessing a MySQL database. See Backing up a MySQL database for details on how to create a mysqldump backup that can then be imported into another database service.
The Stackato client targets a single location with the command stackato target.
If you need to target two or more instances at the same time, use one of the following methods:
Use the --target <target> option. This sets the specified target for the current command only, and does not set it as the default:
$ stackato apps --target api.stackato-xxx1.local
Use two or more terminals to access multiple targets. Within each terminal, set the STACKATO_TARGET environment variable to the API endpoint URL you want to work with in that terminal. The client will use this URL, overriding any target set with the stackato target command:
$ export STACKATO_TARGET='api.stackato-xxx2.local'
This target is used until the variable is unset or the terminal is closed. To unset it:
$ unset STACKATO_TARGET
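Alternatively, the variable can be set for a single command only, which leaves the terminal's default target unchanged (this relies on standard shell behavior, not a client feature):
$ STACKATO_TARGET='api.stackato-xxx2.local' stackato apps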
For apps using Python and buildpacks, it is possible to cache assets required for staging. This speeds up the deployment of updates because the resources do not need to be downloaded and/or compiled each time.
To enable this, add a filesystem service named ${name}-cache to your stackato.yml file, where ${name} is your application name:
name: FOO
services:
    ${name}-cache: filesystem
This will create a filesystem service named FOO-cache. If this filesystem service exists during the staging process, resources are copied to it and reused when updates to the app are pushed. Using variable key substitution allows you to push multiple instances of the same app, each with its own bound services.
Warning
Key substitution for YAML key names is only available in client 1.4.3 and later. See the min_version section for how to enforce a minimum client version in your stackato.yml file.
You can also manually set the caching service name in the stackato.yml file, as in the sketch below. Any application created using this stackato.yml file will then share the same FOO-cache filesystem, which is NOT recommended.
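For example, a stackato.yml that hard-codes the cache service name might look like this (a hypothetical sketch; the application name BAR is a placeholder):
name: BAR
services:
    FOO-cache: filesystem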
Note
To use this feature, the size of the application must not exceed the maximum size of the filesystem service. The default filesystem size is 100MB, but this can be expanded. See Adjust the Default Size of File System.