Troubleshooting Cloud Foundry
Page last updated: May 29, 2015
This guide provides help with diagnosing and resolving issues encountered when installing and running Cloud Foundry.
Troubleshooting Issues Installing Cloud Foundry
An installation or update can fail for reasons that have nothing to do with the software that you are installing. If an installation or update fails once, start over and try again. If it fails a second time, use the following information to troubleshoot the issue.
Timeouts in “creating bound missing vms” phase
When deploying Cloud Foundry with BOSH, the “creating bound missing vms” phase occurs after package compilation. A process in the “creating bound missing vms” phase can time out if a BOSH Agent fails to start correctly or if the Agent cannot connect to the NATS message bus.
```
. . .
Started preparing package compilation > Finding packages to compile. Done (00:00:00)

Started creating bound missing vms
Started creating bound missing vms > api_worker_z1/0. Failed: Timed out pinging to 1da06ba3de2f after 600 seconds (00:12:45)

Error 450002: Timed out pinging to 1da06ba3de2f after 600 seconds
```
Perform the following steps to determine the cause of the time out:
- Use your IaaS console to make sure the timed-out VM is booting correctly
- Check the Agent log on the VM that timed out for a “handshake” connection between the BOSH Director and the Agent
- Use the Netcat networking utility to test for routing issues to the NATS IP and port from the VM that timed out
Use your IaaS console to make sure the timed-out VM is booting correctly
For details on how to use your IaaS console to make sure the timed-out VM is booting correctly, see your IaaS documentation.
Check the Agent log on the VM that timed out for a “handshake” connection between the BOSH Director and the Agent
Use your IaaS virtualization console to open a terminal window on the VM that timed out and log in as root.
Open the `/var/vcap/bosh/log/current` log file in a text editor. Search the log file for a “handshake” between the BOSH Director and the BOSH Agent. This connection is represented in the log as a `ping` and a `pong`:
```
. . .
2013-10-03_14:35:48.58456 #[608] INFO: Message: {"method"=>"ping", "arguments"=>[], "reply_to"=>"director.b668-1660944090e4"}
2013-10-03_14:35:48.60182 #[608] INFO: reply_to:director.b668-1660944090e4: payload: {:value=>"pong"}
```
If the handshake does not complete, the Agent cannot communicate with the Director.
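To check for the handshake without reading the whole file, you can search the Agent log directly; a minimal sketch, assuming the log format shown above:
```
# Look for the Director's ping and the Agent's pong reply in the Agent log.
grep -E '"method"=>"ping"|:value=>"pong"' /var/vcap/bosh/log/current
```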
Use Netcat to test for routing issues to the NATS IP
Use your IaaS virtualization console to open a terminal window on the VM that timed out and log in as root.
Open the `/var/vcap/bosh/log/current` log file in a text editor. Search the beginning of the log file for a line labeled `INFO: loaded new infrastructure settings`. This line contains a JSON blob of key/value pairs representing the expected infrastructure for the BOSH Agent.
In this line, locate the IP address and port following `nats://nats:nats@`.
```
. . .
2013-10-03_14:35:21.83222 #[608] INFO: loaded new infrastructure settings: {"vm"=>{"name"=>"vm-4d80ede4-b0a5", "id"=>"vm-360"}, . . . "mbus"=>"nats://nats:nats@192.168.86.17:4222", . . . }
```
Run the Netcat command `nc -v IP-ADDRESS PORT` to determine whether it is possible to establish a connection between the NATS message bus and this VM.
```
$ nc -v 192.168.86.17 4222
Connection to 192.168.86.17 4222 port [tcp/*] succeeded!
```
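To avoid copying the address by hand, you can extract the mbus URL from the Agent settings line and then test it; a sketch, assuming the settings format shown above (the IP address and port are the example values from that output):
```
# Pull the NATS URL out of the infrastructure settings line, then test
# the IP address and port it contains with Netcat.
grep -o 'nats://[^" ]*' /var/vcap/bosh/log/current | head -n 1
nc -v 192.168.86.17 4222
```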
Out of Disk Space Error
If, during installation, log files fill all available disk space on a computer, the operating system reports an “Out of Disk Space” error.
To resolve this issue, delete these log files from the `tmp` directory of the affected computer, either manually or by rebooting.
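For example, a minimal sketch for clearing the files manually, assuming the logs have accumulated under /tmp (adjust the path and pattern to match the files actually consuming space):
```
# Remove log files that have filled /tmp, then confirm space was reclaimed.
sudo find /tmp -type f -name '*.log' -delete
df -h /tmp
```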
Troubleshooting Issues Running Cloud Foundry
Use the BOSH CLI for Troubleshooting
Run `bosh target DIRECTOR-IP-ADDRESS` and provide your credentials to log into the BOSH Director.
```
$ bosh target 192.168.86.10
Target set to 'bosh'
Your username: admin
Enter password: *****
Logged in as 'admin'
```
Use the following BOSH commands to troubleshoot your deployment:
- VMS: Lists all VMs in a deployment
- Cloudcheck: Cloud consistency check and interactive repair
- SSH: Start an interactive session or execute commands on a VM
BOSH VMS
`bosh vms` provides an overview of the virtual machines BOSH is managing as part of the current deployment.
```
$ bosh vms
+-----------------------------------+---------+---------------------------------+---------------+
| Job/index                         | State   | Resource Pool                   | IPs           |
+-----------------------------------+---------+---------------------------------+---------------+
| unknown/unknown                   | running | push-apps-manager               | 192.168.86.13 |
| unknown/unknown                   | running | smoke-tests                     | 192.168.86.14 |
| cloud_controller/0                | running | cloud_controller                | 192.168.86.23 |
| collector/0                       | running | collector                       | 192.168.86.25 |
| consoledb/0                       | running | consoledb                       | 192.168.86.29 |
| dea/0                             | running | dea                             | 192.168.86.47 |
| health_manager/0                  | running | health_manager                  | 192.168.86.20 |
| loggregator/0                     | running | loggregator                     | 192.168.86.31 |
| loggregator_trafficcontroller/0   | running | loggregator_trafficcontroller   | 192.168.86.32 |
| nats/0                            | running | nats                            | 192.168.86.19 |
| nfs_server/0                      | running | nfs_server                      | 192.168.86.21 |
| router/0                          | running | router                          | 192.168.86.16 |
| saml_login/0                      | running | saml_login                      | 192.168.86.28 |
| syslog/0                          | running | syslog                          | 192.168.86.24 |
| uaa/0                             | running | uaa                             | 192.168.86.27 |
| uaadb/0                           | running | uaadb                           | 192.168.86.26 |
+-----------------------------------+---------+---------------------------------+---------------+
```
`bosh vms` may show a VM in an unknown state. Run `bosh cloudcheck` on VMs in an unknown state to have BOSH attempt to diagnose the problem.
You can also use `bosh vms` to identify VMs in your deployment, then use `bosh ssh` to SSH into an identified VM for further troubleshooting.
`bosh vms` supports the following arguments:
- `--details`: Overview also includes Cloud ID, Agent ID, and whether or not the BOSH Resurrector has been enabled for each VM
- `--vitals`: Overview also includes load, CPU, memory usage, swap usage, system disk usage, ephemeral disk usage, and persistent disk usage for each VM
- `--dns`: Overview also includes the DNS A record for each VM
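For example, to include the load and usage vitals for each VM in the overview:
```
$ bosh vms --vitals
```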
BOSH Cloudcheck
`bosh cloudcheck` attempts to detect differences between the VM state database that the BOSH Director maintains and the actual state of the VMs.
For each difference detected, `bosh cloudcheck` offers repair options:
- Reboot VM: Instructs BOSH to reboot a VM. Rebooting can resolve many transient errors.
- Ignore problem: Instructs `bosh cloudcheck` to do nothing. You might want to instruct `bosh cloudcheck` to ignore a problem in order to run `bosh ssh` and attempt troubleshooting directly on the machine.
- Reassociate VM with corresponding instance: Updates the BOSH Director state database. Use this option if you believe that the BOSH Director state database is in error and that a VM is correctly associated with a job.
- Recreate VM using last known apply spec: Instructs BOSH to destroy a VM and recreate it from the deployment manifest the installer provides. Use this option if a VM is corrupted.
- Delete VM reference: Instructs BOSH to delete a VM reference in the Director state database. If a VM reference exists in the state database, BOSH expects to find an agent running on the VM. Select this option only if you know this reference is in error. Once you delete the VM reference, BOSH can no longer control the VM.
Example Scenarios
Unresponsive Agent
```
$ bosh cloudcheck
ccdb/0 (vm-3e37133c-bc33-450e-98b1-f86d5b63502a) is not responding:
  - Ignore problem
  - Reboot VM
  - Recreate VM using last known apply spec
  - Delete VM reference (DANGEROUS!)
```
Missing VM
```
$ bosh cloudcheck
VM with cloud ID `vm-3e37133c-bc33-450e-98b1-f86d5b63502a' missing:
  - Ignore problem
  - Recreate VM using last known apply spec
  - Delete VM reference (DANGEROUS!)
```
Unbound Instance VM
```
$ bosh cloudcheck
VM `vm-3e37133c-bc33-450e-98b1-f86d5b63502a' reports itself as `ccdb/0' but does not have a bound instance:
  - Ignore problem
  - Delete VM (unless it has persistent disk)
  - Reassociate VM with corresponding instance
```
Out of Sync VM
```
$ bosh cloudcheck
VM `vm-3e37133c-bc33-450e-98b1-f86d5b63502a' is out of sync: expected `cf-d7293430724a2c421061: ccdb/0', got `cf-d7293430724a2c421061: nats/0':
  - Ignore problem
  - Delete VM (unless it has persistent disk)
```
BOSH SSH
Use `bosh ssh` to open secure shells into the VMs in your deployment.
To use `bosh ssh`:
- Run `ssh-keygen -t rsa` to provide BOSH with the correct public key.
- Accept the defaults.
- Run `bosh ssh`.
- Select a VM to access.
- Create a password for the temporary user the `bosh ssh` command creates. Use this password if you need sudo access in this session.
```
$ bosh ssh
1. ha_proxy/0
2. nats/0
3. etcd_and_metrics/0
4. etcd_and_metrics/1
5. etcd_and_metrics/2
6. health_manager/0
7. nfs_server/0
8. ccdb/0
9. cloud_controller/0
10. clock_global/0
11. cloud_controller_worker/0
12. router/0
13. uaadb/0
14. uaa/0
15. login/0
16. consoledb/0
17. dea/0
18. loggregator/0
19. loggregator_trafficcontroller/0
20. push-apps-manager/0
21. smoke-tests/0
Choose an instance: 17
Enter password (use it to sudo on remote host): *******
Target deployment `cf_services-2c3c918a135ab5f91ee1'

Setting up ssh artifacts
Starting interactive shell on job loggregator_trafficcontroller/0
```
Viewing BOSH Logs
You can access BOSH logs by two methods:
- Using the `bosh ssh` command to access the log location
- Using the `bosh logs` command to output the logs to standard output or to a file
Using the BOSH SSH Command
Use `bosh ssh` to open secure shells into the VMs in your deployment, then access the logs on the VM.
To use `bosh ssh`:
- Run `ssh-keygen -t rsa` to provide BOSH with the correct public key.
- Accept the defaults.
- Run `bosh ssh`.
- Select a VM to access.
- Review the `/var/vcap/bosh/log/current` log file.
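For example, once the secure shell is open, you can follow the Agent log as new entries arrive:
```
# Follow the BOSH Agent log in real time.
tail -f /var/vcap/bosh/log/current
```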
Using the BOSH Logs Command
Use `bosh logs` to output BOSH logs to standard output or to a file.
To use `bosh logs`:
- Run `bosh vms` to identify VMs in your deployment by job name and index.
- Run `bosh logs JOB-NAME INDEX` to view the logs from the identified VM.
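For example, to fetch logs from the DEA shown in the earlier `bosh vms` output (the job name and index are illustrative; substitute your own):
```
$ bosh vms
$ bosh logs dea 0
```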
Logging into a Non-Responsive BOSH VM
A VM under heavy system load can stop responding to some commands but still
function in a limited way.
If the VM does not respond to `bosh ssh`, use the following steps to open a secure shell in a more direct manner:
- Run `bosh vms` and note the IP address of the non-responsive VM.
- Run `ssh -t vcap@IP-ADDRESS 'sh'`, where IP-ADDRESS is the IP address of the non-responsive VM.
```
$ bosh vms
+---------------+---------+----------------+---------------+
| Job/index     | State   | Resource Pool  | IPs           |
+---------------+---------+----------------+---------------+
| mysql_node/0  | unknown | mysql_node     | 192.168.86.53 |
+---------------+---------+----------------+---------------+

$ ssh -t [email protected] 'sh'
```
Terminating a BOSH SSH Session
Use `~.`, entered at the beginning of a new line, to terminate a `bosh ssh` or `ssh` session.
To terminate a `bosh ssh` or `ssh` session initiated from the jumpbox or another server, use `~~.`, entered at the beginning of a new line.
The outermost secure shell session consumes the second `~` and passes the remaining `~.` command to the inner `ssh` session.
Debugging a Failing Job
- Run `bosh vms` to determine which job VMs in your deployment are failing. Note the job name and index of the failing VM.
- Run `bosh ssh JOB-NAME/INDEX` to open a secure shell into the failing VM.
- Run `sudo su -` to enter the root environment with root privileges.
- Run `monit summary` to determine which processes are not running.
- Review the log files found in `/var/vcap/sys/log/` to determine the root cause of the process failures. Some of these logs are formatted by steno with timestamps instead of human-readable dates. You can use `steno-prettify` to make the logs more human-readable.
- Use `monit restart all` or `monit restart PROCESS` to restart the processes.
Execute the following commands to set up `steno-prettify` on the Cloud Controller:
```
# Set up a steno-prettify alias using the Cloud Controller's bundled Ruby and gems.
export CC_JOB_DIR=/var/vcap/jobs/cloud_controller_ng
source $CC_JOB_DIR/bin/ruby_version.sh

CC_PACKAGE_DIR=/var/vcap/packages/cloud_controller_ng
export BUNDLE_GEMFILE=$CC_PACKAGE_DIR/cloud_controller_ng/Gemfile
export HOME=/home/vcap  # rake needs it to be set to run tasks

if [ -f $BUNDLE_GEMFILE ]; then
  alias steno-prettify="bundle exec steno-prettify"
  echo "ready to use steno-prettify alias, try steno-prettify on one of the following files:"
  find /var/vcap/sys/log/ -name "*.log" | egrep -v "err|out|ctl" | xargs ls -al
else
  echo "could not find Gemfile at ${BUNDLE_GEMFILE}"
fi
```
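Once the alias is set up, you can prettify any of the listed logs; a sketch with an illustrative file name (use one of the files printed by the find command above):
```
steno-prettify /var/vcap/sys/log/cloud_controller_ng/cloud_controller_ng.log
```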
Interactive Cloud Controller Shell
The Cloud Controller job embeds a pry shell; see http://pryrepl.org/ for background. This shell lets you interact with Cloud Controller (ccng) classes, for example to script operations or to access the CCDB using model classes:
```
$ cd /var/vcap/jobs/cloud_controller_ng
$ bin/console
[...]
```
Forcing a VM Recreate
If BOSH or an operator identifies a VM as corrupted, BOSH can recreate the VM. This recreation function can fail if a drain script on the VM is broken or times out. To resolve this issue:
SSH into the VM and use the `sv stop agent` command to kill the BOSH Agent.
```
$ bosh ssh myVM
$ sv stop agent
```
Let the BOSH Health Monitor automatically restart the VM. To follow the status of this process:
- Run `bosh tasks --no-filter` to determine the task ID of the “scan and fix” task.
- Run `bosh task TASK-ID` with the task ID of the “scan and fix” task.
```
$ bosh tasks --no-filter
+----+---------+--------------------+-------+--------------+--------+
| #  | State   | Timestamp          | User  | Description  | Result |
+----+---------+--------------------+-------+--------------+--------+
| 83 | running | 01-21 12:07:34 UTC | admin | scan and fix | active |
+----+---------+--------------------+-------+--------------+--------+

$ bosh task 83
Director task 83
  Started scanning 1 vms
. . .
```
Scaling Droplet Execution Agents
Droplet Execution Agents (DEAs) stage and run applications. Follow the steps below to increase the resource pool for DEAs and the number of DEAs available in your deployment.
- Run `bosh download manifest YOUR-DEPLOYMENT-NAME prod-current.yml` to download a copy of your existing deployment manifest.
- Run `bosh deployment prod-current.yml` to instruct BOSH to reference the downloaded deployment manifest.
- Edit `prod-current.yml`, increasing the resource pool for DEAs and the number of DEAs.
- Run `bosh deploy` to update your deployment.
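For reference, the same procedure as a single shell session; a sketch, assuming a deployment named my-cf-deployment and an editor of your choice for the manifest change:
```
# Download the current manifest and point BOSH at it.
bosh download manifest my-cf-deployment prod-current.yml
bosh deployment prod-current.yml

# Edit prod-current.yml: increase the size of the DEA resource pool and the
# instance count of the DEA job, then redeploy.
vi prod-current.yml
bosh deploy
```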
Recovering from HM9000 Failure
The Cloud Foundry Health Manager, HM9000, reconciles the expected states of applications and VMs in a deployment with their actual states. To do this, the HM9000 uses state information kept in etcd, a high-availability data store distributed across multiple nodes.
If etcd enters a bad state and becomes unable to write data, the HM9000 cannot operate correctly. To resolve this issue, delete the contents of the etcd data store. Doing so forces Cloud Foundry to rebuild the etcd data store from current data and allows the HM9000 to recover.
To delete the contents of the etcd data store:
- Run `bosh vms` to identify the etcd nodes in your deployment.
- Run `bosh ssh` to open a secure shell into each etcd node.
- Run `monit stop etcd` on each etcd node.
- Delete or move the etcd storage directory, `/var/vcap/store`, on each etcd node.
- Run `monit start etcd` on each etcd node.
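On each etcd node, the stop, move, and start sequence might look like the following sketch (the backup directory name is illustrative):
```
# Run on each etcd node after connecting with bosh ssh and becoming root (sudo su -).
monit stop etcd
mv /var/vcap/store /var/vcap/store.bak
monit start etcd
```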