CloudStack is an open source software platform that pools computing resources to build public, private, and hybrid Infrastructure as a Service (IaaS) clouds. CloudStack manages the network, storage, and compute nodes that make up a cloud infrastructure. Use CloudStack to deploy, manage, and configure cloud computing environments.
Typical users are service providers and enterprises. With CloudStack, you can:
Set up an on-demand, elastic cloud computing service. Service providers can sell self-service virtual machine instances, storage volumes, and networking configurations over the Internet.
Set up an on-premise private cloud for use by employees. Rather than managing virtual machines in the same way as physical machines, with CloudStack an enterprise can offer self-service virtual machines to users without involving IT departments.
1.2. What Can CloudStack Do?
Multiple Hypervisor Support
CloudStack works with a variety of hypervisors, and a single cloud deployment can contain multiple hypervisor implementations. The current release of CloudStack supports pre-packaged enterprise solutions like Citrix XenServer and VMware vSphere, as well as KVM or Xen running on Ubuntu or CentOS.
Massively Scalable Infrastructure Management
CloudStack can manage tens of thousands of servers installed in multiple geographically distributed datacenters. The centralized management server scales linearly, eliminating the need for intermediate cluster-level management servers. No single component failure can cause a cloud-wide outage. Periodic maintenance of the management server can be performed without affecting the functioning of virtual machines running in the cloud.
Automatic Configuration Management
CloudStack automatically configures each guest virtual machine’s networking and storage settings.
CloudStack internally manages a pool of virtual appliances to support the cloud itself. These appliances offer services such as firewalling, routing, DHCP, VPN access, console proxy, storage access, and storage replication. The extensive use of virtual appliances simplifies the installation, configuration, and ongoing management of a cloud deployment.
Graphical User Interface
CloudStack offers an administrator's Web interface, used for provisioning and managing the cloud, as well as an end-user's Web interface, used for running VMs and managing VM templates. The UI can be customized to reflect the desired service provider or enterprise look and feel.
API and Extensibility
CloudStack provides an API that gives programmatic access to all the management features available in the UI. The API is maintained and documented. This API enables the creation of command line tools and new user interfaces to suit particular needs. See the Developer's Guide and the API Reference, available as the Apache CloudStack Guides and the Apache CloudStack API Reference respectively.
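As a sketch of what programmatic access looks like, the following builds a signed request URL for the API's documented signing scheme (parameters sorted alphabetically, lowercased, HMAC-SHA1 with the secret key, Base64-encoded). The server address and both keys are placeholders, and URL-encoding of the final signature is omitted for brevity; see the API Reference for the authoritative details.

```shell
#!/bin/sh
# Placeholders: substitute your management server URL and your own key pair.
API_URL="http://localhost:8080/client/api"
API_KEY="your-api-key"
SECRET_KEY="your-secret-key"

# Parameters must be sorted alphabetically before signing.
PARAMS="apikey=${API_KEY}&command=listVirtualMachines&response=json"

# Lowercase the parameter string, HMAC-SHA1 it with the secret key, Base64 it.
SIGNATURE=$(printf '%s' "$PARAMS" | tr '[:upper:]' '[:lower:]' \
  | openssl dgst -sha1 -hmac "$SECRET_KEY" -binary | openssl base64)

REQUEST="${API_URL}?${PARAMS}&signature=${SIGNATURE}"
echo "$REQUEST"
```

The resulting URL can then be fetched with any HTTP client; a wrong signature is rejected by the Management Server with an authentication error.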
High Availability
CloudStack has a number of features to increase the availability of the system. The Management Server itself may be deployed in a multi-node installation where the servers are load balanced. MySQL may be configured to use replication to provide for a manual failover in the event of database loss. For the hosts, CloudStack supports NIC bonding and the use of separate networks for storage as well as iSCSI Multipath.
1.3. Deployment Architecture Overview
A CloudStack installation consists of two parts: the Management Server and the cloud infrastructure that it manages. When you set up and manage a CloudStack cloud, you provision resources such as hosts, storage devices, and IP addresses into the Management Server, and the Management Server manages those resources.
The minimum production installation consists of one machine running the CloudStack Management Server and another machine to act as the cloud infrastructure (in this case, a very simple infrastructure consisting of one host running hypervisor software). In its smallest deployment, a single machine can act as both the Management Server and the hypervisor host (using the KVM hypervisor).
A more full-featured installation consists of a highly-available multi-node Management Server installation and up to tens of thousands of hosts using any of several advanced networking setups. For information about deployment options, see the "Choosing a Deployment Architecture" section of the CloudStack Installation Guide.
1.3.1. Management Server Overview
The Management Server is the CloudStack software that manages cloud resources. By interacting with the Management Server through its UI or API, you can configure and manage your cloud infrastructure.
The Management Server runs on a dedicated server or VM. It controls allocation of virtual machines to hosts and assigns storage and IP addresses to the virtual machine instances. The Management Server runs in a Tomcat container and requires a MySQL database for persistence.
The machine must meet the system requirements described in System Requirements.
The Management Server:
Provides the web user interface for the administrator and a reference user interface for end users.
Provides the APIs for CloudStack.
Manages the assignment of guest VMs to particular hosts.
Manages the assignment of public and private IP addresses to particular accounts.
Manages the allocation of storage to guests as virtual disks.
Manages snapshots, templates, and ISO images, possibly replicating them across data centers.
Provides a single point of configuration for the cloud.
1.3.2. Cloud Infrastructure Overview
The Management Server manages one or more zones (typically, datacenters) containing host computers where guest virtual machines will run. The cloud infrastructure is organized as follows:
Zone: Typically, a zone is equivalent to a single datacenter. A zone consists of one or more pods and secondary storage.
Pod: A pod is usually one rack of hardware that includes a layer-2 switch and one or more clusters.
Cluster: A cluster consists of one or more hosts and primary storage.
Host: A single compute node within a cluster. The hosts are where the actual cloud services run in the form of guest virtual machines.
Primary storage is associated with a cluster, and it stores the disk volumes for all the VMs running on hosts in that cluster.
Secondary storage is associated with a zone, and it stores templates, ISO images, and disk volume snapshots.
More Information
For more information, see documentation on cloud infrastructure concepts.
1.3.3. Networking Overview
CloudStack offers two networking scenarios:
Basic. For AWS-style networking. Provides a single network where guest isolation can be provided through layer-3 means such as security groups (IP address source filtering).
Advanced. For more sophisticated network topologies. This network model provides the most flexibility in defining guest networks.
For more details, see Network Setup.
Chapter 2. Cloud Infrastructure Concepts
To increase reliability of the cloud, you can optionally group resources into multiple geographic regions. A region is the largest available organizational unit within a CloudStack deployment. A region is made up of several availability zones, where each zone is roughly equivalent to a datacenter. Each region is controlled by its own cluster of Management Servers, running in one of the zones. The zones in a region are typically located in close geographical proximity. Regions are a useful technique for providing fault tolerance and disaster recovery.
By grouping zones into regions, the cloud can achieve higher availability and scalability. User accounts can span regions, so that users can deploy VMs in multiple, widely-dispersed regions. Even if one of the regions becomes unavailable, the services are still available to the end-user through VMs deployed in another region. And by grouping communities of zones under their own nearby Management Servers, the latency of communications within the cloud is reduced compared to managing widely-dispersed zones from a single central Management Server.
Usage records can also be consolidated and tracked at the region level, creating reports or invoices for each geographic region.
Regions are visible to the end user. When a user starts a guest VM on a particular CloudStack Management Server, the user is implicitly selecting that region for their guest. Users might also be required to copy their private templates to additional regions to enable creation of guest VMs using their templates in those regions.
A zone is the second largest organizational unit within a CloudStack deployment. A zone typically corresponds to a single datacenter, although it is permissible to have multiple zones in a datacenter. The benefit of organizing infrastructure into zones is to provide physical isolation and redundancy. For example, each zone can have its own power supply and network uplink, and the zones can be widely separated geographically (though this is not required).
A zone consists of:
One or more pods. Each pod contains one or more clusters of hosts and one or more primary storage servers.
A zone may contain one or more primary storage servers, which are shared by all the pods in the zone.
Secondary storage, which is shared by all the pods in the zone.
Zones are visible to the end user. When a user starts a guest VM, the user must select a zone for their guest. Users might also be required to copy their private templates to additional zones to enable creation of guest VMs using their templates in those zones.
Zones can be public or private. Public zones are visible to all users. This means that any user may create a guest in that zone. Private zones are reserved for a specific domain. Only users in that domain or its subdomains may create guests in that zone.
Hosts in the same zone are directly accessible to each other without having to go through a firewall. Hosts in different zones can access each other through statically configured VPN tunnels.
For each zone, the administrator must decide the following.
How many pods to place in each zone.
How many clusters to place in each pod.
How many hosts to place in each cluster.
(Optional) How many primary storage servers to place in each zone and total capacity for these storage servers.
How many primary storage servers to place in each cluster and total capacity for these storage servers.
How much secondary storage to deploy in a zone.
When you add a new zone using the CloudStack UI, you will be prompted to configure the zone’s physical network and add the first pod, cluster, host, primary storage, and secondary storage.
In order to support zone-wide functions for VMware, CloudStack is aware of VMware Datacenters and can map each Datacenter to a CloudStack zone. To enable features like storage live migration and zone-wide primary storage for VMware hosts, CloudStack has to make sure that a zone contains only a single VMware Datacenter. Therefore, when you are creating a new CloudStack zone, you can select a VMware Datacenter for the zone. If you are provisioning multiple VMware Datacenters, each one will be set up as a single zone in CloudStack.
If you are upgrading from a previous CloudStack version, and your existing deployment contains a zone with clusters from multiple VMware Datacenters, that zone will not be forcibly migrated to the new model. It will continue to function as before. However, any new zone-wide operations, such as zone-wide primary storage and live storage migration, will not be available in that zone.
A pod often represents a single rack. Hosts in the same pod are in the same subnet. A pod is the third-largest organizational unit within a CloudStack deployment. Pods are contained within zones. Each zone can contain one or more pods. A pod consists of one or more clusters of hosts and one or more primary storage servers. Pods are not visible to the end user.
A cluster provides a way to group hosts. To be precise, a cluster is a XenServer server pool, a set of KVM servers, or a VMware cluster preconfigured in vCenter. The hosts in a cluster all have identical hardware, run the same hypervisor, are on the same subnet, and access the same shared primary storage. Virtual machine instances (VMs) can be live-migrated from one host to another within the same cluster, without interrupting service to the user.
A cluster is the fourth-largest organizational unit within a CloudStack deployment. Clusters are contained within pods, and pods are contained within zones. The maximum size of a cluster is limited by the underlying hypervisor, although CloudStack recommends a smaller size in most cases; see Best Practices.
A cluster consists of one or more hosts and one or more primary storage servers.
CloudStack allows multiple clusters in a cloud deployment.
Even when local storage is used exclusively, clusters are still required organizationally, even if there is just one host per cluster.
When VMware is used, every VMware cluster is managed by a vCenter server. An Administrator must register the vCenter server with CloudStack. There may be multiple vCenter servers per zone. Each vCenter server may manage multiple VMware clusters.
A host is a single computer. Hosts provide the computing resources that run the guest virtual machines. Each host has hypervisor software installed on it to manage the guest VMs. For example, a Linux KVM-enabled server, a Citrix XenServer server, and an ESXi server are hosts.
The host is the smallest organizational unit within a CloudStack deployment. Hosts are contained within clusters, clusters are contained within pods, and pods are contained within zones.
Hosts in a CloudStack deployment:
Provide the CPU, memory, storage, and networking resources needed to host the virtual machines
Interconnect using a high bandwidth TCP/IP network and connect to the Internet
May reside in multiple data centers across different geographic locations
May have different capacities (different CPU speeds, different amounts of RAM, etc.), although the hosts within a cluster must all be homogeneous
Additional hosts can be added at any time to provide more capacity for guest VMs.
CloudStack automatically detects the amount of CPU and memory resources provided by the hosts.
Hosts are not visible to the end user. An end user cannot determine which host their guest has been assigned to.
For a host to function in CloudStack, you must do the following:
Install hypervisor software on the host
Assign an IP address to the host
Ensure the host is connected to the CloudStack Management Server
2.6. About Primary Storage
Primary storage is associated with a cluster and/or a zone. It stores the disk volumes for all of the VMs running on hosts in that cluster. You can add multiple primary storage servers to a cluster or a zone (at least one is required at the cluster level). Primary storage is typically located close to the hosts for increased performance. CloudStack manages the allocation of guest virtual disks to particular primary storage devices.
Primary storage uses the concept of a storage tag. A storage tag is a label used to identify the primary storage. Each primary storage can be associated with zero, one, or more storage tags. When a VM is spun up or a data disk is attached to a VM for the first time, these tags, if supplied, are used to determine which primary storage can support the VM or data disk (for example, when you need to guarantee a certain number of IOPS for a particular volume).
Primary storage can be either static or dynamic. Static primary storage is what CloudStack has traditionally supported. In this model, the administrator must present CloudStack with a certain amount of preallocated storage (for example, a volume from a SAN), and CloudStack can place many of its volumes on this storage. In the newer, dynamic model, the administrator can present CloudStack with the storage system itself (for example, a SAN). CloudStack, working in concert with a plug-in developed for that storage system, can dynamically create volumes on the storage system. A valuable use for this ability is Quality of Service (QoS): if a volume created in CloudStack is backed by a dedicated volume on a SAN (that is, a one-to-one mapping between a SAN volume and a CloudStack volume) and the SAN provides QoS, then CloudStack can provide QoS.
CloudStack is designed to work with all standards-compliant iSCSI and NFS servers that are supported by the underlying hypervisor.
If you intend to use only local disk for your installation, you can skip to Add Secondary Storage.
2.7. About Secondary Storage
Secondary storage stores the following:
Templates — OS images that can be used to boot VMs and can include additional configuration information, such as installed applications
ISO images — disc images containing data or bootable media for operating systems
Disk volume snapshots — saved copies of VM data which can be used for data recovery or to create new templates
The items in secondary storage are available to all hosts in the scope of the secondary storage, which may be defined as per zone or per region.
To make items in secondary storage available to all hosts throughout the cloud, you can add object storage in addition to the zone-based NFS Secondary Staging Store. It is not necessary to copy templates and snapshots from one zone to another, as would be required when using zone NFS alone. Everything is available everywhere.
CloudStack provides plugins that enable both OpenStack Object Storage (Swift, swift.openstack.org) and Amazon Simple Storage Service (S3) object storage. When using one of these storage plugins, you configure Swift or S3 storage for the entire CloudStack deployment, then set up the NFS Secondary Staging Store for each zone. The NFS storage in each zone acts as a staging area through which all templates and other secondary storage data pass before being forwarded to Swift or S3. The backing object storage acts as a cloud-wide resource, making templates and other data available to any zone in the cloud.
2.8. About Physical Networks
Part of adding a zone is setting up the physical network. One or (in an advanced zone) more physical networks can be associated with each zone. The network corresponds to a NIC on the hypervisor host. Each physical network can carry one or more types of network traffic. The choices of traffic type for each network vary depending on whether you are creating a zone with basic networking or advanced networking.
A physical network is the actual network hardware and wiring in a zone. A zone can have multiple physical networks. An administrator can:
Add/Remove/Update physical networks in a zone
Configure VLANs on the physical network
Configure a name so the network can be recognized by hypervisors
Configure the service providers (firewalls, load balancers, etc.) available on a physical network
Configure the IP addresses trunked to a physical network
Specify what type of traffic is carried on the physical network, as well as other properties like network speed
2.8.1. Basic Zone Network Traffic Types
When basic networking is used, there can be only one physical network in the zone. That physical network carries the following traffic types:
Guest. When end users run VMs, they generate guest traffic. The guest VMs communicate with each other over a network that can be referred to as the guest network. Each pod in a basic zone is a broadcast domain, and therefore each pod has a different IP range for the guest network. The administrator must configure the IP range for each pod.
Management. When CloudStack's internal resources communicate with each other, they generate management traffic. This includes communication between hosts, system VMs (VMs used by CloudStack to perform various tasks in the cloud), and any other component that communicates directly with the CloudStack Management Server. You must configure the IP range for the system VMs to use.
We strongly recommend the use of separate NICs for management traffic and guest traffic.
Public. Public traffic is generated when VMs in the cloud access the Internet. Publicly accessible IPs must be allocated for this purpose. End users can use the CloudStack UI to acquire these IPs to implement NAT between their guest network and the public network, as described in Acquiring a New IP Address.
Storage. While labeled "storage," this traffic type refers specifically to secondary storage and does not affect traffic for primary storage. It includes traffic such as VM templates and snapshots, which is sent between the secondary storage VM and secondary storage servers. CloudStack uses a separate Network Interface Controller (NIC), named the storage NIC, for storage network traffic. Use of a storage NIC that always operates on a high bandwidth network allows fast template and snapshot copying. You must configure the IP range to use for the storage network.
In a basic network, configuring the physical network is fairly straightforward. In most cases, you only need to configure one guest network to carry traffic that is generated by guest VMs. If you use a NetScaler load balancer and enable its elastic IP and elastic load balancing (EIP and ELB) features, you must also configure a network to carry public traffic. CloudStack takes care of presenting the necessary network configuration steps to you in the UI when you add a new zone.
2.8.2. Basic Zone Guest IP Addresses
When basic networking is used, CloudStack will assign IP addresses in the CIDR of the pod to the guests in that pod. The administrator must add a Direct IP range on the pod for this purpose. These IPs are in the same VLAN as the hosts.
2.8.3. Advanced Zone Network Traffic Types
When advanced networking is used, there can be multiple physical networks in the zone. Each physical network can carry one or more traffic types, and you need to let CloudStack know which type of network traffic you want each network to carry. The traffic types in an advanced zone are:
Guest. When end users run VMs, they generate guest traffic. The guest VMs communicate with each other over a network that can be referred to as the guest network. This network can be isolated or shared. In an isolated guest network, the administrator needs to reserve VLAN ranges to provide isolation for each CloudStack account’s network (potentially a large number of VLANs). In a shared guest network, all guest VMs share a single network.
Management. When CloudStack’s internal resources communicate with each other, they generate management traffic. This includes communication between hosts, system VMs (VMs used by CloudStack to perform various tasks in the cloud), and any other component that communicates directly with the CloudStack Management Server. You must configure the IP range for the system VMs to use.
Public. Public traffic is generated when VMs in the cloud access the Internet. Publicly accessible IPs must be allocated for this purpose. End users can use the CloudStack UI to acquire these IPs to implement NAT between their guest network and the public network, as described in “Acquiring a New IP Address” in the Administration Guide.
Storage. While labeled "storage," this traffic type refers specifically to secondary storage and does not affect traffic for primary storage. It includes traffic such as VM templates and snapshots, which is sent between the secondary storage VM and secondary storage servers. CloudStack uses a separate Network Interface Controller (NIC), named the storage NIC, for storage network traffic. Use of a storage NIC that always operates on a high bandwidth network allows fast template and snapshot copying. You must configure the IP range to use for the storage network.
These traffic types can each be on a separate physical network, or they can be combined with certain restrictions. When you use the Add Zone wizard in the UI to create a new zone, you are guided into making only valid choices.
2.8.4. Advanced Zone Guest IP Addresses
When advanced networking is used, the administrator can create additional networks for use by the guests. These networks can span the zone and be available to all accounts, or they can be scoped to a single account, in which case only the named account may create guests that attach to these networks. The networks are defined by a VLAN ID, IP range, and gateway. The administrator may provision thousands of these networks if desired. Additionally, the administrator can reserve a part of the IP address space for non-CloudStack VMs and servers.
2.8.5. Advanced Zone Public IP Addresses
When advanced networking is used, the administrator can create additional networks for use by the guests. These networks can span the zone and be available to all accounts, or they can be scoped to a single account, in which case only the named account may create guests that attach to these networks. The networks are defined by a VLAN ID, IP range, and gateway. The administrator may provision thousands of these networks if desired.
2.8.6. System Reserved IP Addresses
In each zone, you need to configure a range of reserved IP addresses for the management network. This network carries communication between the CloudStack Management Server and various system VMs, such as Secondary Storage VMs, Console Proxy VMs, and DHCP.
The reserved IP addresses must be unique across the cloud. You cannot, for example, have a host in one zone which has the same private IP address as a host in another zone.
The hosts in a pod are assigned private IP addresses. These are typically RFC1918 addresses. The Console Proxy and Secondary Storage system VMs are also allocated private IP addresses in the CIDR of the pod that they are created in.
Make sure computing servers and Management Servers use IP addresses outside of the System Reserved IP range. For example, suppose the System Reserved IP range starts at 192.168.154.2 and ends at 192.168.154.7. CloudStack can use .2 to .7 for System VMs. This leaves the rest of the pod CIDR, from .8 to .254, for the Management Server and hypervisor hosts.
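The arithmetic for the example range above can be checked directly (plain shell arithmetic; the .2 and .7 boundaries are the ones from the example):

```shell
# System Reserved IP range from the example: 192.168.154.2 - 192.168.154.7
start=2
end=7
echo "system VM addresses:      $(( end - start + 1 ))"  # .2 through .7
echo "host/management addresses: $(( 254 - end ))"       # .8 through .254
```

This confirms 6 addresses reserved for System VMs, leaving 247 addresses of the pod CIDR for hypervisor hosts and the Management Server.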
In all zones:
Provide private IPs for the system in each pod and provision them in CloudStack.
For KVM and XenServer, the recommended number of private IPs per pod is one per host. If you expect a pod to grow, add enough private IPs now to accommodate the growth.
In a zone that uses advanced networking:
For zones with advanced networking, we recommend provisioning enough private IPs for your total number of customers, plus enough for the required CloudStack System VMs. Typically, about 10 additional IPs are required for the System VMs. For more information about System VMs, see the section on working with SystemVMs in the Administrator's Guide.
When advanced networking is being used, the number of private IP addresses available in each pod varies depending on which hypervisor is running on the nodes in that pod. Citrix XenServer and KVM use link-local addresses, which in theory provide more than 65,000 private IP addresses within the address block. As the pod grows over time, this should be more than enough for any reasonable number of hosts as well as IP addresses for guest virtual routers. VMware ESXi, by contrast, uses any administrator-specified subnetting scheme, and the typical administrator provides only 255 IPs per pod. Since these are shared by physical machines, the guest virtual router, and other entities, it is possible to run out of private IPs when scaling up a pod whose nodes are running ESXi.
To ensure adequate headroom to scale private IP space in an ESXi pod that uses advanced networking, use one or both of the following techniques:
Specify a larger CIDR block for the subnet. A subnet mask with a /20 suffix will provide more than 4,000 IP addresses.
Create multiple pods, each with its own subnet. For example, if you create 10 pods and each pod has 255 IPs, this will provide 2,550 IP addresses.
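The subnet sizing behind the first technique is simple to verify: the number of usable host addresses is 2^(32 − prefix) minus the network and broadcast addresses. A quick sanity check in plain shell:

```shell
# Usable host addresses in an IPv4 subnet: 2^(32 - prefix) minus the
# network and broadcast addresses.
usable_ips() {
  echo $(( (1 << (32 - $1)) - 2 ))
}

usable_ips 20   # a /20: 4094 usable addresses (the "more than 4,000" above)
usable_ips 24   # a /24: 254 usable addresses (a typical single-pod subnet)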
Chapter 3. Building from Source
The official CloudStack release is always in source code form. While you will likely be able to find "convenience binaries," the source is the canonical release. In this section, we'll cover acquiring the source release and building it so that you can deploy it using Maven or create Debian packages or RPMs.
Note that building and deploying directly from source is typically not the most efficient way to deploy an IaaS. However, we will cover that method as well as building RPMs or Debian packages for deploying CloudStack.
The instructions here are likely version-specific. That is, the method for building from source for the 4.0.x series is different from the 4.1.x series.
If you are working with an unreleased version of CloudStack, see the INSTALL.md file in the top-level directory of the release.
Prior releases are available via archive.apache.org as well. See the downloads page for more information on archived releases.
You'll notice several links under the 'Latest release' section: a link to a file ending in tar.bz2, as well as a PGP/GPG signature, an MD5 file, and a SHA512 file.
The tar.bz2 file contains the Bzip2-compressed tarball with the source code.
The .asc file is a detached cryptographic signature that can be used to help verify the authenticity of the release.
The .md5 file is an MD5 hash of the release to aid in verifying the validity of the release download.
The .sha file is a SHA512 hash of the release to aid in verifying the validity of the release download.
3.2. Verifying the downloaded release
There are a number of mechanisms to check the authenticity and validity of a downloaded release.
To enable you to verify the GPG signature, you will need to download the KEYS file.
You next need to import those keys, which you can do by running:
# gpg --import KEYS
The CloudStack project provides a detached GPG signature of the release. To check the signature, run the following command:
$ gpg --verify apache-cloudstack-4.0.0-incubating-src.tar.bz2.asc
If the signature is valid you will see a line of output that contains 'Good signature'.
In addition to the cryptographic signature, CloudStack has an MD5 checksum that you can use to verify the download matches the release. You can verify this hash by executing the following command:
$ gpg --print-md MD5 apache-cloudstack-4.0.0-incubating-src.tar.bz2 | diff - apache-cloudstack-4.0.0-incubating-src.tar.bz2.md5
If this command successfully completes, you should see no output. If there is any output, there is a difference between the hash you generated locally and the hash that has been pulled from the server.
In addition to the MD5 hash, the CloudStack project provides a SHA512 cryptographic hash to aid in assurance of the validity of the downloaded release. You can verify this hash by executing the following command:
$ gpg --print-md SHA512 apache-cloudstack-4.0.0-incubating-src.tar.bz2 | diff - apache-cloudstack-4.0.0-incubating-src.tar.bz2.sha
If this command successfully completes, you should see no output. If there is any output, there is a difference between the hash you generated locally and the hash that has been pulled from the server.
3.3. Prerequisites for building Apache CloudStack
There are a number of prerequisites needed to build CloudStack. This document assumes compilation on a Linux system that uses RPMs or DEBs for package management.
You will need, at a minimum, the following to compile CloudStack:
Maven (version 3)
Java (OpenJDK 1.6 or Java 7/OpenJDK 1.7)
Apache Web Services Common Utilities (ws-commons-util)
MySQL
MySQLdb (provides Python database API)
Tomcat 6 (not 6.0.35)
genisoimage
rpmbuild or dpkg-dev
3.5. Building DEB packages
In addition to the bootstrap dependencies, you'll also need to install several other dependencies. Note that we recommend using Maven 3, which is not currently available in Ubuntu 12.04.1 LTS. So, you'll also need to add a PPA repository that includes Maven 3. After running the add-apt-repository command, you will be prompted to continue and a GPG key will be added.
$ sudo apt-get update
$ sudo apt-get install python-software-properties
$ sudo add-apt-repository ppa:natecarlson/maven3
$ sudo apt-get update
$ sudo apt-get install ant debhelper openjdk-6-jdk tomcat6 libws-commons-util-java genisoimage python-mysqldb libcommons-codec-java libcommons-httpclient-java liblog4j1.2-java maven3
Now that the bootstrap prerequisites are installed, there are a number of build-time dependencies that need to be resolved. CloudStack uses Maven for dependency resolution. You can resolve the build-time dependencies for CloudStack by running:
$ mvn3 -P deps
Now that we have resolved the dependencies, we can move on to building CloudStack and packaging it into DEBs by issuing the following command.
$ dpkg-buildpackage -uc -us
This command will build the following Debian packages. You should have all of the following:
cloudstack-common-4.2.0.amd64.deb
cloudstack-management-4.2.0.amd64.deb
cloudstack-agent-4.2.0.amd64.deb
cloudstack-usage-4.2.0.amd64.deb
cloudstack-awsapi-4.2.0.amd64.deb
cloudstack-cli-4.2.0.amd64.deb
cloudstack-docs-4.2.0.amd64.deb
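If you script the build, a quick check that all seven packages were actually produced can save a failed repo upload later. This is a hedged sketch; the helper name is illustrative and it assumes the package names shown in the list above.

```shell
# Illustrative check: verify all seven CloudStack DEBs exist in a directory.
# Prints one line per missing package, or "all packages built" on success.
check_debs() {
  dir=$1
  missing=0
  for pkg in common management agent usage awsapi cli docs; do
    ls "$dir"/cloudstack-$pkg-*.deb >/dev/null 2>&1 \
      || { echo "missing: cloudstack-$pkg"; missing=1; }
  done
  if [ "$missing" -eq 0 ]; then echo "all packages built"; fi
}
```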
3.5.1. Setting up an APT repo
After you've created the packages, you'll want to copy them to a system where you can serve the packages over HTTP. You'll create a directory for the packages and then use dpkg-scanpackages to create Packages.gz, which holds information about the archive structure. Finally, you'll add the repository to your system(s) so you can install the packages using APT.
The first step is to make sure that you have the dpkg-dev package installed. This should have been installed when you pulled in the debhelper application previously, but if you're generating Packages.gz on a different system, be sure that it's installed there as well.
$ sudo apt-get install dpkg-dev
The next step is to copy the DEBs to the directory where they can be served over HTTP. We'll use /var/www/cloudstack/repo in the examples, but change the directory to whatever works for you.
sudo mkdir -p /var/www/cloudstack/repo/binary
sudo cp *.deb /var/www/cloudstack/repo/binary
cd /var/www/cloudstack/repo/binary
sudo sh -c 'dpkg-scanpackages . /dev/null | tee Packages | gzip -9 > Packages.gz'
You can safely ignore the warning about a missing override file.
Now you should have all of the DEB packages and Packages.gz in the binary directory and available over HTTP. (You may want to use wget or curl to test this before moving on to the next step.)
3.5.2. Configuring your machines to use the APT repository
Now that we have created the repository, you need to configure your machine to make use of the APT repository. You can do this by adding a repository file under /etc/apt/sources.list.d. Use your preferred editor to create /etc/apt/sources.list.d/cloudstack.list with this line:
deb http://server.url/cloudstack/repo binary ./
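When configuring many machines, the same line can be written from a script. This is a minimal sketch; the helper name is illustrative and server.url remains a placeholder for your own host.

```shell
# Illustrative helper: write the CloudStack APT source entry.
# $1 is the repo base URL (e.g. http://server.url), $2 the destination file.
write_cloudstack_list() {
  printf 'deb %s/cloudstack/repo binary ./\n' "$1" > "$2"
}
```

For example, run `write_cloudstack_list http://server.url /etc/apt/sources.list.d/cloudstack.list` with appropriate privileges.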
Now that you have the repository info in place, you'll want to run another update so that APT knows where to find the CloudStack packages.
$ sudo apt-get update
You can now move on to the instructions under Install on Ubuntu.
3.6. Building RPMs from Source
As with the DEB build, you will need to install a number of build-time dependencies first:
# yum groupinstall "Development Tools"
# yum install java-1.6.0-openjdk-devel.x86_64 genisoimage mysql mysql-server ws-commons-util MySQL-python tomcat6 createrepo
Next, you'll need to install build-time dependencies for CloudStack with Maven. We're using Maven 3, so you'll want to grab a Maven 3 tarball and uncompress it in /usr/local (or whatever location you prefer, adjusting PATH to match):
$ tar zxvf apache-maven-3.0.4-bin.tar.gz
$ export PATH=/usr/local/apache-maven-3.0.4/bin:$PATH
Maven also needs to know where Java is, and expects the JAVA_HOME environment variable to be set:
$ export JAVA_HOME=/usr/lib/jvm/jre-1.6.0-openjdk.x86_64/
Verify that Maven is installed correctly:
$ mvn --version
You probably want to ensure that your environment variables will survive a logout/reboot. Be sure to update ~/.bashrc with the PATH and JAVA_HOME variables.
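One way to persist those variables is to append them to a startup file. This is a hedged sketch; the helper name is illustrative, and the paths simply repeat the example locations above, which may differ on your system.

```shell
# Illustrative helper: append the Maven/Java build variables to a shell
# startup file (normally ~/.bashrc). Paths match the examples above.
persist_build_env() {
  cat >> "$1" <<'EOF'
export PATH=/usr/local/apache-maven-3.0.4/bin:$PATH
export JAVA_HOME=/usr/lib/jvm/jre-1.6.0-openjdk.x86_64/
EOF
}
```

For example, `persist_build_env ~/.bashrc` followed by a new login shell keeps both variables in effect.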
Building RPMs for CloudStack is fairly simple. Assuming you already have the source downloaded and have uncompressed the tarball into a local directory, you're going to be able to generate packages in just a few minutes.
If you've created packages for CloudStack previously, you should be aware that the process has changed considerably since the project has moved to using Apache Maven. Please be sure to follow the steps in this section closely.
Now that we have the prerequisites and source, you will cd to the packaging/centos63/ directory.
$ cd packaging/centos63
Generating RPMs is done using the package.sh script:
$ ./package.sh
That will run for a bit and then place the finished packages in dist/rpmbuild/RPMS/x86_64/.
You should see the following RPMs in that directory:
cloudstack-agent-4.2.0.el6.x86_64.rpm
cloudstack-awsapi-4.2.0.el6.x86_64.rpm
cloudstack-cli-4.2.0.el6.x86_64.rpm
cloudstack-common-4.2.0.el6.x86_64.rpm
cloudstack-docs-4.2.0.el6.x86_64.rpm
cloudstack-management-4.2.0.el6.x86_64.rpm
cloudstack-usage-4.2.0.el6.x86_64.rpm
3.6.1.1. Creating a yum repo
While RPM is a useful packaging format, it is most easily consumed from yum repositories over a network. The next step is to create a yum repo with the finished packages:
$ mkdir -p ~/tmp/repo
$ cp dist/rpmbuild/RPMS/x86_64/*rpm ~/tmp/repo/
$ createrepo ~/tmp/repo
The files and directories within ~/tmp/repo can now be uploaded to a web server and serve as a yum repository.
3.6.1.2. Configuring your systems to use your new yum repository
Now that your yum repository is populated with RPMs and metadata, we need to configure the machines that need to install CloudStack. Create a file named /etc/yum.repos.d/cloudstack.repo with this information:
[apache-cloudstack]
name=Apache CloudStack
baseurl=http://webserver.tld/path/to/repo
enabled=1
gpgcheck=0
Completing this step will allow you to easily install CloudStack on a number of machines across the network.
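The repo file can also be generated by a script when you are configuring many machines. This is a minimal sketch; the helper name is illustrative, and the baseurl is whatever URL your web server actually serves the repo from.

```shell
# Illustrative helper: write a CloudStack yum repo file at the given path,
# pointing at the given baseurl.
write_cloudstack_repo() {
  cat > "$1" <<EOF
[apache-cloudstack]
name=Apache CloudStack
baseurl=$2
enabled=1
gpgcheck=0
EOF
}
```

For example, `write_cloudstack_repo /etc/yum.repos.d/cloudstack.repo http://webserver.tld/path/to/repo` (run as root).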
If you need support for the VMware, NetApp, F5, NetScaler, SRX, or any other non-Open Source Software (nonoss) plugins, you'll need to download a few components on your own and follow a slightly different procedure to build from source.
Some of the plugins supported by CloudStack cannot be distributed with CloudStack for licensing reasons. In some cases, some of the required libraries/JARs are under a proprietary license. In other cases, the required libraries may be under a license that's not compatible with Apache's licensing guidelines for third-party products.
To build the Non-OSS plugins, you'll need to have the requisite JARs installed under the deps directory.
Because these modules require dependencies that can't be distributed with CloudStack, you'll need to download them yourself. Links to the most recent dependencies are listed on the How to build on master branch page on the wiki.
You may also need to download vhd-util, which was removed due to licensing issues. You'll copy vhd-util to the scripts/vm/hypervisor/xenserver/ directory.
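Before kicking off a long build, it can help to confirm that the relocated vhd-util is actually in place. This is a hedged sketch, not part of the build system; the helper name is illustrative, and it takes the CloudStack source root as its argument.

```shell
# Illustrative check: confirm vhd-util has been copied into the source tree
# at the path the nonoss build expects. $1 is the CloudStack source root.
check_vhd_util() {
  if [ -f "$1/scripts/vm/hypervisor/xenserver/vhd-util" ]; then
    echo "vhd-util present"
  else
    echo "vhd-util missing"
  fi
}
```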
Once you have all the dependencies copied over, you'll be able to build CloudStack with the nonoss option:
$ mvn clean
$ mvn install -Dnonoss
Chapter 13. Network Setup
Achieving the correct networking setup is crucial to a successful CloudStack installation. This section contains information to help you make decisions and follow the right procedures to get your network set up correctly.
13.1. Basic and Advanced Networking
CloudStack provides two styles of networking:
Basic. For AWS-style networking. Provides a single network where guest isolation can be provided through layer-3 means such as security groups (IP address source filtering).
Advanced. For more sophisticated network topologies. This network model provides the most flexibility in defining guest networks, but requires more configuration steps than basic networking.
Each zone has either basic or advanced networking. Once the choice of networking model for a zone has been made and configured in CloudStack, it cannot be changed. A zone is either basic or advanced for its entire lifetime.
The following table compares the networking features in the two networking models.
The two types of networking may be in use in the same cloud. However, a given zone must use either Basic Networking or Advanced Networking.
Different types of network traffic can be segmented on the same physical network. Guest traffic can also be segmented by account. To isolate traffic, you can use separate VLANs. If you are using separate VLANs on a single physical network, make sure the VLAN tags are in separate numerical ranges.
13.2. VLAN Allocation Example
VLANs are required for public and guest traffic. The following is an example of a VLAN allocation scheme:
13.3. Example Hardware Configuration
This section contains an example configuration of specific switch models for zone-level layer-3 switching. It assumes VLAN management protocols, such as VTP or GVRP, have been disabled. The example scripts must be changed appropriately if you choose to use VTP or GVRP.
The following steps show how a Dell 62xx is configured for zone-level layer-3 switching. These steps assume VLAN 201 is used to route untagged private IPs for pod 1, and pod 1’s layer-2 switch is connected to Ethernet port 1/g1.
The Dell 62xx Series switch supports up to 1024 VLANs.
Configure all the VLANs in the database.
vlan database
vlan 200-999
exit
Configure Ethernet port 1/g1.
interface ethernet 1/g1
switchport mode general
switchport general pvid 201
switchport general allowed vlan add 201 untagged
switchport general allowed vlan add 300-999 tagged
exit
The statements configure Ethernet port 1/g1 as follows:
VLAN 201 is the native untagged VLAN for port 1/g1.
All VLANs (300-999) are passed to the pod-level layer-2 switch.
The following steps show how a Cisco 3750 is configured for zone-level layer-3 switching. These steps assume VLAN 201 is used to route untagged private IPs for pod 1, and pod 1’s layer-2 switch is connected to GigabitEthernet1/0/1.
Setting VTP mode to transparent allows us to utilize VLAN IDs above 1000. Since we only use VLANs up to 999, vtp transparent mode is not strictly required.
vtp mode transparent
vlan 200-999
exit
Configure GigabitEthernet1/0/1.
interface GigabitEthernet1/0/1
switchport trunk encapsulation dot1q
switchport mode trunk
switchport trunk native vlan 201
exit
The statements configure GigabitEthernet1/0/1 as follows:
VLAN 201 is the native untagged VLAN for port GigabitEthernet1/0/1.
Cisco passes all VLANs by default. As a result, all VLANs (300-999) are passed to all the pod-level layer-2 switches.
The layer-2 switch is the access switching layer inside the pod.
It should trunk all VLANs into every computing host.
It should switch traffic for the management network containing computing and storage hosts. The layer-3 switch will serve as the gateway for the management network.
This section contains example configurations for specific switch models for pod-level layer-2 switching. It assumes VLAN management protocols such as VTP or GVRP have been disabled. The scripts must be changed appropriately if you choose to use VTP or GVRP.
The following steps show how a Dell 62xx is configured for pod-level layer-2 switching.
Configure all the VLANs in the database.
vlan database
vlan 300-999
exit
VLAN 201 is used to route untagged private IP addresses for pod 1, and pod 1 is connected to this layer-2 switch.
interface range ethernet all
switchport mode general
switchport general allowed vlan add 300-999 tagged
exit
The statements configure all Ethernet ports to function as follows:
The following steps show how a Cisco 3750 is configured for pod-level layer-2 switching.
Setting VTP mode to transparent allows us to utilize VLAN IDs above 1000. Since we only use VLANs up to 999, vtp transparent mode is not strictly required.
vtp mode transparent
vlan 300-999
exit
Configure all ports to dot1q and set 201 as the native VLAN.
interface range GigabitEthernet 1/0/1-24
switchport trunk encapsulation dot1q
switchport mode trunk
switchport trunk native vlan 201
exit
By default, Cisco passes all VLANs. Cisco switches complain if the native VLAN IDs are different when two ports are connected together. That's why you must specify VLAN 201 as the native VLAN on the layer-2 switch.
13.5.1. Generic Firewall Provisions
The hardware firewall is required to serve two purposes:
Protect the Management Servers. NAT and port forwarding should be configured to direct traffic from the public Internet to the Management Servers.
Route management network traffic between multiple zones. Site-to-site VPN should be configured between multiple zones.
To achieve the above purposes you must set up fixed configurations for the firewall. Firewall rules and policies need not change as users are provisioned into the cloud. Any brand of hardware firewall that supports NAT and site-to-site VPN can be used.
13.5.2. External Guest Firewall Integration for Juniper SRX (Optional)
Available only for guests using advanced networking.
CloudStack provides for direct management of the Juniper SRX series of firewalls. This enables CloudStack to establish static NAT mappings from public IPs to guest VMs, and to use the Juniper device in place of the virtual router for firewall services. You can have one or more Juniper SRX per zone. This feature is optional. If Juniper integration is not provisioned, CloudStack will use the virtual router for these services.
The Juniper SRX can optionally be used in conjunction with an external load balancer. External Network elements can be deployed in a side-by-side or inline configuration.
CloudStack requires the Juniper to be configured as follows:
Supported SRX software version is 10.3 or higher.
Install your SRX appliance according to the vendor's instructions.
Connect one interface to the management network and one interface to the public network. Alternatively, you can connect the same interface to both networks and use a VLAN for the public network.
Make sure "vlan-tagging" is enabled on the private interface.
Record the public and private interface names. If you used a VLAN for the public interface, add a ".[VLAN TAG]" after the interface name. For example, if you are using ge-0/0/3 for your public interface and VLAN tag 301, your public interface name would be "ge-0/0/3.301". Your private interface name should always be untagged because the CloudStack software automatically creates tagged logical interfaces.
Create a public security zone and a private security zone. By default, these will already exist and will be called "untrust" and "trust". Add the public interface to the public zone and the private interface to the private zone. Note down the security zone names.
Make sure there is a security policy from the private zone to the public zone that allows all traffic.
Note the username and password of the account you want the CloudStack software to log in to when it is programming rules.
Make sure the "ssh" and "xnm-clear-text" system services are enabled.
If traffic metering is desired:
a. Create an incoming firewall filter and an outgoing firewall filter. These filters should have the same names as your public security zone and private security zone, respectively. The filters should be set to be "interface-specific". For example, here is the configuration where the public zone is "untrust" and the private zone is "trust":
root@cloud-srx# show firewall
filter trust {
interface-specific;
}
filter untrust {
interface-specific;
}
Add the firewall filters to your public interface. For example, a sample configuration output (for public interface ge-0/0/3.0, public security zone untrust, and private security zone trust) is:
ge-0/0/3 {
unit 0 {
family inet {
filter {
input untrust;
output trust;
}
address 172.25.0.252/16;
}
}
}
Make sure all VLANs are brought to the private interface of the SRX.
After the CloudStack Management Server is installed, log in to the CloudStack UI as administrator.
In the left navigation bar, click Infrastructure.
In Zones, click View More.
Choose the zone you want to work with.
Click the Network tab.
In the Network Service Providers node of the diagram, click Configure. (You might have to scroll down to see this.)
Click SRX.
Click the Add New SRX button (+) and provide the following:
IP Address: The IP address of the SRX.
Username: The user name of the account on the SRX that CloudStack should use.
Password: The password of the account.
Public Interface: The name of the public interface on the SRX. For example, ge-0/0/2. A ".x" at the end of the interface indicates the VLAN that is in use.
Private Interface: The name of the private interface on the SRX. For example, ge-0/0/1.
Usage Interface: (Optional) Typically, the public interface is used to meter traffic. If you want to use a different interface, specify its name here.
Number of Retries: The number of times to attempt a command on the SRX before failing. The default value is 2.
Timeout (seconds): The time to wait for a command on the SRX before considering it failed. Default is 300 seconds.
Public Network: The name of the public network on the SRX. For example, untrust.
Private Network: The name of the private network on the SRX. For example, trust.
Capacity: The number of networks the device can handle.
Dedicated: When marked as dedicated, this device will be dedicated to a single account. When Dedicated is checked, the value in the Capacity field has no significance; implicitly, its value is 1.
Click OK.
Click Global Settings. Set the parameter external.network.stats.interval to indicate how often you want CloudStack to fetch network usage statistics from the Juniper SRX. If you are not using the SRX to gather network usage statistics, set to 0.
13.5.3. External Guest Firewall Integration for Cisco VNMC (Optional)
Cisco Virtual Network Management Center (VNMC) provides centralized multi-device and policy management for Cisco Network Virtual Services. You can integrate Cisco VNMC with CloudStack to leverage the firewall and NAT service offered by ASA 1000v Cloud Firewall. Use it in a Cisco Nexus 1000v dvSwitch-enabled cluster in CloudStack. In such a deployment, you will be able to:
Configure Cisco ASA 1000v firewalls. You can configure one per guest network.
Use Cisco ASA 1000v firewalls to create and apply security profiles that contain ACL policy sets for both ingress and egress traffic.
Use Cisco ASA 1000v firewalls to create and apply Source NAT, Port Forwarding, and Static NAT policy sets.
CloudStack supports Cisco VNMC on Cisco Nexus 1000v dvSwitch-enabled VMware hypervisors.
13.5.3.1. Using Cisco ASA 1000v Firewall, Cisco Nexus 1000v dvSwitch, and Cisco VNMC in a Deployment
Cisco ASA 1000v firewall is supported only in Isolated Guest Networks.
Cisco ASA 1000v firewall is not supported on VPC.
Cisco ASA 1000v firewall is not supported for load balancing.
When a guest network is created with Cisco VNMC firewall provider, an additional public IP is acquired along with the Source NAT IP. The Source NAT IP is used for the rules, whereas the additional IP is used for the ASA outside interface. Ensure that this additional public IP is not released. You can identify this IP as soon as the network is in the Implemented state and before acquiring any further public IPs. The additional IP is the one that is not marked as Source NAT. You can find the IP used for the ASA outside interface by looking at the Cisco VNMC used in your guest network.
Use the public IP address range from a single subnet. You cannot add IP addresses from different subnets.
Only one ASA instance per VLAN is allowed because multiple VLANS cannot be trunked to ASA ports. Therefore, you can use only one ASA instance in a guest network.
Only one Cisco VNMC per zone is allowed.
Supported only in Inline mode deployment with load balancer.
The ASA firewall rule is applicable to all the public IPs in the guest network. Unlike the firewall rules created on virtual router, a rule created on the ASA device is not tied to a specific public IP.
Use a version of Cisco Nexus 1000v dvSwitch that supports the vservice command. For example: nexus-1000v.4.2.1.SV1.5.2b.bin
Cisco VNMC requires the vservice command to be available on the Nexus switch to create a guest network in CloudStack.
13.5.3.1.2. Prerequisites
Configure Cisco Nexus 1000v dvSwitch in a vCenter environment.
Create Port profiles for both internal and external network interfaces on Cisco Nexus 1000v dvSwitch. Note down the inside port profile, which needs to be provided while adding the ASA appliance to CloudStack.
Deploy and configure Cisco VNMC.
Register Cisco Nexus 1000v dvSwitch with Cisco VNMC.
Create Inside and Outside port profiles in Cisco Nexus 1000v dvSwitch.
Deploy and configure the Cisco ASA 1000v appliance.
Typically, you create a pool of ASA 1000v appliances and register them with CloudStack.
Specify the following while setting up a Cisco ASA 1000v instance:
VNMC host IP.
Ensure that you add ASA appliance in VNMC mode.
Port profiles for the Management and HA network interfaces. These need to be pre-created on Cisco Nexus 1000v dvSwitch.
Internal and external port profiles.
The Management IP for Cisco ASA 1000v appliance. Specify the gateway such that the VNMC IP is reachable.
Administrator credentials
VNMC credentials
Register Cisco ASA 1000v with VNMC.
After the Cisco ASA 1000v instance is powered on, register it with the VNMC from the ASA console.
13.5.3.1.3. Using Cisco ASA 1000v Services
Ensure that all the prerequisites are met.
Add a VNMC instance.
Add an ASA 1000v instance.
Create a Network Offering and use Cisco VNMC as the service provider for desired services.
Create an Isolated Guest Network by using the network offering you just created.
13.5.3.2. Adding a VNMC Instance
Log in to the CloudStack UI as administrator.
In the left navigation bar, click Infrastructure.
In Zones, click View More.
Choose the zone you want to work with.
Click the Physical Network tab.
In the Network Service Providers node of the diagram, click Configure.
You might have to scroll down to see this.
Click Cisco VNMC.
Click View VNMC Devices.
Click the Add VNMC Device and provide the following:
Host: The IP address of the VNMC instance.
Username: The user name of the account on the VNMC instance that CloudStack should use.
Password: The password of the account.
Click OK.
13.5.3.3. Adding an ASA 1000v Instance
Log in to the CloudStack UI as administrator.
In the left navigation bar, click Infrastructure.
In Zones, click View More.
Choose the zone you want to work with.
Click the Physical Network tab.
In the Network Service Providers node of the diagram, click Configure.
You might have to scroll down to see this.
Click Cisco VNMC.
Click View ASA 1000v.
Click the Add CiscoASA1000v Resource and provide the following:
Host: The management IP address of the ASA 1000v instance. The IP address is used to connect to ASA 1000V.
Inside Port Profile: The Inside Port Profile configured on Cisco Nexus1000v dvSwitch.
Cluster: The VMware cluster to which you are adding the ASA 1000v instance.
Ensure that the cluster is Cisco Nexus 1000v dvSwitch enabled.
Click OK.
13.5.3.4. Creating a Network Offering Using Cisco ASA 1000v
To have Cisco ASA 1000v support for a guest network, create a network offering as follows:
Log in to the CloudStack UI as a user or admin.
From the Select Offering drop-down, choose Network Offering.
Click Add Network Offering.
In the dialog, make the following choices:
Name: Any desired name for the network offering.
Description: A short description of the offering that can be displayed to users.
Network Rate: Allowed data transfer rate in MB per second.
Traffic Type: The type of network traffic that will be carried on the network.
Guest Type: Choose whether the guest network is isolated or shared.
Persistent: Indicate whether the guest network is persistent or not. A network that you can provision without having to deploy a VM on it is termed a persistent network.
VPC: This option indicates whether the guest network is Virtual Private Cloud-enabled. A Virtual Private Cloud (VPC) is a private, isolated part of CloudStack. A VPC can have its own virtual network topology that resembles a traditional physical network. For more information on VPCs, see Section 15.27.1, “About Virtual Private Clouds”.
Specify VLAN: (Isolated guest networks only) Indicate whether a VLAN should be specified when this offering is used.
Supported Services: Use Cisco VNMC as the service provider for Firewall, Source NAT, Port Forwarding, and Static NAT to create an Isolated guest network offering.
System Offering: Choose the system service offering that you want virtual routers to use in this network.
Conserve mode: Indicate whether to use conserve mode. In this mode, network resources are allocated only when the first virtual machine starts in the network.
Click OK.
The network offering is created.
13.5.3.5. Reusing ASA 1000v Appliance in new Guest Networks
You can reuse an ASA 1000v appliance in a new guest network after the necessary cleanup. Typically, ASA 1000v is cleaned up when the logical edge firewall is cleaned up in VNMC. If this cleanup does not happen, you need to reset the appliance to its factory settings for use in new guest networks. As part of this, enable SSH on the appliance and store the SSH credentials by registering on VNMC.
Open a command line on the ASA appliance and run the following:
ASA1000V(config)# reload
You are prompted with the following message:
"System config has been modified. Save? [Y]es/[N]o:"
Enter N.
You will get the following confirmation message:
"Proceed with reload? [confirm]"
Restart the appliance.
Register the ASA 1000v appliance with the VNMC:
ASA1000V(config)# vnmc policy-agent
ASA1000V(config-vnmc-policy-agent)# registration host vnmc_ip_address
ASA1000V(config-vnmc-policy-agent)# shared-secret key
where key is the shared secret for authentication of the ASA 1000V connection to the Cisco VNMC.
13.5.4. External Guest Load Balancer Integration (Optional)
CloudStack can optionally use a Citrix NetScaler or BigIP F5 load balancer to provide load balancing services to guests. If this is not enabled, CloudStack will use the software load balancer in the virtual router.
To install and enable an external load balancer for CloudStack management:
Set up the appliance according to the vendor's directions.
Connect it to the networks carrying public traffic and management traffic (these could be the same network).
Record the IP address, username, password, public interface name, and private interface name. The interface names will be something like "1.1" or "1.2".
Make sure that the VLANs are trunked to the management network interface.
After the CloudStack Management Server is installed, log in as administrator to the CloudStack UI.
In the left navigation bar, click Infrastructure.
In Zones, click View More.
Choose the zone you want to work with.
Click the Network tab.
In the Network Service Providers node of the diagram, click Configure. (You might have to scroll down to see this.)
Click NetScaler or F5.
Click the Add button (+) and provide the following:
For NetScaler:
IP Address: The IP address of the device.
Username/Password: The authentication credentials to access the device. CloudStack uses these credentials to access the device.
Type: The type of device that is being added. It could be F5 Big Ip Load Balancer, NetScaler VPX, NetScaler MPX, or NetScaler SDX. For a comparison of the NetScaler types, see the CloudStack Administration Guide.
Public interface: Interface of device that is configured to be part of the public network.
Private interface: Interface of device that is configured to be part of the private network.
Number of Retries: Number of times to attempt a command on the device before considering the operation failed. Default is 2.
Capacity: The number of networks the device can handle.
Dedicated: When marked as dedicated, this device will be dedicated to a single account. When Dedicated is checked, the value in the Capacity field has no significance; implicitly, its value is 1.
Click OK.
The installation and provisioning of the external load balancer is finished. You can proceed to add VMs and NAT or load balancing rules.
13.6. Management Server Load Balancing
CloudStack can use a load balancer to provide a virtual IP for multiple Management Servers. The administrator is responsible for creating the load balancer rules for the Management Servers. The application requires persistence or stickiness across multiple sessions. The following chart lists the ports that should be load balanced and whether or not persistence is required.
Even if persistence is not required, enabling it is permitted.
In addition to the above settings, the administrator is responsible for setting the 'host' global config value from the management server IP to the load balancer virtual IP address. If the 'host' value is not set to the VIP for port 8250 and one of your management servers crashes, the UI is still available but the system VMs will not be able to contact the management server.
13.7. Topology Requirements
13.7.1. Security Requirements
The public Internet must not be able to access port 8096 or port 8250 on the Management Server.
13.7.2. Runtime Internal Communications Requirements
The Management Servers communicate with each other to coordinate tasks. This communication uses TCP on ports 8250 and 9090.
The console proxy VMs connect to all hosts in the zone over the management traffic network. Therefore the management traffic network of any given pod in the zone must have connectivity to the management traffic network of all other pods in the zone.
The secondary storage VMs and console proxy VMs connect to the Management Server on port 8250. If you are using multiple Management Servers, the load balanced IP address of the Management Servers on port 8250 must be reachable.
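A quick way to confirm that port 8250 is reachable from a pod host is bash's /dev/tcp redirection. This is a hedged sketch: the helper name is illustrative, the target hostname is a placeholder for your Management Server or load balancer VIP, and non-bash shells will simply report "closed" because /dev/tcp is a bash feature.

```shell
# Illustrative check: report whether a TCP port on a host is reachable.
# Relies on bash's /dev/tcp pseudo-device; the subshell's fd is closed
# automatically when it exits.
check_port() {
  (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null && echo open || echo closed
}
```

For example, `check_port management.example.com 8250` prints "open" when a Management Server (or the load balanced VIP) is listening.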
13.7.3. Storage Network Topology Requirements
The secondary storage NFS export is mounted by the secondary storage VM. Secondary storage traffic goes over the management traffic network, even if there is a separate storage network. Primary storage traffic goes over the storage network, if available. If you choose to place secondary storage NFS servers on the storage network, you must make sure there is a route from the management traffic network to the storage network.
13.7.4. External Firewall Topology Requirements
When external firewall integration is in place, the public IP VLAN must still be trunked to the Hosts. This is required to support the Secondary Storage VM and Console Proxy VM.
13.7.5. Advanced Zone Topology Requirements
With Advanced Networking, separate subnets must be used for private and public networks.
13.7.6. XenServer Topology Requirements
The Management Servers communicate with XenServer hosts on ports 22 (ssh), 80 (HTTP), and 443 (HTTPS).
13.7.7. VMware Topology Requirements
The Management Server and secondary storage VMs must be able to access vCenter and all ESXi hosts in the zone. To allow the necessary access through the firewall, keep port 443 open.
The Management Servers communicate with VMware vCenter servers on port 443 (HTTPS).
The Management Servers communicate with the System VMs on port 3922 (ssh) on the management traffic network.
13.7.8. KVM Topology Requirements
The Management Servers communicate with KVM hosts on port 22 (ssh).
13.7.9. LXC Topology Requirements
The Management Servers communicate with LXC hosts on port 22 (ssh).
13.8. Guest Network Usage Integration for Traffic Sentinel
To collect usage data for a guest network, CloudStack needs to pull the data from an external network statistics collector installed on the network. Metering statistics for guest networks are available through CloudStack’s integration with inMon Traffic Sentinel.
Traffic Sentinel is a network traffic usage data collection package. CloudStack can feed statistics from Traffic Sentinel into its own usage records, providing a basis for billing users of cloud infrastructure. Traffic Sentinel uses the traffic monitoring protocol sFlow. Routers and switches generate sFlow records and provide them for collection by Traffic Sentinel; CloudStack then queries the Traffic Sentinel database to obtain this information.
To construct the query, CloudStack determines what guest IPs were in use during the current query interval. This includes both newly assigned IPs and IPs that were assigned in a previous time period and continued to be in use. CloudStack queries Traffic Sentinel for network statistics that apply to these IPs during the time period they remained allocated in CloudStack. The returned data is correlated with the customer account that owned each IP and the timestamps when IPs were assigned and released in order to create billable metering records in CloudStack. When the Usage Server runs, it collects this data.
To set up the integration between CloudStack and Traffic Sentinel:
On your network infrastructure, install Traffic Sentinel and configure it to gather traffic data. For installation and configuration steps, see the inMon Traffic Sentinel documentation.
In the Traffic Sentinel UI, configure Traffic Sentinel to accept script querying from guest users. CloudStack will be the guest user performing the remote queries to gather network usage for one or more IP addresses.
Click File > Users > Access Control > Reports Query, then select Guest from the drop-down list.
On CloudStack, add the Traffic Sentinel host by calling the CloudStack API command addTrafficMonitor. Pass in the URL of the Traffic Sentinel as protocol + host + port (optional); for example, http://10.147.28.100:8080. For the addTrafficMonitor command syntax, see the API Reference.
Log in to the CloudStack UI as administrator.
Select Configuration from the Global Settings page, and set the following:
direct.network.stats.interval: How often you want CloudStack to query Traffic Sentinel.
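The addTrafficMonitor step above can also be scripted. The sketch below builds the call in the same unauthenticated integration-port URL style used elsewhere in this guide; the zone UUID is a placeholder, and the Traffic Sentinel URL reuses the example address from the text.

```shell
#!/bin/sh
# Placeholder values; substitute your zone UUID and Traffic Sentinel URL.
API="http://localhost:8096/client/api"
ZONE_ID="your-zone-uuid"
TS_URL="http%3A%2F%2F10.147.28.100%3A8080"   # URL-encoded http://10.147.28.100:8080

REQUEST="${API}?command=addTrafficMonitor&zoneid=${ZONE_ID}&url=${TS_URL}"
echo "$REQUEST"
# To run against a live Management Server: curl -s "$REQUEST"
```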
13.9. Setting Zone VLAN and Running VM Maximums
In the external networking case, every VM in a zone must have a unique guest IP address. Two variables determine how to configure CloudStack to support this: how many Zone VLANs you expect to have, and how many VMs you expect to be running in the zone at any one time.
Use the following table to determine how to configure CloudStack for your deployment.
Based on your deployment's needs, choose the appropriate value of guest.vlan.bits. Set it as described in the Edit the Global Configuration Settings (Optional) section, and restart the Management Server.
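Alternatively, guest.vlan.bits can be set through the updateConfiguration API, in the integration-port style used elsewhere in this guide. The value 12 below is only an example; pick the value the table above indicates for your deployment.

```shell
#!/bin/sh
# Sketch: set the guest.vlan.bits global setting via the API.
API="http://localhost:8096/client/api"
REQUEST="${API}?command=updateConfiguration&name=guest.vlan.bits&value=12"
echo "$REQUEST"
# curl -s "$REQUEST"   # then restart the Management Server
```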
Chapter 15. Managing Networks and Traffic
In a CloudStack cloud, guest VMs can communicate with each other using shared infrastructure with the security and user perception that the guests have a private LAN. The CloudStack virtual router is the main component providing networking features for guest traffic.
A network can carry guest traffic only between VMs within one zone. Virtual machines in different zones cannot communicate with each other using their IP addresses; they must communicate with each other by routing through a public IP address.
See a typical guest traffic setup given below:
Typically, the Management Server automatically creates a virtual router for each network. A virtual router is a special virtual machine that runs on the hosts. Each virtual router in an isolated network has three network interfaces. If multiple public VLANs are used, the router will have multiple public interfaces. Its eth0 interface serves as the gateway for the guest traffic and has the IP address 10.1.1.1. Its eth1 interface is used by the system to configure the virtual router. Its eth2 interface is assigned a public IP address for public traffic.
The virtual router provides DHCP and will automatically assign an IP address for each guest VM within the IP range assigned for the network. The user can manually reconfigure guest VMs to assume different IP addresses.
Source NAT is automatically configured in the virtual router to forward outbound traffic for all guest VMs.
15.2. Networking in a Pod
The figure below illustrates network setup within a single pod. The hosts are connected to a pod-level switch. At a minimum, the hosts should have one physical uplink to each switch. Bonded NICs are supported as well. The pod-level switch is a pair of redundant gigabit switches with 10 G uplinks.
Servers are connected as follows:
Storage devices are connected to only the network that carries management traffic.
Hosts are connected to networks for both management traffic and public traffic.
Hosts are also connected to one or more networks carrying guest traffic.
We recommend the use of multiple physical Ethernet cards to implement each network interface as well as redundant switch fabric in order to maximize throughput and improve reliability.
15.3. Networking in a Zone
The following figure illustrates the network setup within a single zone.
A firewall for management traffic operates in NAT mode. The network is typically assigned IP addresses in the 192.168.0.0/16 Class B private address space. Each pod is assigned IP addresses in the 192.168.*.0/24 Class C private address space.
Each zone has its own set of public IP addresses. Public IP addresses from different zones do not overlap.
15.4. Basic Zone Physical Network Configuration
In a basic network, configuring the physical network is fairly straightforward. You only need to configure one guest network to carry traffic that is generated by guest VMs. When you first add a zone to CloudStack, you set up the guest network through the Add Zone screens.
15.5. Advanced Zone Physical Network Configuration
Within a zone that uses advanced networking, you need to tell the Management Server how the physical network is set up to carry different kinds of traffic in isolation.
15.5.3. Configuring a Shared Guest Network
Log in to the CloudStack UI as administrator.
In the left navigation, choose Infrastructure.
On Zones, click View More.
Click the zone to which you want to add a guest network.
Click the Physical Network tab.
Click the physical network you want to work with.
On the Guest node of the diagram, click Configure.
Click the Network tab.
Click Add guest network.
The Add guest network window is displayed.
Specify the following:
Name: The name of the network. This will be visible to the user.
Description: The short description of the network that can be displayed to users.
VLAN ID: The unique ID of the VLAN.
Isolated VLAN ID: The unique ID of the Secondary Isolated VLAN.
Scope: The available scopes are Domain, Account, Project, and All.
Domain: Selecting Domain limits the scope of this guest network to the domain you specify. The network will not be available for other domains. If you select Subdomain Access, the guest network is available to all the subdomains within the selected domain.
Account: The account for which the guest network is being created. You must specify the domain the account belongs to.
Project: The project for which the guest network is being created. You must specify the domain the project belongs to.
All: The guest network is available to all domains, accounts, and projects within the selected zone.
Network Offering: If the administrator has configured multiple network offerings, select the one you want to use for this network.
Gateway: The gateway that the guests should use.
Netmask: The netmask in use on the subnet the guests will use.
IP Range: A range of IP addresses that are accessible from the Internet and are assigned to the guest VMs.
If one NIC is used, these IPs should be in the same CIDR in the case of IPv6.
IPv6 CIDR: The network prefix that defines the guest network subnet. This is the CIDR that describes the IPv6 addresses in use in the guest networks in this zone. To allot IP addresses from within a particular address block, enter a CIDR.
Network Domain: A custom DNS suffix at the level of a network. If you want to assign a special domain name to the guest VM network, specify a DNS suffix.
Click OK to confirm.
15.6. Using Multiple Guest Networks
In zones that use advanced networking, additional networks for guest traffic may be added at any time after the initial installation. You can also customize the domain name associated with the network by specifying a DNS suffix for each network.
A VM's networks are defined at VM creation time. A VM cannot add or remove networks after it has been created, although the user can go into the guest and remove the IP address from the NIC on a particular network.
Each VM has just one default network. The virtual router's DHCP reply will set the guest's default gateway as that for the default network. Multiple non-default networks may be added to a guest in addition to the single, required default network. The administrator can control which networks are available as the default network.
Additional networks can either be available to all accounts or be assigned to a specific account. Networks that are available to all accounts are zone-wide. Any user with access to the zone can create a VM with access to that network. These zone-wide networks provide little or no isolation between guests. Networks that are assigned to a specific account provide strong isolation.
15.6.1. Adding an Additional Guest Network
Log in to the CloudStack UI as an administrator or end user.
In the left navigation, choose Network.
Click Add guest network. Provide the following information:
Name: The name of the network. This will be user-visible.
Display Text: The description of the network. This will be user-visible.
Zone. The name of the zone this network applies to. Each zone is a broadcast domain, and therefore each zone has a different IP range for the guest network. The administrator must configure the IP range for each zone.
Network offering: If the administrator has configured multiple network offerings, select the one you want to use for this network.
Guest Gateway: The gateway that the guests should use.
Guest Netmask: The netmask in use on the subnet the guests will use.
Click Create.
15.6.2. Reconfiguring Networks in VMs
CloudStack provides you the ability to move VMs between networks and reconfigure a VM's network. You can remove a VM from a network and add it to a new network. You can also change the default network of a virtual machine. With this functionality, hybrid or traditional server loads can be accommodated with ease.
This feature is supported on XenServer, VMware, and KVM hypervisors.
For adding or removing networks to work on the VMware hypervisor, ensure that vm-tools is running on the guest VMs.
15.6.2.2. Adding a Network
Log in to the CloudStack UI as an administrator or end user.
In the left navigation, click Instances.
Choose the VM that you want to work with.
Click the NICs tab.
Click Add network to VM.
The Add network to VM dialog is displayed.
In the drop-down list, select the network that you would like to add this VM to.
A new NIC is added for this network. You can view the following details in the NICs page:
ID
Network Name
Type
IP Address
Gateway
Netmask
Is default
CIDR (for IPv6)
15.6.2.3. Removing a Network
Log in to the CloudStack UI as an administrator or end user.
In the left navigation, click Instances.
Choose the VM that you want to work with.
Click the NICs tab.
Locate the NIC you want to remove.
Click the Remove NIC button.
Click Yes to confirm.
15.6.2.4. Selecting the Default Network
Log in to the CloudStack UI as an administrator or end user.
In the left navigation, click Instances.
Choose the VM that you want to work with.
Click the NICs tab.
Locate the NIC you want to work with.
Click the Set default NIC button.
Click Yes to confirm.
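The three NIC operations in sections 15.6.2.2 through 15.6.2.4 correspond to the addNicToVirtualMachine, removeNicFromVirtualMachine, and updateDefaultNicForVirtualMachine API commands. A sketch in the integration-port URL style used elsewhere in this guide, with placeholder UUIDs:

```shell
#!/bin/sh
# Placeholder UUIDs; substitute values from listNics / listNetworks.
API="http://localhost:8096/client/api"
VM_ID="vm-uuid"; NIC_ID="nic-uuid"; NET_ID="network-uuid"

ADD_REQ="${API}?command=addNicToVirtualMachine&virtualmachineid=${VM_ID}&networkid=${NET_ID}"
DEL_REQ="${API}?command=removeNicFromVirtualMachine&virtualmachineid=${VM_ID}&nicid=${NIC_ID}"
DEF_REQ="${API}?command=updateDefaultNicForVirtualMachine&virtualmachineid=${VM_ID}&nicid=${NIC_ID}"
echo "$ADD_REQ"; echo "$DEL_REQ"; echo "$DEF_REQ"
# Pipe any request to curl against a live Management Server.
```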
15.6.3. Changing the Network Offering on a Guest Network
A user or administrator can change the network offering that is associated with an existing guest network.
Log in to the CloudStack UI as an administrator or end user.
If you are changing from a network offering that uses the CloudStack virtual router to one that uses external devices as network service providers, you must first stop all the VMs on the network.
In the left navigation, choose Network.
Click the name of the network you want to modify.
In the Details tab, click Edit.
In Network Offering, choose the new network offering, then click Apply.
A prompt is displayed asking whether you want to keep the existing CIDR. This is to let you know that if you change the network offering, the CIDR will be affected.
If you are changing between the virtual router and an external network device as the network service provider, choose Yes to acknowledge the CIDR change and continue.
Wait for the update to complete. Don’t try to restart VMs until the network change is complete.
If you stopped any VMs, restart them.
15.7. IP Reservation in Isolated Guest Networks
In isolated guest networks, a part of the guest IP address space can be reserved for non-CloudStack VMs or physical servers. To do so, you configure a range of Reserved IP addresses by specifying the CIDR when a guest network is in Implemented state. If your customers wish to have non-CloudStack controlled VMs or physical servers on the same network, they can share a part of the IP address space that is primarily provided to the guest network.
In an Advanced zone, an IP address range or a CIDR is assigned to a network when the network is defined. The CloudStack virtual router acts as the DHCP server and uses CIDR for assigning IP addresses to the guest VMs. If you decide to reserve CIDR for non-CloudStack purposes, you can specify a part of the IP address range or the CIDR that should only be allocated by the DHCP service of the virtual router to the guest VMs created in CloudStack. The remaining IPs in that network are called Reserved IP Range. When IP reservation is configured, the administrator can add additional VMs or physical servers that are not part of CloudStack to the same network and assign them the Reserved IP addresses. CloudStack guest VMs cannot acquire IPs from the Reserved IP Range.
15.7.1. IP Reservation Considerations
Consider the following before you reserve an IP range for non-CloudStack machines:
IP Reservation is supported only in Isolated networks.
IP Reservation can be applied only when the network is in Implemented state.
No IP Reservation is done by default.
Guest VM CIDR you specify must be a subset of the network CIDR.
Specify a valid Guest VM CIDR. IP Reservation is applied only if no active IPs exist outside the Guest VM CIDR.
You cannot apply IP Reservation if any VM is allotted an IP address that is outside the Guest VM CIDR.
To reset an existing IP Reservation, apply IP reservation by specifying the value of network CIDR in the CIDR field.
For example, the following table describes three scenarios of guest network creation:
IP Reservation is not supported if active IPs are found outside the Guest VM CIDR.
If you upgrade to a network offering that causes a change in CIDR (such as upgrading from an offering with no external devices to one with external devices), any IP Reservation becomes void. Reconfigure IP Reservation in the re-implemented network.
Apply IP Reservation to the guest network as soon as the network state changes to Implemented. If you apply the reservation soon after the first guest VM is deployed, fewer conflicts occur while applying the reservation.
15.7.4. Reserving an IP Range
Log in to the CloudStack UI as an administrator or end user.
In the left navigation, choose Network.
Click the name of the network you want to modify.
In the Details tab, click Edit.
The CIDR field becomes editable.
In CIDR, specify the Guest VM CIDR.
Click Apply.
Wait for the update to complete. The Network CIDR and the Reserved IP Range are displayed on the Details page.
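The same reservation can be applied with the updateNetwork API by passing the Guest VM CIDR in the guestvmcidr parameter. The network UUID and CIDR below are placeholders.

```shell
#!/bin/sh
# Sketch: apply IP Reservation via the API (placeholder values).
NET_ID="network-uuid"
GUEST_CIDR="10.1.1.0%2F26"   # URL-encoded 10.1.1.0/26, an example value
REQUEST="http://localhost:8096/client/api?command=updateNetwork&id=${NET_ID}&guestvmcidr=${GUEST_CIDR}"
echo "$REQUEST"
# curl -s "$REQUEST"
```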
15.8. Reserving Public IP Addresses and VLANs for Accounts
CloudStack provides you the ability to reserve a set of public IP addresses and VLANs exclusively for an account. During zone creation, you can continue defining a set of VLANs and multiple public IP ranges. This feature extends the functionality to enable you to dedicate a fixed set of VLANs and guest IP addresses for a tenant.
Note that if an account has consumed all the VLANs and IPs dedicated to it, the account can acquire two more resources from the system. CloudStack provides the root admin with two configuration parameters to modify this default behavior: use.system.public.ips and use.system.guest.vlans. These global parameters enable the root admin to disallow an account from acquiring public IPs and guest VLANs from the system if the account has dedicated resources and these dedicated resources have all been consumed. Both parameters are configurable at the account level.
This feature provides you the following capabilities:
Reserve a VLAN range and public IP address range from an Advanced zone and assign it to an account
Disassociate a VLAN and public IP address range from an account
View the number of public IP addresses allocated to an account
Check whether the required range is available and conforms to account limits.
The maximum IPs per account limit cannot be exceeded.
15.8.1. Dedicating IP Address Ranges to an Account
Log in to the CloudStack UI as administrator.
In the left navigation bar, click Infrastructure.
In Zones, click View All.
Choose the zone you want to work with.
Click the Physical Network tab.
In the Public node of the diagram, click Configure.
Click the IP Ranges tab.
You can either assign an existing IP range to an account, or create a new IP range and assign to an account.
To assign an existing IP range to an account, perform the following:
Locate the IP range you want to work with.
Click the Add Account button.
The Add Account dialog is displayed.
Specify the following:
To create a new IP range and assign an account, perform the following:
Specify the following:
Click Add.
15.8.2. Dedicating VLAN Ranges to an Account
After the CloudStack Management Server is installed, log in to the CloudStack UI as administrator.
In the left navigation bar, click Infrastructure.
In Zones, click View All.
Choose the zone you want to work with.
Click the Physical Network tab.
In the Guest node of the diagram, click Configure.
Select the Dedicated VLAN Ranges tab.
Click Dedicate VLAN Range.
The Dedicate VLAN Range dialog is displayed.
Specify the following:
VLAN Range: The VLAN range that you want to assign to an account.
Account: The account to which you want to assign the selected VLAN range.
Domain: The domain associated with the account.
15.9. Configuring Multiple IP Addresses on a Single NIC
CloudStack provides you the ability to associate multiple private IP addresses per guest VM NIC. In addition to the primary IP, you can assign additional IPs to the guest VM NIC. This feature is supported on all the network configurations—Basic, Advanced, and VPC. Security Groups, Static NAT and Port forwarding services are supported on these additional IPs.
As always, you can specify an IP from the guest subnet; if not specified, an IP is automatically picked from the guest VM subnet. You can view the IPs associated with each guest VM NIC in the UI. You can apply NAT on these additional guest IPs by using the network configuration option in the CloudStack UI. You must specify the NIC to which the IP should be associated.
This feature is supported on XenServer, KVM, and VMware hypervisors. Note that Basic zone security groups are not supported on VMware.
Some of the use cases are described below:
Network devices, such as firewalls and load balancers, generally work best when they have access to multiple IP addresses on the network interface.
Moving private IP addresses between interfaces or instances. Applications that are bound to specific IP addresses can be moved between instances.
Hosting multiple SSL Websites on a single instance. You can install multiple SSL certificates on a single instance, each associated with a distinct IP address.
To prevent IP conflict, configure different subnets when multiple networks are connected to the same VM.
15.9.3. Assigning Additional IPs to a VM
Log in to the CloudStack UI.
In the left navigation bar, click Instances.
Click the name of the instance you want to work with.
In the Details tab, click NICs.
Click View Secondary IPs.
Click Acquire New Secondary IP, and click Yes in the confirmation dialog.
You need to configure the IP on the guest VM NIC manually. CloudStack will not automatically configure the acquired IP address on the VM. Ensure that the IP address configuration persists across VM reboots.
Within a few moments, the new IP address should appear with the state Allocated. You can now use the IP address in Port Forwarding or StaticNAT rules.
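The UI steps above map to the addIpToNic API command. A sketch with a placeholder NIC UUID; the ipaddress parameter is optional, and if omitted an IP is picked automatically from the guest subnet.

```shell
#!/bin/sh
# Sketch: acquire a secondary IP for a NIC via the API (placeholder values).
NIC_ID="nic-uuid"
REQUEST="http://localhost:8096/client/api?command=addIpToNic&nicid=${NIC_ID}&ipaddress=10.1.1.53"
echo "$REQUEST"
# curl -s "$REQUEST"   # then configure 10.1.1.53 inside the guest OS manually
```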
15.9.4. Port Forwarding and StaticNAT Services Changes
Because multiple IPs can be associated per NIC, you are allowed to select a desired IP for the Port Forwarding and StaticNAT services. The default is the primary IP. To enable this functionality, an extra optional parameter 'vmguestip' is added to the Port Forwarding and StaticNAT APIs (enableStaticNat, createIpForwardingRule) to indicate the IP address on which NAT needs to be configured. If vmguestip is passed, NAT is configured on the specified private IP of the VM. If not passed, NAT is configured on the primary IP of the VM.
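For example, to configure static NAT on a secondary IP rather than the primary IP, pass vmguestip to enableStaticNat. All IDs and the guest IP below are placeholders.

```shell
#!/bin/sh
# Sketch: static NAT onto a secondary guest IP (placeholder values).
IP_ID="public-ip-uuid"; VM_ID="vm-uuid"
REQUEST="http://localhost:8096/client/api?command=enableStaticNat&ipaddressid=${IP_ID}&virtualmachineid=${VM_ID}&vmguestip=10.1.1.53"
echo "$REQUEST"
# Without vmguestip, NAT would be configured on the VM's primary IP.
```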
15.10. About Multiple IP Ranges
The feature can only be implemented on IPv4 addresses.
CloudStack provides you with the flexibility to add guest IP ranges from different subnets in Basic zones and security groups-enabled Advanced zones. For security groups-enabled Advanced zones, it implies multiple subnets can be added to the same VLAN. With the addition of this feature, you will be able to add IP address ranges from the same subnet or from a different one when IP addresses are exhausted. This in turn allows you to employ a larger number of subnets and thus reduce the address management overhead. To support this feature, the createVlanIpRange API is extended to add IP ranges from a different subnet as well.
Ensure that you manually configure the gateway of the new subnet before adding the IP range. Note that CloudStack supports only one gateway for a subnet; overlapping subnets are not currently supported.
Use the deleteVlanIpRange API to delete IP ranges. This operation fails if an IP from the range being removed is in use. If the range being removed contains the IP address on which the DHCP server is running, CloudStack acquires a new IP from the same subnet. If no IP is available in the subnet, the remove operation fails.
This feature is supported on KVM, XenServer, and VMware hypervisors.
Elastic IP (EIP) addresses are IP addresses that are associated with an account and act as static IP addresses. The account owner has complete control over the Elastic IP addresses that belong to the account. As an account owner, you can allocate an Elastic IP to a VM of your choice from the EIP pool of your account. Later, if required, you can reassign the IP address to a different VM. This feature is extremely helpful during VM failure: instead of replacing the VM that is down, the IP address can be reassigned to a new VM in your account.
Similar to the public IP address, Elastic IP addresses are mapped to their associated private IP addresses by using StaticNAT. The EIP service is equipped with StaticNAT (1:1) service in an EIP-enabled basic zone. The default network offering, DefaultSharedNetscalerEIPandELBNetworkOffering, provides your network with EIP and ELB network services if a NetScaler device is deployed in your zone. Consider the following illustration for more details.
In the illustration, a NetScaler appliance is the default entry or exit point for the CloudStack instances, and the firewall is the default entry or exit point for the rest of the data center. NetScaler provides LB services and the staticNAT service to the guest networks. The guest traffic in the pods and the Management Server are on different subnets / VLANs. The policy-based routing in the data center core switch sends the public traffic through the NetScaler, whereas the rest of the data center traffic goes through the firewall.
The EIP work flow is as follows:
When a user VM is deployed, a public IP is automatically acquired from the pool of public IPs configured in the zone. This IP is owned by the VM's account.
Each VM will have its own private IP. When the user VM starts, Static NAT is provisioned on the NetScaler device by using the Inbound Network Address Translation (INAT) and Reverse NAT (RNAT) rules between the public IP and the private IP.
Inbound NAT (INAT) is a type of NAT supported by NetScaler, in which the destination IP address is replaced in the packets from the public network, such as the Internet, with the private IP address of a VM in the private network. Reverse NAT (RNAT) is a type of NAT supported by NetScaler, in which the source IP address is replaced in the packets generated by a VM in the private network with the public IP address.
This default public IP will be released in two cases:
When the VM is stopped. When the VM starts, it again receives a new public IP, not necessarily the same one allocated initially, from the pool of Public IPs.
The user acquires a public IP (Elastic IP). This public IP is associated with the account, but will not be mapped to any private IP. However, the user can enable Static NAT to associate this IP with the private IP of a VM in the account. The Static NAT rule for the public IP can be disabled at any time. When Static NAT is disabled, a new public IP is allocated from the pool, which is not necessarily the same one allocated initially.
For the deployments where public IPs are limited resources, you have the flexibility to choose not to allocate a public IP by default. You can use the Associate Public IP option to turn on or off the automatic public IP assignment in the EIP-enabled Basic zones. If you turn off the automatic public IP assignment while creating a network offering, only a private IP is assigned to a VM when the VM is deployed with that network offering. Later, the user can acquire an IP for the VM and enable static NAT.
For more information on the Associate Public IP option, see the Administration Guide.
The Associate Public IP feature is designed only for use with user VMs. System VMs continue to get both a public IP and a private IP by default, irrespective of the network offering configuration.
New deployments which use the default shared network offering with EIP and ELB services to create a shared network in the Basic zone will continue allocating public IPs to each user VM.
15.12.1. About Portable IP
Portable IPs in CloudStack are a region-level pool of IPs, elastic in nature, that can be transferred across geographically separated zones. As an administrator, you can provision a pool of portable public IPs at the region level and make them available for user consumption. Users can acquire portable IPs if the admin has provisioned portable IPs at the region level they are part of. These IPs can be used for any service within an Advanced zone. You can also use portable IPs for EIP services in Basic zones.
The salient features of Portable IP are as follows:
IP is statically allocated
IP need not be associated with a network
IP association is transferable across networks
IP is transferable across both Basic and Advanced zones
IP is transferable across VPC, non-VPC isolated and shared networks
Portable IP transfer is available only for static NAT.
Before transferring to another network, ensure that no network rules (Firewall, Static NAT, Port Forwarding, and so on) exist on that portable IP.
15.12.2. Configuring Portable IPs
Log in to the CloudStack UI as an administrator or end user.
In the left navigation, click Regions.
Choose the region you want to work with.
Click View Portable IP.
Click Portable IP Range.
The Add Portable IP Range window is displayed.
Specify the following:
Start IP/ End IP: A range of IP addresses that are accessible from the Internet and will be allocated to guest VMs. Enter the first and last IP addresses that define a range that CloudStack can assign to guest VMs.
Gateway: The gateway in use for the Portable IP addresses you are configuring.
Netmask: The netmask associated with the Portable IP range.
VLAN: The VLAN that will be used for public traffic.
Click OK.
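The same range can be provisioned with the createPortableIpRange API. All values below are placeholders (the addresses come from the 203.0.113.0/24 documentation range).

```shell
#!/bin/sh
# Sketch: provision a portable IP range at region level (placeholder values).
REQUEST="http://localhost:8096/client/api?command=createPortableIpRange\
&regionid=1&startip=203.0.113.10&endip=203.0.113.20\
&gateway=203.0.113.1&netmask=255.255.255.0&vlan=100"
echo "$REQUEST"
# curl -s "$REQUEST"
```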
15.12.3. Acquiring a Portable IP
Log in to the CloudStack UI as an administrator or end user.
In the left navigation, choose Network.
Click the name of the network you want to work with.
Click View IP Addresses.
Click Acquire New IP.
The Acquire New IP window is displayed.
Specify whether you want a cross-zone IP.
Click Yes in the confirmation dialog.
Within a few moments, the new IP address should appear with the state Allocated. You can now use the IP address in port forwarding or static NAT rules.
15.12.4. Transferring Portable IP
An IP can be transferred from one network to another only if Static NAT is enabled. However, when a portable IP is associated with a network, you can use it for any service in the network.
To transfer a portable IP across the networks, execute the following API:
http://localhost:8096/client/api?command=enableStaticNat&response=json&ipaddressid=a4bc37b2-4b4e-461d-9a62-b66414618e36&virtualmachineid=a242c476-ef37-441e-9c7b-b303e2a9cb4f&networkid=6e7cd8d1-d1ba-4c35-bdaf-333354cbd498
Replace the UUIDs with the appropriate values. For example, if you want to transfer a portable IP to network X and VM Y, execute the following:
http://localhost:8096/client/api?command=enableStaticNat&response=json&ipaddressid=a4bc37b2-4b4e-461d-9a62-b66414618e36&virtualmachineid=Y&networkid=X
15.13. Multiple Subnets in Shared Network
CloudStack provides you with the flexibility to add guest IP ranges from different subnets in Basic zones and security groups-enabled Advanced zones. For security groups-enabled Advanced zones, it implies multiple subnets can be added to the same VLAN. With the addition of this feature, you will be able to add IP address ranges from the same subnet or from a different one when IP addresses are exhausted. This in turn allows you to employ a larger number of subnets and thus reduce the address management overhead. You can delete the IP ranges you have added.
15.13.1. Prerequisites and Guidelines
This feature can only be implemented:
on IPv4 addresses
if virtual router is the DHCP provider
on KVM, XenServer, and VMware hypervisors
Manually configure the gateway of the new subnet before adding the IP range.
CloudStack supports only one gateway for a subnet; overlapping subnets are not currently supported
15.13.2. Adding Multiple Subnets to a Shared Network
Log in to the CloudStack UI as an administrator or end user.
In the left navigation, choose Infrastructure.
On Zones, click View More, then click the zone you want to work with.
Click Physical Network.
In the Guest node of the diagram, click Configure.
Click Networks.
Select the networks you want to work with.
Click View IP Ranges.
Click Add IP Range.
The Add IP Range dialog is displayed.
Specify the following:
All the fields are mandatory.
Gateway: The gateway for the tier you create. Ensure that the gateway is within the Super CIDR range that you specified while creating the VPC, and is not overlapped with the CIDR of any existing tier within the VPC.
Netmask: The netmask for the tier you create.
For example, if the VPC CIDR is 10.0.0.0/16 and the network tier CIDR is 10.0.1.0/24, the gateway of the tier is 10.0.1.1, and the netmask of the tier is 255.255.255.0.
Start IP/ End IP: A range of IP addresses that are accessible from the Internet and will be allocated to guest VMs. Enter the first and last IP addresses that define a range that CloudStack can assign to guest VMs.
Click OK.
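The equivalent createVlanIpRange call, extended as noted in Section 15.10 to accept ranges from a new subnet, might look like the sketch below. All IDs and addresses are placeholders; remember to configure the new subnet's gateway first.

```shell
#!/bin/sh
# Sketch: add an IP range from a new subnet to a shared network
# (placeholder values).
NET_ID="shared-network-uuid"
REQUEST="http://localhost:8096/client/api?command=createVlanIpRange\
&networkid=${NET_ID}&gateway=10.1.2.1&netmask=255.255.255.0\
&startip=10.1.2.10&endip=10.1.2.100&forvirtualnetwork=false"
echo "$REQUEST"
# curl -s "$REQUEST"
```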
15.14. Isolation in Advanced Zone Using Private VLAN
Isolation of guest traffic in shared networks can be achieved by using Private VLANs (PVLAN). PVLANs provide Layer 2 isolation between ports within the same VLAN. In a PVLAN-enabled shared network, a user VM cannot reach other user VMs, though it can reach the DHCP server and gateway. This in turn allows users to control traffic within a network, helps them deploy multiple applications without communication between the applications, and prevents communication with other users' VMs.
Isolate VMs in a shared network by using Private VLANs.
Supported on KVM, XenServer, and VMware hypervisors
A PVLAN-enabled shared network can be a part of multiple networks of a guest VM.
15.14.1. About Private VLAN
In an Ethernet switch, a VLAN is a broadcast domain where hosts can establish direct communication with one another at Layer 2. Private VLAN is designed as an extension of the VLAN standard to add further segmentation of the logical broadcast domain. A regular VLAN is a single broadcast domain, whereas a private VLAN partitions a larger VLAN broadcast domain into smaller sub-domains. A sub-domain is represented by a pair of VLANs: a Primary VLAN and a Secondary VLAN. The original VLAN that is being divided into smaller groups is called the Primary, which implies that all VLAN pairs in a private VLAN share the same Primary VLAN. All the secondary VLANs exist only inside the Primary. Each Secondary VLAN has a specific VLAN ID associated with it, which differentiates one sub-domain from another.
Three types of ports exist in a private VLAN domain, which essentially determine the behaviour of the participating hosts. Each port type has its own unique set of rules, which regulate a connected host's ability to communicate with other connected hosts within the same private VLAN domain. Each host that is part of a PVLAN pair can be configured by using one of these three port designations:
Promiscuous: A promiscuous port can communicate with all the interfaces, including the community and isolated host ports that belong to the secondary VLANs. In Promiscuous mode, hosts are connected to promiscuous ports and are able to communicate directly with resources on both primary and secondary VLAN. Routers, DHCP servers, and other trusted devices are typically attached to promiscuous ports.
Isolated VLANs: The ports within an isolated VLAN cannot communicate with each other at the layer-2 level. The hosts that are connected to Isolated ports can directly communicate only with the Promiscuous resources. If your customer device needs to have access only to a gateway router, attach it to an isolated port.
Community VLANs: The ports within a community VLAN can communicate with each other and with the promiscuous ports, but they cannot communicate with the ports in other communities at the layer-2 level. In a Community mode, direct communication is permitted only with the hosts in the same community and those that are connected to the Primary PVLAN in promiscuous mode. If your customer has two devices that need to be isolated from other customers' devices, but to be able to communicate among themselves, deploy them in community ports.
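The port rules above reduce to a simple reachability check. This Python sketch (a conceptual model of the Layer 2 rules, not switch or CloudStack code) captures which pairs of port types may communicate:

```python
def pvlan_can_communicate(port_a, port_b, community_a=None, community_b=None):
    """Layer 2 reachability between two PVLAN port types:
    'promiscuous', 'isolated', or 'community'."""
    # A promiscuous port (router, DHCP server) talks to everything.
    if "promiscuous" in (port_a, port_b):
        return True
    # Isolated ports reach only promiscuous ports.
    if "isolated" in (port_a, port_b):
        return False
    # Community ports reach each other only within the same community.
    return community_a == community_b

print(pvlan_can_communicate("isolated", "isolated"))                  # False
print(pvlan_can_communicate("community", "community", "web", "web"))  # True
print(pvlan_can_communicate("community", "promiscuous", "web"))       # True
```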
Use a PVLAN-supported switch.
All the Layer 2 switches, which are PVLAN-aware, are connected to each other, and one of them is connected to a router. All the ports connected to the hosts should be configured in trunk mode. Open the Management VLAN, Primary VLAN (public), and Secondary Isolated VLAN ports. Configure the switch port connected to the router in PVLAN promiscuous trunk mode, which translates an isolated VLAN to the primary VLAN for the PVLAN-unaware router.
Note that only the Cisco Catalyst 4500 has the PVLAN promiscuous trunk mode needed to connect both a normal VLAN and a PVLAN to a PVLAN-unaware switch. For other Catalyst switches with PVLAN support, connect the switch to the upper switch by using cables, one for each PVLAN pair.
Configure private VLAN on your physical switches out-of-band.
Before you use PVLAN on XenServer and KVM, enable Open vSwitch (OVS).
OVS on XenServer and KVM does not support PVLAN natively. Therefore, CloudStack simulates PVLAN on OVS for XenServer and KVM by modifying the flow table.
15.14.3. Creating a PVLAN-Enabled Guest Network
Log in to the CloudStack UI as administrator.
In the left navigation, choose Infrastructure.
On Zones, click View More.
Click the zone to which you want to add a guest network.
Click the Physical Network tab.
Click the physical network you want to work with.
On the Guest node of the diagram, click Configure.
Click the Network tab.
Click Add guest network.
The Add guest network window is displayed.
Specify the following:
Name: The name of the network. This will be visible to the user.
Description: The short description of the network that can be displayed to users.
VLAN ID: The unique ID of the VLAN.
Secondary Isolated VLAN ID: The unique ID of the Secondary Isolated VLAN.
Scope: The available scopes are Domain, Account, Project, and All.
Domain: Selecting Domain limits the scope of this guest network to the domain you specify. The network will not be available for other domains. If you select Subdomain Access, the guest network is available to all the sub domains within the selected domain.
Account: The account for which the guest network is being created. You must specify the domain the account belongs to.
Project: The project for which the guest network is being created. You must specify the domain the project belongs to.
All: The guest network is available to all the domains, accounts, and projects within the selected zone.
Network Offering: If the administrator has configured multiple network offerings, select the one you want to use for this network.
Gateway: The gateway that the guests should use.
Netmask: The netmask in use on the subnet the guests will use.
IP Range: A range of IP addresses that are accessible from the Internet and are assigned to the guest VMs.
Network Domain: A custom DNS suffix at the level of a network. If you want to assign a special domain name to the guest VM network, specify a DNS suffix.
Click OK to confirm.
15.15.1. About Security Groups
Security groups provide a way to isolate traffic to VMs. A security group is a group of VMs that filter their incoming and outgoing traffic according to a set of rules, called ingress and egress rules. These rules filter network traffic according to the IP address that is attempting to communicate with the VM. Security groups are particularly useful in zones that use basic networking, because there is a single guest network for all guest VMs. In advanced zones, security groups are supported only on the KVM hypervisor.
In a zone that uses advanced networking, you can instead define multiple guest networks to isolate traffic to VMs.
Each CloudStack account comes with a default security group that denies all inbound traffic and allows all outbound traffic. The default security group can be modified so that all new VMs inherit some other desired set of rules.
Any CloudStack user can set up any number of additional security groups. When a new VM is launched, it is assigned to the default security group unless another user-defined security group is specified. A VM can be a member of any number of security groups. Once a VM is assigned to a security group, it remains in that group for its entire lifetime; you cannot move a running VM from one security group to another.
You can modify a security group by deleting or adding any number of ingress and egress rules. When you do, the new rules apply to all VMs in the group, whether running or stopped.
If no ingress rules are specified, then no traffic will be allowed in, except for responses to any traffic that has been allowed out through an egress rule.
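The default-deny behavior described above can be modeled as a small stateful filter. The following Python sketch (an illustration of the semantics only, not the actual CloudStack implementation; the class name is an assumption) shows why replies to permitted outbound traffic are allowed in even when no ingress rules exist:

```python
import ipaddress

class SecurityGroupModel:
    """Conceptual model: deny inbound by default, allow outbound,
    and permit inbound replies to connections that were allowed out."""
    def __init__(self, ingress_rules=None):
        self.ingress_rules = ingress_rules or []   # allowed source CIDRs
        self.outbound_peers = set()                # peers we sent traffic to

    def send(self, dest_ip):
        # Default group: all outbound traffic is allowed.
        self.outbound_peers.add(dest_ip)
        return True

    def receive(self, src_ip):
        # Replies to traffic we sent out are always accepted.
        if src_ip in self.outbound_peers:
            return True
        # Otherwise, an ingress rule must match the source address.
        addr = ipaddress.ip_address(src_ip)
        return any(addr in ipaddress.ip_network(c) for c in self.ingress_rules)

sg = SecurityGroupModel()
print(sg.receive("203.0.113.5"))  # False: no ingress rules, no prior egress
sg.send("203.0.113.5")
print(sg.receive("203.0.113.5"))  # True: reply to allowed outbound traffic
```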
15.15.2. Adding a Security Group
A user or administrator can define a new security group.
Log in to the CloudStack UI as an administrator or end user.
In the left navigation, choose Network.
In Select view, choose Security Groups.
Click Add Security Group.
Provide a name and description.
Click OK.
The new security group appears in the Security Groups Details tab.
To make the security group useful, continue to Adding Ingress and Egress Rules to a Security Group.
15.15.3. Security Groups in Advanced Zones (KVM Only)
CloudStack provides the ability to use security groups to provide isolation between guests on a single shared, zone-wide network in an advanced zone where KVM is the hypervisor. Using security groups in advanced zones rather than multiple VLANs allows a greater range of options for setting up guest isolation in a cloud.
The following are not supported for this feature:
Two IP ranges with the same VLAN and different gateway or netmask in security group-enabled shared network.
Two IP ranges with the same VLAN and different gateway or netmask in account-specific shared networks.
Multiple VLAN ranges in security group-enabled shared network.
Multiple VLAN ranges in account-specific shared networks.
Security groups must be enabled in the zone in order for this feature to be used.
15.15.4. Enabling Security Groups
In order for security groups to function in a zone, the security groups feature must first be enabled for the zone. The administrator can do this when creating a new zone, by selecting a network offering that includes security groups. The procedure is described in Basic Zone Configuration in the Advanced Installation Guide. The administrator cannot enable security groups for an existing zone; they can be enabled only when a new zone is created.
15.15.5. Adding Ingress and Egress Rules to a Security Group
Log in to the CloudStack UI as an administrator or end user.
In the left navigation, choose Network.
In Select view, choose Security Groups, then click the security group you want.
To add an ingress rule, click the Ingress Rules tab and fill out the following fields to specify what network traffic is allowed into VM instances in this security group. If no ingress rules are specified, then no traffic will be allowed in, except for responses to any traffic that has been allowed out through an egress rule.
Add by CIDR/Account. Indicate whether the source of the traffic will be defined by IP address (CIDR) or an existing security group in a CloudStack account (Account). Choose Account if you want to allow incoming traffic from all VMs in another security group.
Protocol. The networking protocol that sources will use to send traffic to the security group. TCP and UDP are typically used for data exchange and end-user communications. ICMP is typically used to send error messages or network monitoring data.
Start Port, End Port. (TCP, UDP only) A range of listening ports that are the destination for the incoming traffic. If you are opening a single port, use the same number in both fields.
ICMP Type, ICMP Code. (ICMP only) The type of message and error code that will be accepted.
CIDR. (Add by CIDR only) To accept only traffic from IP addresses within a particular address block, enter a CIDR or a comma-separated list of CIDRs. The CIDR is the base IP address of the incoming traffic. For example, 192.168.0.0/22. To allow all CIDRs, set to 0.0.0.0/0.
Account, Security Group. (Add by Account only) To accept only traffic from another security group, enter the CloudStack account and name of a security group that has already been defined in that account. To allow traffic between VMs within the security group you are editing now, enter the same name you used in step 7.
The following example allows inbound HTTP access from anywhere:
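In code form, such a rule and its matching logic might look like the following Python sketch (a hypothetical helper for illustration, not part of CloudStack): a comma-separated CIDR list constrains the source, and the start/end ports constrain the destination.

```python
import ipaddress

def ingress_matches(rule, src_ip, dest_port):
    """Check whether incoming traffic matches an ingress rule with a
    comma-separated CIDR list and a start/end destination port range."""
    cidrs = [ipaddress.ip_network(c.strip()) for c in rule["cidr"].split(",")]
    in_cidr = any(ipaddress.ip_address(src_ip) in net for net in cidrs)
    in_ports = rule["start_port"] <= dest_port <= rule["end_port"]
    return in_cidr and in_ports

# An HTTP rule open to all CIDRs, as in the example above:
http_rule = {"cidr": "0.0.0.0/0", "start_port": 80, "end_port": 80}
print(ingress_matches(http_rule, "198.51.100.7", 80))   # True
print(ingress_matches(http_rule, "198.51.100.7", 443))  # False
```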
To add an egress rule, click the Egress Rules tab and fill out the following fields to specify what type of traffic is allowed to be sent out of VM instances in this security group. If no egress rules are specified, then all traffic will be allowed out. Once egress rules are specified, the following types of traffic are allowed out: traffic specified in egress rules; queries to DNS and DHCP servers; and responses to any traffic that has been allowed in through an ingress rule.
Add by CIDR/Account. Indicate whether the destination of the traffic will be defined by IP address (CIDR) or an existing security group in a CloudStack account (Account). Choose Account if you want to allow outgoing traffic to all VMs in another security group.
Protocol. The networking protocol that VMs will use to send outgoing traffic. TCP and UDP are typically used for data exchange and end-user communications. ICMP is typically used to send error messages or network monitoring data.
Start Port, End Port. (TCP, UDP only) A range of listening ports that are the destination for the outgoing traffic. If you are opening a single port, use the same number in both fields.
ICMP Type, ICMP Code. (ICMP only) The type of message and error code that will be sent.
CIDR. (Add by CIDR only) To send traffic only to IP addresses within a particular address block, enter a CIDR or a comma-separated list of CIDRs. The CIDR is the base IP address of the destination. For example, 192.168.0.0/22. To allow all CIDRs, set to 0.0.0.0/0.
Account, Security Group. (Add by Account only) To allow traffic to be sent to another security group, enter the CloudStack account and name of a security group that has already been defined in that account. To allow traffic between VMs within the security group you are editing now, enter its name.
Click Add.
15.16. External Firewalls and Load Balancers
CloudStack is capable of replacing its Virtual Router with an external Juniper SRX device and an optional external NetScaler or F5 load balancer for gateway and load balancing services. In this case, the VMs use the SRX as their gateway.
15.16.1. About Using a NetScaler Load Balancer
Citrix NetScaler is supported as an external network element for load balancing in advanced zones that use isolated networking. Set up an external load balancer when you want to provide load balancing through means other than CloudStack's provided virtual router.
In a Basic zone, load balancing service is supported only if Elastic IP or Elastic LB services are enabled.
When a NetScaler load balancer is used to provide EIP or ELB services in a Basic zone, ensure that all guest VM traffic enters and exits through the NetScaler device. When inbound traffic goes through the NetScaler device, it is translated by NAT from the public IP to the private IP, depending on the EIP/ELB configured on the public IP. Traffic originating from the guest VMs usually goes through the Layer 3 router. To ensure that outbound traffic goes through the NetScaler device providing EIP/ELB, the Layer 3 router must be configured with policy-based routing. A policy-based route must be set up so that all traffic originating from the guest VMs is directed to the NetScaler device. This is required to ensure that the outbound traffic from the guest VMs is translated by NAT to a public IP. For more information on Elastic IP, see
Section 15.11, “About Elastic IP”.
The NetScaler can be set up in direct (outside the firewall) mode. It must be added before any load balancing rules are deployed on guest VMs in the zone.
The functional behavior of the NetScaler with CloudStack is the same as described in the CloudStack documentation for using an F5 external load balancer. The only exception is that the F5 supports routing domains, and NetScaler does not. NetScaler cannot yet be used as a firewall.
The Citrix NetScaler comes in three varieties. The following table summarizes how these variants are treated in CloudStack.
15.16.3. Initial Setup of External Firewalls and Load Balancers
When the first VM is created for a new account, CloudStack programs the external firewall and load balancer to work with the VM. The following objects are created on the firewall:
A new logical interface to connect to the account's private VLAN. The interface IP is always the first IP of the account's private subnet (e.g. 10.1.1.1).
A source NAT rule that forwards all outgoing traffic from the account's private VLAN to the public Internet, using the account's public IP address as the source address.
A firewall filter counter that measures the number of bytes of outgoing traffic for the account.
The following objects are created on the load balancer:
15.16.4. Ongoing Configuration of External Firewalls and Load Balancers
Additional user actions (e.g. setting a port forward) will cause further programming of the firewall and load balancer. A user may request additional public IP addresses and forward traffic received at these IPs to specific VMs. This is accomplished by enabling static NAT for a public IP address, assigning the IP to a VM, and specifying a set of protocols and port ranges to open. When a static NAT rule is created, CloudStack programs the zone's external firewall with the following objects:
A static NAT rule that maps the public IP address to the private IP address of a VM.
A security policy that allows traffic within the set of protocols and port ranges that are specified.
A firewall filter counter that measures the number of bytes of incoming traffic to the public IP.
The number of incoming and outgoing bytes through source NAT, static NAT, and load balancing rules is measured and saved on each external element. This data is collected on a regular basis and stored in the CloudStack database.
15.16.5. Load Balancer Rules
A CloudStack user or administrator may create load balancing rules that balance traffic received at a public IP to one or more VMs. A user creates a rule, specifies an algorithm, and assigns the rule to a set of VMs.
If you create load balancing rules while using a network service offering that includes an external load balancer device such as NetScaler, and later change the network service offering to one that uses the CloudStack virtual router, you must create a firewall rule on the virtual router for each of your existing load balancing rules so that they continue to function.
15.16.5.1. Adding a Load Balancer Rule
Log in to the CloudStack UI as an administrator or end user.
In the left navigation, choose Network.
Click the name of the network where you want to load balance the traffic.
Click View IP Addresses.
Click the IP address for which you want to create the rule, then click the Configuration tab.
In the Load Balancing node of the diagram, click View All.
In a Basic zone, you can also create a load balancing rule without acquiring or selecting an IP address. CloudStack internally assigns an IP address when you create the load balancing rule; the address is listed in the IP Addresses page when the rule is created.
To do that, select the name of the network, then click the Add Load Balancer tab. Continue with step 7.
Fill in the following:
Name: A name for the load balancer rule.
Public Port: The port receiving incoming traffic to be balanced.
Private Port: The port that the VMs will use to receive the traffic.
Algorithm: Choose the load balancing algorithm you want CloudStack to use. CloudStack supports a variety of well-known algorithms. If you are not familiar with these choices, you will find plenty of information about them on the Internet.
Stickiness: (Optional) Click Configure and choose the algorithm for the stickiness policy. See Sticky Session Policies for Load Balancer Rules.
Ping path (Optional): Sequence of destinations to which to send health check queries. Default: / (all).
Response time (Optional): How long to wait for a response from the health check (2 - 60 seconds). Default: 5 seconds.
Interval time (Optional): Amount of time between health checks (1 second - 5 minutes). The default value is set in the global configuration parameter lbrule_health_check_time_interval.
Healthy threshold (Optional): Number of consecutive health check successes that are required before declaring an instance healthy. Default: 2.
Unhealthy threshold (Optional): Number of consecutive health check failures that are required before declaring an instance unhealthy. Default: 10.
Click Add VMs, then select two or more VMs that will divide the load of incoming traffic, and click Apply.
The new load balancer rule appears in the list. You can repeat these steps to add more load balancer rules for this IP address.
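To make the algorithm choice concrete, here is a minimal Python sketch of round robin, one of the well-known algorithms CloudStack supports (illustrative only; this is not the virtual router's or NetScaler's implementation):

```python
import itertools

def round_robin(vms):
    """Cycle through the assigned VMs, handing each incoming
    connection to the next VM in turn."""
    pool = itertools.cycle(vms)
    def pick():
        return next(pool)
    return pick

pick = round_robin(["vm-1", "vm-2", "vm-3"])
print([pick() for _ in range(5)])  # ['vm-1', 'vm-2', 'vm-3', 'vm-1', 'vm-2']
```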
15.16.5.2. Sticky Session Policies for Load Balancer Rules
Sticky sessions are used in Web-based applications to ensure continued availability of information across the multiple requests in a user's session. For example, if a shopper is filling a cart, you need to remember what has been purchased so far. The concept of "stickiness" is also referred to as persistence or maintaining state.
Any load balancer rule defined in CloudStack can have a stickiness policy. The policy consists of a name, stickiness method, and parameters. The parameters are name-value pairs or flags, which are defined by the load balancer vendor. The stickiness method could be load balancer-generated cookie, application-generated cookie, or source-based. In the source-based method, the source IP address is used to identify the user and locate the user’s stored data. In the other methods, cookies are used. The cookie generated by the load balancer or application is included in request and response URLs to create persistence. The cookie name can be specified by the administrator or automatically generated. A variety of options are provided to control the exact behavior of cookies, such as how they are generated and whether they are cached.
For the most up to date list of available stickiness methods, see the CloudStack UI or call listNetworks and check the SupportedStickinessMethods capability.
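The source-based method described above can be sketched as a deterministic mapping from source IP to back-end VM, so repeat requests from the same client land on the same VM (a conceptual Python sketch, not vendor code; the hashing scheme is an assumption):

```python
import hashlib

def source_based_pick(src_ip, vms):
    """Map a source IP to a back-end VM deterministically, so the
    same client always reaches the same VM (session persistence)."""
    digest = hashlib.sha256(src_ip.encode()).hexdigest()
    return vms[int(digest, 16) % len(vms)]

vms = ["vm-1", "vm-2", "vm-3"]
first = source_based_pick("198.51.100.7", vms)
# Repeat requests from the same source are sticky:
print(source_based_pick("198.51.100.7", vms) == first)  # True
```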
15.16.5.3. Health Checks for Load Balancer Rules
(NetScaler load balancer only; requires NetScaler version 10.0)
Health checks are used in load-balanced applications to ensure that requests are forwarded only to running, available services. When creating a load balancer rule, you can specify a health check policy. This is in addition to specifying the stickiness policy, algorithm, and other load balancer rule options. You can configure one health check policy per load balancer rule.
Any load balancer rule defined on a NetScaler load balancer in CloudStack can have a health check policy. The policy consists of a ping path, thresholds to define "healthy" and "unhealthy" states, health check frequency, and timeout wait interval.
When a health check policy is in effect, the load balancer will stop forwarding requests to any resources that are found to be unhealthy. If the resource later becomes available again, the periodic health check will discover it, and the resource will once again be added to the pool of resources that can receive requests from the load balancer. At any given time, the most recent result of the health check is displayed in the UI. For any VM that is attached to a load balancer rule with a health check configured, the state will be shown as UP or DOWN in the UI depending on the result of the most recent health check.
You can delete or modify existing health check policies.
To configure how often the health check is performed by default, use the global configuration setting healthcheck.update.interval (default value is 600 seconds). You can override this value for an individual health check policy.
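The healthy and unhealthy thresholds described above behave like a small state machine over consecutive check results. A conceptual Python sketch (not NetScaler code; the class is hypothetical):

```python
class HealthCheckState:
    """Track consecutive health check results; a VM is declared UP after
    `healthy` consecutive successes and DOWN after `unhealthy` failures."""
    def __init__(self, healthy=2, unhealthy=10):
        self.healthy, self.unhealthy = healthy, unhealthy
        self.successes = self.failures = 0
        self.state = "UP"

    def record(self, success):
        if success:
            self.successes += 1
            self.failures = 0
            if self.successes >= self.healthy:
                self.state = "UP"
        else:
            self.failures += 1
            self.successes = 0
            if self.failures >= self.unhealthy:
                self.state = "DOWN"
        return self.state

hc = HealthCheckState(healthy=2, unhealthy=3)
for ok in [False, False, False]:
    hc.record(ok)
print(hc.state)  # DOWN
hc.record(True); hc.record(True)
print(hc.state)  # UP
```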
15.16.6. Configuring AutoScale
AutoScaling allows you to scale your back-end services or application VMs up or down seamlessly and automatically according to the conditions you define. With AutoScaling enabled, you can ensure that the number of VMs you are using scales up seamlessly when demand increases and decreases automatically when demand subsides. Thus it helps you save compute costs by terminating underused VMs automatically and launching new VMs when you need them, without the need for manual intervention.
NetScaler AutoScaling is designed to seamlessly launch or terminate VMs based on user-defined conditions. Conditions for triggering a scaleup or scaledown action can vary from a simple use case like monitoring the CPU usage of a server to a complex use case of monitoring a combination of a server's responsiveness and its CPU usage. For example, you can configure AutoScaling to launch an additional VM whenever CPU usage exceeds 80 percent for 15 minutes, or to remove a VM whenever CPU usage is less than 20 percent for 30 minutes.
CloudStack uses the NetScaler load balancer to monitor all aspects of a system's health; NetScaler works in unison with CloudStack to initiate scale-up or scale-down actions.
AutoScale is supported on NetScaler Release 10 Build 73.e and beyond.
Before you configure an AutoScale rule, consider the following:
Ensure that the necessary template is prepared before configuring AutoScale. When a VM is deployed by using a template and when it comes up, the application should be up and running.
If the application is not running, the NetScaler device considers the VM as ineffective and continues provisioning the VMs unconditionally until the resource limit is exhausted.
Deploy the templates you prepared. Ensure that the applications come up on the first boot and are ready to take traffic. Observe the time required to deploy the template. Consider this time when you specify the quiet time while configuring AutoScale.
The AutoScale feature supports SNMP counters that can be used to define conditions for taking scale-up or scale-down actions. To monitor an SNMP-based counter, ensure that the SNMP agent is installed in the template used for creating the AutoScale VMs, and that the SNMP operations work with the configured SNMP community and port by using standard SNMP managers. For example, see
Section 15.16.2, “Configuring SNMP Community String on a RHEL Server” to configure SNMP on a RHEL machine.
Ensure that the endpointe.url parameter present in the Global Settings is set to the Management Server API URL. For example, http://10.102.102.22:8080/client/api. In a multi-node Management Server deployment, use the virtual IP address configured in the load balancer for the management server’s cluster. Additionally, ensure that the NetScaler device has access to this IP address to provide AutoScale support.
If you update the endpointe.url, disable the AutoScale functionality of the load balancer rules in the system, then enable them back to reflect the changes. For more information see
Updating an AutoScale Configuration
If the API Key and Secret Key are regenerated for an AutoScale user, ensure that the AutoScale functionality of the load balancers that the user participates in are disabled and then enabled to reflect the configuration changes in the NetScaler.
In an advanced zone, ensure that at least one VM is present before configuring a load balancer rule with AutoScale. Having one VM in the network ensures that the network is in the Implemented state for configuring AutoScale.
Specify the following:
Template: A template consists of a base OS image and application. A template is used to provision the new instance of an application on a scaleup action. When a VM is deployed from a template, the VM can start taking the traffic from the load balancer without any admin intervention. For example, if the VM is deployed for a Web service, it should have the Web server running, the database connected, and so on.
Compute offering: A predefined set of virtual hardware attributes, including CPU speed, number of CPUs, and RAM size, that the user can select when creating a new virtual machine instance. Choose one of the compute offerings to be used while provisioning a VM instance as part of scaleup action.
Min Instance: The minimum number of active VM instances that is assigned to a load balancing rule. The active VM instances are the application instances that are up and serving the traffic, and are being load balanced. This parameter ensures that a load balancing rule has at least the configured number of active VM instances available to serve the traffic.
If an application, such as SAP, running on a VM instance is down for some reason, the VM is then not counted as part of Min Instance parameter, and the AutoScale feature initiates a scaleup action if the number of active VM instances is below the configured value. Similarly, when an application instance comes up from its earlier down state, this application instance is counted as part of the active instance count and the AutoScale process initiates a scaledown action when the active instance count breaches the Max instance value.
Max Instance: Maximum number of active VM instances that should be assigned to a load balancing rule. This parameter defines the upper limit of active VM instances that can be assigned to a load balancing rule.
Specifying a large value for the maximum instance parameter might result in provisioning large number of VM instances, which in turn leads to a single load balancing rule exhausting the VM instances limit specified at the account or domain level.
If an application, such as SAP, running on a VM instance is down for some reason, the VM is not counted as part of Max Instance parameter. So there may be scenarios where the number of VMs provisioned for a scaleup action might be more than the configured Max Instance value. Once the application instances in the VMs are up from an earlier down state, the AutoScale feature starts aligning to the configured Max Instance value.
Specify the following scale-up and scale-down policies:
Duration: The duration, in seconds, for which the conditions you specify must be true to trigger a scaleup action. The conditions defined should hold true for the entire duration you specify for an AutoScale action to be invoked.
Counter: The performance counters expose the state of the monitored instances. By default, CloudStack offers four performance counters: Three SNMP counters and one NetScaler counter. The SNMP counters are Linux User CPU, Linux System CPU, and Linux CPU Idle. The NetScaler counter is ResponseTime. The root administrator can add additional counters into CloudStack by using the CloudStack API.
Operator: The following five relational operators are supported in AutoScale feature: Greater than, Less than, Less than or equal to, Greater than or equal to, and Equal to.
Threshold: Threshold value to be used for the counter. Once the counter defined above breaches the threshold value, the AutoScale feature initiates a scaleup or scaledown action.
Add: Click Add to add the condition.
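Putting the pieces together, a scale condition is a (counter, operator, threshold) triple that must hold for the full configured duration. This Python sketch (a conceptual model of the evaluation, not the NetScaler implementation) evaluates the samples polled over that duration:

```python
import operator

OPS = {">": operator.gt, "<": operator.lt, "<=": operator.le,
       ">=": operator.ge, "==": operator.eq}

def condition_holds(samples, op, threshold):
    """A condition triggers only if every polled sample over the
    configured duration satisfies (counter OP threshold)."""
    return all(OPS[op](value, threshold) for value in samples)

# CPU usage polled every 30 seconds over the duration; scale up if > 80%:
print(condition_holds([85, 90, 88, 91, 86], ">", 80))  # True
print(condition_holds([85, 70, 88], ">", 80))          # False: one sample dipped
```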
Additionally, if you want to configure the advanced settings, click Show advanced settings, and specify the following:
Polling interval: The frequency at which the conditions (the combination of counter, operator, and threshold) are evaluated before taking a scale-up or scale-down action. The default polling interval is 30 seconds.
Quiet Time: This is the cool down period after an AutoScale action is initiated. The time includes the time taken to complete provisioning a VM instance from its template and the time taken by an application to be ready to serve traffic. This quiet time allows the fleet to come up to a stable state before any action can take place. The default is 300 seconds.
Destroy VM Grace Period: The duration in seconds, after a scaledown action is initiated, to wait before the VM is destroyed as part of scaledown action. This is to ensure graceful close of any pending sessions or transactions being served by the VM marked for destroy. The default is 120 seconds.
Security Groups: Security groups provide a way to isolate traffic to the VM instances. A security group is a group of VMs that filter their incoming and outgoing traffic according to a set of rules, called ingress and egress rules. These rules filter network traffic according to the IP address that is attempting to communicate with the VM.
Disk Offerings: A predefined set of disk size for primary data storage.
SNMP Community: The SNMP community string to be used by the NetScaler device to query the configured counter value from the provisioned VM instances. Default is public.
SNMP Port: The port number on which the SNMP agent that runs on the provisioned VMs is listening. The default port is 161.
User: This is the user that the NetScaler device uses to invoke scale-up and scale-down API calls to the cloud. If no option is specified, the user who configures AutoScaling is applied. Specify another user name to override.
Apply: Click Apply to create the AutoScale configuration.
The button toggles between Enable and Disable, depending on whether AutoScale is currently enabled. After the maintenance operations are done, you can enable the AutoScale configuration again. To enable, open the AutoScale configuration page again, then click the Enable AutoScale button.
You can update the various parameters and add or delete the conditions in a scaleup or scaledown rule. Before you update an AutoScale configuration, ensure that you disable the AutoScale load balancer rule by clicking the Disable AutoScale button.
After you modify the required AutoScale parameters, click Apply. To apply the new AutoScale policies, open the AutoScale configuration page again, then click the Enable AutoScale button.
An administrator should not assign a VM to a load balancing rule which is configured for AutoScale.
If NetScaler is shut down or restarted before VM provisioning is completed, the provisioned VM cannot become a part of the load balancing rule, even though the intent was to assign it to one. As a workaround, rename the AutoScale-provisioned VMs based on the rule name or ID, so that at any point in time the VMs can be reconciled with their load balancing rule.
Making API calls outside the context of AutoScale, such as destroyVM, on an autoscaled VM leaves the load balancing configuration in an inconsistent state. Though the VM is removed from the load balancer rule, NetScaler continues to show the VM as a service assigned to the rule.
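API calls such as destroyVM are issued against the CloudStack API, which authenticates each request with an HMAC-SHA1 signature over the sorted, lowercased query string. The sketch below implements that documented signing scheme; the API and secret keys are placeholders, and the call is only constructed, not sent:

```python
import base64
import hashlib
import hmac
import urllib.parse

def sign_request(params: dict, api_key: str, secret_key: str) -> str:
    """Build a signed CloudStack API query string.

    Per the documented scheme: add the API key, sort the parameters by
    name, URL-encode the values, lowercase the whole string, HMAC-SHA1
    it with the secret key, and append the base64 signature.
    """
    params = dict(params, apikey=api_key)
    query = "&".join(
        f"{k}={urllib.parse.quote(str(v), safe='')}"
        for k, v in sorted(params.items())
    )
    digest = hmac.new(
        secret_key.encode(), query.lower().encode(), hashlib.sha1
    ).digest()
    signature = urllib.parse.quote(base64.b64encode(digest), safe="")
    return f"{query}&signature={signature}"

# Example (placeholder keys and VM id): a destroyVirtualMachine call.
qs = sign_request(
    {"command": "destroyVirtualMachine", "id": "vm-uuid", "response": "json"},
    api_key="APIKEY",
    secret_key="SECRET",
)
```

The resulting query string would be appended to the management server's API endpoint URL; signing is deterministic, so the same parameters and keys always yield the same signature.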
15.17. Global Server Load Balancing Support
CloudStack supports Global Server Load Balancing (GSLB) functionality to provide business continuity and enable seamless resource movement within a CloudStack environment. CloudStack achieves this by extending its integration with the NetScaler Application Delivery Controller (ADC), which also provides various GSLB capabilities, such as disaster recovery and load balancing. GSLB in CloudStack is achieved through DNS redirection.
To support this functionality, region-level services and service providers are introduced. A new service, GSLB, is introduced as a region-level service, and a GSLB service provider is introduced to provide it. Currently, NetScaler is the supported GSLB provider in CloudStack. GSLB functionality works in an Active-Active data center environment.
15.17.1. About Global Server Load Balancing
Global Server Load Balancing (GSLB) is an extension of load balancing functionality, which is highly efficient in avoiding downtime. Based on the nature of deployment, GSLB represents a set of technologies that is used for various purposes, such as load sharing, disaster recovery, performance, and legal obligations. With GSLB, workloads can be distributed across multiple data centers situated at geographically separated locations. GSLB can also provide an alternate location for accessing a resource in the event of a failure, or to provide a means of shifting traffic easily to simplify maintenance, or both.
15.17.1.1. Components of GSLB
A typical GSLB environment is comprised of the following components:
GSLB Site: In CloudStack terminology, GSLB sites are represented by zones that are mapped to data centers, each of which has various network appliances. Each GSLB site is managed by a NetScaler appliance that is local to that site. Each of these appliances treats its own site as the local site and all other sites, managed by other appliances, as remote sites. It is the central entity in a GSLB deployment, and is represented by a name and an IP address.
GSLB Services: A GSLB service is typically represented by a load balancing or content switching virtual server. In a GSLB environment, you can have a local as well as remote GSLB services. A local GSLB service represents a local load balancing or content switching virtual server. A remote GSLB service is the one configured at one of the other sites in the GSLB setup. At each site in the GSLB setup, you can create one local GSLB service and any number of remote GSLB services.
GSLB Virtual Servers: A GSLB virtual server refers to one or more GSLB services and balances traffic across the VMs in multiple zones by using the CloudStack functionality. It evaluates the configured GSLB methods or algorithms to select a GSLB service to which to send the client requests. One or more virtual servers from different zones are bound to the GSLB virtual server. A GSLB virtual server does not have a public IP associated with it; instead, it has an FQDN DNS name.
Load Balancing or Content Switching Virtual Servers: According to Citrix NetScaler terminology, a load balancing or content switching virtual server represents one or many servers on the local network. Clients send their requests to the load balancing or content switching virtual server’s virtual IP (VIP) address, and the virtual server balances the load across the local servers. After a GSLB virtual server selects a GSLB service representing either a local or a remote load balancing or content switching virtual server, the client sends the request to that virtual server’s VIP address.
DNS VIPs: DNS virtual IP represents a load balancing DNS virtual server on the GSLB service provider. The DNS requests for domains for which the GSLB service provider is authoritative can be sent to a DNS VIP.
Authoritative DNS: ADNS (Authoritative Domain Name Server) is a service that provides actual answer to DNS queries, such as web site IP address. In a GSLB environment, an ADNS service responds only to DNS requests for domains for which the GSLB service provider is authoritative. When an ADNS service is configured, the service provider owns that IP address and advertises it. When you create an ADNS service, the NetScaler responds to DNS queries on the configured ADNS service IP and port.
15.17.1.2. How Does GSLB Work in CloudStack?
Global server load balancing is used to manage the traffic flow to a web site hosted in two separate zones that ideally are in different geographic locations. The following is an illustration of how GSLB functionality is provided in CloudStack: an organization, xyztelco, has set up a public cloud that spans two zones, Zone-1 and Zone-2, across geographically separated data centers that are managed by CloudStack. Tenant-A of the cloud launches a highly available solution by using the xyztelco cloud. For that purpose, they launch two instances in each zone: VM1 and VM2 in Zone-1 and VM5 and VM6 in Zone-2. Tenant-A acquires a public IP, IP-1, in Zone-1 and configures a load balancer rule to load balance the traffic between the VM1 and VM2 instances. CloudStack orchestrates setting up a virtual server on the LB service provider in Zone-1. Virtual server 1, set up on the LB service provider in Zone-1, represents a publicly accessible virtual server that clients reach at IP-1. The client traffic to virtual server 1 at IP-1 is load balanced across the VM1 and VM2 instances.
Tenant-A acquires another public IP, IP-2, in Zone-2 and sets up a load balancer rule to load balance the traffic between the VM5 and VM6 instances. Similarly in Zone-2, CloudStack orchestrates setting up a virtual server on the LB service provider. Virtual server 2, set up on the LB service provider in Zone-2, represents a publicly accessible virtual server that clients reach at IP-2. The client traffic that reaches virtual server 2 at IP-2 is load balanced across the VM5 and VM6 instances. At this point Tenant-A has the service enabled in both zones, but has no means to set up a disaster recovery plan if one of the zones fails. Additionally, there is no way for Tenant-A to intelligently load balance the traffic to one of the zones based on load, proximity, and so on. The cloud administrator of xyztelco provisions a GSLB service provider in both zones. A GSLB provider is typically an ADC that has the ability to act as an ADNS (Authoritative Domain Name Server) and has a mechanism to monitor the health of virtual servers at both local and remote sites. The cloud admin enables GSLB as a service to the tenants that use zones 1 and 2.
Tenant-A wishes to leverage the GSLB service provided by the xyztelco cloud. Tenant-A configures a GSLB rule to load balance traffic across virtual server 1 at Zone-1 and virtual server 2 at Zone-2. The domain name is provided as A.xyztelco.com. CloudStack orchestrates setting up GSLB virtual server 1 on the GSLB service provider at Zone-1. CloudStack binds virtual server 1 of Zone-1 and virtual server 2 of Zone-2 to GSLB virtual server 1. GSLB virtual server 1 is configured to start monitoring the health of virtual servers 1 and 2 in Zone-1. CloudStack also orchestrates setting up GSLB virtual server 2 on the GSLB service provider at Zone-2, and binds virtual server 1 of Zone-1 and virtual server 2 of Zone-2 to GSLB virtual server 2. GSLB virtual server 2 is configured to start monitoring the health of virtual servers 1 and 2. CloudStack binds the domain A.xyztelco.com to both GSLB virtual servers 1 and 2. At this point, the Tenant-A service is globally reachable at A.xyztelco.com. The private DNS server for the domain xyztelco.com is configured by the admin out-of-band to resolve the domain A.xyztelco.com to the GSLB providers at both zones, which are configured as ADNS for the domain A.xyztelco.com. When a client sends a DNS request to resolve A.xyztelco.com, it eventually gets a DNS delegation to the addresses of the GSLB providers at zones 1 and 2. The client DNS request is received by a GSLB provider, which picks up the GSLB virtual server associated with the requested domain. Depending on the health of the virtual servers being load balanced, the DNS request for the domain is resolved to the public IP associated with the selected virtual server.
15.17.2. Configuring GSLB
To configure a GSLB deployment, you must first configure a standard load balancing setup for each zone. This enables you to balance load across the different servers in each zone in the region. Then on the NetScaler side, configure both NetScaler appliances that you plan to add to each zone as authoritative DNS (ADNS) servers. Next, create a GSLB site for each zone, configure GSLB virtual servers for each site, create GSLB services, and bind the GSLB services to the GSLB virtual servers. Finally, bind the domain to the GSLB virtual servers. The GSLB configurations on the two appliances at the two different zones are identical, although each site's load-balancing configuration is specific to that site.
Perform the following as a cloud administrator. As per the example given above, the administrator of xyztelco is the one who sets up GSLB:
In the cloud.dns.name global parameter, specify the DNS name of your cloud; the tenants' GSLB services make use of this name.
Configure a standard load balancing setup.
Configure a GSLB site with the site name formed from the domain name details.
As per the example given above, the site names are A.xyztelco.com and B.xyztelco.com.
Configure a GSLB virtual server.
Configure a GSLB service for each virtual server.
Bind the GSLB services to the GSLB virtual server.
Bind domain name to GSLB virtual server. Domain name is obtained from the domain details.
In each zone that participates in GSLB, add a GSLB-enabled NetScaler device.
As a domain administrator or user, perform the following:
Add a GSLB rule on both the sites.
Assign load balancer rules.
15.17.2.1. Prerequisites and Guidelines
The GSLB functionality is supported in both Basic and Advanced zones.
GSLB is added as a new network service.
A GSLB service provider can be added to a physical network in a zone.
The admin can enable or disable GSLB functionality at the region level.
The admin can configure a zone as GSLB capable or enabled.
A zone is considered GSLB capable only if a GSLB service provider is provisioned in the zone.
When users have VMs deployed in multiple availability zones which are GSLB enabled, they can use the GSLB functionality to load balance traffic across the VMs in multiple zones.
Users can use GSLB to load balance VMs across zones in a region only if the admin has enabled GSLB in that region.
The users can load balance traffic across the availability zones in the same region or different regions.
The admin can configure DNS name for the entire cloud.
Users can specify a unique name across the cloud for a globally load balanced service. The provided name is used as the domain name under the DNS name associated with the cloud.
The user-provided name along with the admin-provided DNS name is used to produce a globally resolvable FQDN for the globally load balanced service of the user. For example, if the admin has configured xyztelco.com as the DNS name for the cloud, and user specifies 'foo' for the GSLB virtual service, then the FQDN name of the GSLB virtual service is foo.xyztelco.com.
While setting up GSLB, users can select a load balancing method, such as round robin, to use across the zones that are part of GSLB.
Users can set a weight on each zone-level virtual server. The weight is considered by the load balancing method when distributing traffic.
The GSLB functionality supports session persistence, where a series of client requests for a particular domain name is sent to a virtual server in the same zone.
Statistics are collected from each GSLB virtual server.
15.17.2.2. Enabling GSLB in NetScaler
In each zone, add a GSLB-enabled NetScaler device for load balancing.
Log in as administrator to the CloudStack UI.
In the left navigation bar, click Infrastructure.
In Zones, click View More.
Choose the zone you want to work with.
Click the Physical Network tab, then click the name of the physical network.
In the Network Service Providers node of the diagram, click Configure.
You might have to scroll down to see this.
Click NetScaler.
Click Add NetScaler device and provide the following:
For NetScaler:
IP Address: The IP address of the SDX.
Username/Password: The authentication credentials to access the device. CloudStack uses these credentials to access the device.
Type: The type of device that is being added. It could be F5 Big Ip Load Balancer, NetScaler VPX, NetScaler MPX, or NetScaler SDX. For a comparison of the NetScaler types, see the CloudStack Administration Guide.
Public interface: Interface of device that is configured to be part of the public network.
Private interface: Interface of device that is configured to be part of the private network.
GSLB service: Select this option.
GSLB service Public IP: The public IP address of the NAT translator for a GSLB service that is on a private network.
GSLB service Private IP: The private IP of the GSLB service.
Number of Retries. Number of times to attempt a command on the device before considering the operation failed. Default is 2.
Capacity: The number of networks the device can handle.
Dedicated: When marked as dedicated, this device will be dedicated to a single account. When Dedicated is checked, the value in the Capacity field has no significance; implicitly, its value is 1.
Click OK.
15.17.2.3. Adding a GSLB Rule
Log in to the CloudStack UI as a domain administrator or user.
In the left navigation pane, click Region.
Select the region for which you want to create a GSLB rule.
In the Details tab, click View GSLB.
Click Add GSLB.
The Add GSLB page is displayed.
Specify the following:
Name: Name for the GSLB rule.
Description: (Optional) A short description of the GSLB rule that can be displayed to users.
GSLB Domain Name: A preferred domain name for the service.
Algorithm: (Optional) The algorithm to use to load balance the traffic across the zones. The options are Round Robin, Least Connection, and Proximity.
Service Type: The transport protocol to use for GSLB. The options are TCP and UDP.
Domain: (Optional) The domain for which you want to create the GSLB rule.
Account: (Optional) The account on which you want to apply the GSLB rule.
Click OK to confirm.
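The same rule can also be created through the CloudStack API with createGlobalLoadBalancerRule. The sketch below only assembles the request parameters that mirror the UI fields above; the parameter names are taken from the API reference as we recall it, and the values are placeholders, so verify both against the API documentation for your release:

```python
# Hedged sketch of createGlobalLoadBalancerRule parameters mirroring
# the Add GSLB dialog. All values are placeholders.
params = {
    "command": "createGlobalLoadBalancerRule",
    "name": "web-gslb",                  # Name
    "description": "GSLB for web tier",  # Description (optional)
    "gslbdomainname": "foo",             # GSLB Domain Name
    "gslblbmethod": "roundrobin",        # Algorithm (optional)
    "gslbservicetype": "tcp",            # Service Type: tcp or udp
    "regionid": 1,                       # Region in which to create the rule
}
```

These parameters would then be signed and submitted like any other CloudStack API call.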
15.17.2.4. Assigning Load Balancing Rules to GSLB
Log in to the CloudStack UI as a domain administrator or user.
In the left navigation pane, click Region.
Select the region for which you want to create a GSLB rule.
In the Details tab, click View GSLB.
Select the desired GSLB.
Click view assigned load balancing.
Click assign more load balancing.
Select the load balancing rule you have created for the zone.
Click OK to confirm.
15.17.3. Known Limitation
Currently, CloudStack does not support orchestration of services across the zones. The notion of services and service providers in a region is yet to be introduced.
15.18. Guest IP Ranges
The IP ranges for guest network traffic are set on a per-account basis by the user. This allows the users to configure their network in a fashion that will enable VPN linking between their guest network and their clients.
In shared networks in Basic zones and Security Group-enabled Advanced networks, you have the flexibility to add multiple guest IP ranges from different subnets. You can add or remove one IP range at a time. For more information, see Section 15.10, “About Multiple IP Ranges”.
15.19. Acquiring a New IP Address
Log in to the CloudStack UI as an administrator or end user.
In the left navigation, choose Network.
Click the name of the network you want to work with.
Click View IP Addresses.
Click Acquire New IP.
The Acquire New IP window is displayed.
Specify whether you want a cross-zone IP or not.
If you want a Portable IP, click Yes in the confirmation dialog. If you want a normal public IP, click No.
Within a few moments, the new IP address should appear with the state Allocated. You can now use the IP address in port forwarding or static NAT rules.
15.20. Releasing an IP Address
When the last rule for an IP address is removed, you can release that IP address. The IP address still belongs to the VPC; however, it can be picked up for any guest network again.
Log in to the CloudStack UI as an administrator or end user.
In the left navigation, choose Network.
Click the name of the network you want to work with.
Click View IP Addresses.
Click the IP address you want to release.
Click the Release IP button.
15.21. Static NAT
A static NAT rule maps a public IP address to the private IP address of a VM in order to allow Internet traffic into the VM. The public IP address always remains the same, which is why it is called “static” NAT. This section tells how to enable or disable static NAT for a particular IP address.
15.21.1. Enabling or Disabling Static NAT
If port forwarding rules are already in effect for an IP address, you cannot enable static NAT to that IP.
If a guest VM is part of more than one network, static NAT rules will function only if they are defined on the default network.
Log in to the CloudStack UI as an administrator or end user.
In the left navigation, choose Network.
Click the name of the network you want to work with.
Click View IP Addresses.
Click the IP address you want to work with.
Click the Static NAT button.
The button toggles between Enable and Disable, depending on whether static NAT is currently enabled for the IP address.
If you are enabling static NAT, a dialog appears where you can choose the destination VM and click Apply.
15.22. IP Forwarding and Firewalling
By default, all incoming traffic to the public IP address is rejected. All outgoing traffic from the guests is also blocked by default.
To allow incoming traffic, users may set up firewall rules and/or port forwarding rules. For example, you can use a firewall rule to open a range of ports on the public IP address, such as 33 through 44. Then use port forwarding rules to direct traffic from individual ports within that range to specific ports on user VMs. For example, one port forwarding rule could route incoming traffic on the public IP's port 33 to port 100 on one user VM's private IP.
15.22.1. Firewall Rules
By default, all incoming traffic to the public IP address is rejected by the firewall. To allow external traffic, you can open firewall ports by specifying firewall rules. You can optionally specify one or more CIDRs to filter the source IPs. This is useful when you want to allow only incoming requests from certain IP addresses.
You cannot use firewall rules to open ports for an elastic IP address. When elastic IP is used, outside access is instead controlled through the use of security groups. See Section 15.15.2, “Adding a Security Group”.
Firewall rules can be created using the Firewall tab in the Management Server UI. This tab is not displayed by default when CloudStack is installed. To display the Firewall tab, the CloudStack administrator must set the global configuration parameter firewall.rule.ui.enabled to "true."
To create a firewall rule:
Log in to the CloudStack UI as an administrator or end user.
In the left navigation, choose Network.
Click the name of the network you want to work with.
Click View IP Addresses.
Click the IP address you want to work with.
Click the Configuration tab and fill in the following values.
Source CIDR. (Optional) To accept only traffic from IP addresses within a particular address block, enter a CIDR or a comma-separated list of CIDRs. Example: 192.168.0.0/22. Leave empty to allow all CIDRs.
Protocol. The communication protocol in use on the opened port(s).
Start Port and End Port. The port(s) you want to open on the firewall. If you are opening a single port, use the same number in both fields.
ICMP Type and ICMP Code. Used only if Protocol is set to ICMP. Provide the type and code required by the ICMP protocol to fill out the ICMP header. Refer to ICMP documentation for more details if you are not sure what to enter.
Click Add.
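The Source CIDR field accepts a comma-separated list, with an empty value meaning all CIDRs. A small helper using Python's standard ipaddress module can validate such a list before you submit the rule; this is an illustrative client-side check, not part of CloudStack:

```python
import ipaddress

def parse_cidr_list(value: str):
    """Validate a comma-separated CIDR list as the Source CIDR field
    accepts it. An empty string means 'allow all CIDRs'.

    Raises ValueError on a malformed CIDR, mirroring the validation
    the UI would perform.
    """
    if not value.strip():
        return [ipaddress.ip_network("0.0.0.0/0")]
    return [ipaddress.ip_network(c.strip()) for c in value.split(",")]

# The documented example value plus a second subnet.
cidrs = parse_cidr_list("192.168.0.0/22, 10.1.1.0/24")
```

A malformed entry such as "192.168.1.5/22" (host bits set) would raise ValueError, which is a convenient pre-flight check before calling the API.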
15.22.2. Egress Firewall Rules in an Advanced Zone
Egress traffic originates from a private network and goes to a public network, such as the Internet. By default, egress traffic is blocked in default network offerings, so no outgoing traffic is allowed from a guest network to the Internet. However, you can control the egress traffic in an Advanced zone by creating egress firewall rules. When an egress firewall rule is applied, the traffic specific to the rule is allowed and the remaining traffic is blocked. When all the firewall rules are removed, the default policy, Block, is applied.
15.22.2.1. Prerequisites and Guidelines
Consider the following scenarios to apply egress firewall rules:
Egress firewall rules are supported on Juniper SRX and virtual router.
The egress firewall rules are not supported on shared networks.
Allow the egress traffic from a specified source CIDR. The source CIDR must be part of the guest network CIDR.
Allow the egress traffic with protocol TCP, UDP, ICMP, or ALL.
Allow the egress traffic with protocol and destination port range. The port range is specified for TCP and UDP, or for ICMP type and code.
The default policy is Allow for new network offerings; on upgrade, existing network offerings with firewall service providers have the default egress policy Deny.
15.22.2.2. Configuring an Egress Firewall Rule
Log in to the CloudStack UI as an administrator or end user.
In the left navigation, choose Network.
In Select view, choose Guest networks, then click the Guest network you want.
To add an egress rule, click the Egress rules tab and fill out the following fields to specify what type of traffic is allowed to be sent out of VM instances in this guest network:
CIDR: (Add by CIDR only) To send traffic only to the IP addresses within a particular address block, enter a CIDR or a comma-separated list of CIDRs. The CIDR is the base IP address of the destination. For example, 192.168.0.0/22. To allow all CIDRs, set to 0.0.0.0/0.
Protocol: The networking protocol that VMs use to send outgoing traffic. The TCP and UDP protocols are typically used for data exchange and end-user communications. The ICMP protocol is typically used to send error messages or network monitoring data.
Start Port, End Port: (TCP, UDP only) A range of listening ports that are the destination for the outgoing traffic. If you are opening a single port, use the same number in both fields.
ICMP Type, ICMP Code: (ICMP only) The type of message and error code that are sent.
Click Add.
15.22.2.3. Configuring the Default Egress Policy
The default egress policy for an isolated guest network is configured by using a network offering. Use the create network offering option to determine whether the default policy should block or allow all traffic to the public network from the guest network. Use this network offering to create the network. If no policy is specified, by default all traffic is allowed from a guest network that you create by using this network offering.
You have two options: Allow and Deny.
If you select Allow for a network offering, egress traffic is allowed by default. However, when an egress rule is configured for a guest network, rules are applied to block the specified traffic and the rest is allowed. If no egress rules are configured for the network, egress traffic is accepted.
If you select Deny for a network offering, egress traffic for the guest network is blocked by default. However, when an egress rule is configured for a guest network, rules are applied to allow the specified traffic. While implementing a guest network, CloudStack adds the firewall egress rule specific to the default egress policy for the guest network.
This feature is supported only on virtual router and Juniper SRX.
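The Allow/Deny semantics above can be modeled as a small decision function. This is a sketch of the documented behavior for illustration, not CloudStack code:

```python
def egress_allowed(default_policy, rules, packet):
    """Model the default egress policy semantics.

    default_policy is "Allow" or "Deny"; rules is a list of predicates
    representing the configured egress rules. With Allow, configured
    rules block the matched traffic; with Deny, configured rules permit
    the matched traffic.
    """
    matched = any(rule(packet) for rule in rules)
    if default_policy == "Allow":
        return not matched  # rules block the specified traffic
    return matched          # rules allow the specified traffic

# A hypothetical rule matching egress traffic to destination port 80.
to_port_80 = lambda pkt: pkt["port"] == 80
```

With no rules configured, Allow passes everything and Deny blocks everything, which matches the behavior described for each offering.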
Create a network offering with your desirable default egress policy:
Log in with admin privileges to the CloudStack UI.
In the left navigation bar, click Service Offerings.
In Select Offering, choose Network Offering.
Click Add Network Offering.
In the dialog, make necessary choices, including firewall provider.
In the Default egress policy field, specify the behaviour.
Click OK.
Create an isolated network by using this network offering.
Based on your selection, the network will have the egress public traffic blocked or allowed.
15.22.3. Port Forwarding
A port forward service is a set of port forwarding rules that define a policy. A port forward service is then applied to one or more guest VMs. The guest VM then has its inbound network access managed according to the policy defined by the port forwarding service. You can optionally specify one or more CIDRs to filter the source IPs. This is useful when you want to allow only incoming requests from certain IP addresses to be forwarded.
A guest VM can be in any number of port forward services. A port forward service can be defined with no members. If a guest VM is part of more than one network, port forwarding rules will function only if they are defined on the default network.
You cannot use port forwarding to open ports for an elastic IP address. When elastic IP is used, outside access is instead controlled through the use of security groups. See Security Groups.
To set up port forwarding:
Log in to the CloudStack UI as an administrator or end user.
If you have not already done so, add a public IP address range to a zone in CloudStack. See Adding a Zone and Pod in the Installation Guide.
Add one or more VM instances to CloudStack.
In the left navigation bar, click Network.
Click the name of the guest network where the VMs are running.
Click the Configuration tab.
In the Port Forwarding node of the diagram, click View All.
Fill in the following:
Public Port. The port to which public traffic will be addressed on the IP address you acquired in the previous step.
Private Port. The port on which the instance is listening for forwarded public traffic.
Protocol. The communication protocol in use between the two ports.
Click Add.
15.23. IP Load Balancing
The user may choose to associate the same public IP for multiple guests. CloudStack implements a TCP-level load balancer with the following policies.
Round-robin
Least connection
Source IP
This is similar to port forwarding but the destination may be multiple IP addresses.
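The three policies can be sketched as selection functions over a set of backend guest IPs. This is an illustrative model of what each policy does, not the load balancer's actual implementation; the backend addresses are placeholders:

```python
import hashlib
from itertools import cycle

backends = ["10.1.1.10", "10.1.1.11", "10.1.1.12"]  # hypothetical guest IPs

# Round-robin: hand out backends in rotation.
rr = cycle(backends)

def source_ip_pick(client_ip):
    """Source IP policy: hash the client address so the same client
    always lands on the same backend."""
    h = int(hashlib.sha256(client_ip.encode()).hexdigest(), 16)
    return backends[h % len(backends)]

def least_conn_pick(active):
    """Least connection policy: choose the backend with the fewest
    active connections (active maps backend -> connection count)."""
    return min(backends, key=lambda b: active.get(b, 0))
```

Round-robin spreads requests evenly, least connection favors the least loaded backend, and source IP gives clients sticky sessions.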
15.24. DNS and DHCP
The Virtual Router provides DNS and DHCP services to the guests. It proxies DNS requests to the DNS server configured on the Availability Zone.
15.25. Remote Access VPN
CloudStack account owners can create virtual private networks (VPN) to access their virtual machines. If the guest network is instantiated from a network offering that offers the Remote Access VPN service, the virtual router (based on the System VM) is used to provide the service. CloudStack provides a L2TP-over-IPsec-based remote access VPN service to guest virtual networks. Since each network gets its own virtual router, VPNs are not shared across the networks. VPN clients native to Windows, Mac OS X and iOS can be used to connect to the guest networks. The account owner can create and manage users for their VPN. CloudStack does not use its account database for this purpose but uses a separate table. The VPN user database is shared across all the VPNs created by the account owner. All VPN users get access to all VPNs created by the account owner.
Make sure that not all traffic goes through the VPN. That is, the route installed by the VPN should be only for the guest network and not for all traffic.
15.25.2. Using Remote Access VPN with Windows
The procedure to use VPN varies by Windows version. Generally, the user must edit the VPN properties and make sure that the default route is not the VPN. The following steps are for Windows L2TP clients on Windows Vista; the steps should be similar for other Windows versions.
Log in to the CloudStack UI and click on the source NAT IP for the account. The VPN tab should display the IPsec preshared key. Make a note of this and the source NAT IP. The UI also lists one or more users and their passwords. Choose one of these users, or, if none exists, add a user and password.
On the Windows box, go to Control Panel, then select Network and Sharing center. Click Setup a connection or network.
In the next dialog, select No, create a new connection.
In the next dialog, select Use my Internet Connection (VPN).
In the next dialog, enter the source NAT IP from step 1 and give the connection a name. Check Don't connect now.
In the next dialog, enter the user name and password selected in step 1.
Click Create.
Go back to the Control Panel and click Network Connections to see the new connection. The connection is not active yet.
Right-click the new connection and select Properties. In the Properties dialog, select the Networking tab.
In Type of VPN, choose L2TP IPsec VPN, then click IPsec settings. Select Use preshared key. Enter the preshared key from step 1.
The connection is ready for activation. Go back to Control Panel -> Network Connections and double-click the created connection.
Enter the user name and password from step 1.
15.25.3. Using Remote Access VPN with Mac OS X
First, be sure you've configured the VPN settings in your CloudStack install. This section is only concerned with connecting via Mac OS X to your VPN.
Note, these instructions were written on Mac OS X 10.7.5. They may differ slightly in older or newer releases of Mac OS X.
On your Mac, open System Preferences and click Network.
Make sure Send all traffic over VPN connection is not checked.
If your preferences are locked, you'll need to click the lock in the bottom left-hand corner to make any changes and provide your administrator credentials.
You will need to create a new network entry. Click the plus icon on the bottom left-hand side and you'll see a dialog that says "Select the interface and enter a name for the new service." Select VPN from the Interface drop-down menu, and "L2TP over IPSec" for the VPN Type. Enter whatever you like within the "Service Name" field.
You'll now have a new network interface with the name of whatever you put in the "Service Name" field. For the purposes of this example, we'll assume you've named it "CloudStack." Click on that interface and provide the IP address of the interface for your VPN under the Server Address field, and the user name for your VPN under Account Name.
Click Authentication Settings, and add the user's password under User Authentication and enter the pre-shared IPSec key in the Shared Secret field under Machine Authentication. Click OK.
You may also want to click the "Show VPN status in menu bar" but that's entirely optional.
Now click "Connect" and you will be connected to the CloudStack VPN.
15.25.4. Setting Up a Site-to-Site VPN Connection
A Site-to-Site VPN connection helps you establish a secure connection from an enterprise datacenter to the cloud infrastructure. This allows users to access the guest VMs by establishing a VPN connection to the virtual router of the account from a device in the datacenter of the enterprise. Having this facility eliminates the need to establish VPN connections to individual VMs.
The difference from Remote Access VPN is that site-to-site VPNs connect entire networks to each other, for example, connecting a branch office network to a company headquarters network. In a site-to-site VPN, hosts do not have VPN client software; they send and receive normal TCP/IP traffic through a VPN gateway.
The supported endpoints on the remote datacenters are:
In addition to the specific Cisco and Juniper devices listed above, the expectation is that any Cisco or Juniper device running the supported operating systems is able to establish VPN connections.
To set up a Site-to-Site VPN connection, perform the following:
Create a Virtual Private Cloud (VPC).
Create a VPN Customer Gateway.
Create a VPN gateway for the VPC that you created.
Create VPN connection from the VPC VPN gateway to the customer VPN gateway.
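The four steps above map onto CloudStack API commands. The following is a minimal sketch of the corresponding (unsigned) API calls with placeholder values; the command names follow the CloudStack API reference, but treat the exact parameter sets as assumptions, and note that a real request also needs an apiKey and a signature:

```python
from urllib.parse import urlencode

API = "http://mgmt-server:8080/client/api"  # hypothetical management server URL

def api_url(command, **params):
    # Build an (unsigned) CloudStack API call; real calls also carry
    # apiKey and signature parameters.
    return f"{API}?{urlencode({'command': command, **params})}"

# 1. Create the VPC
print(api_url("createVPC", name="vpc1", cidr="10.0.0.0/16"))
# 2. Create the VPN customer gateway (the remote, enterprise side)
print(api_url("createVpnCustomerGateway", gateway="203.0.113.10",
              cidrlist="192.168.1.0/24", ipsecpsk="sharedsecret"))
# 3. Create the VPN gateway for the VPC
print(api_url("createVpnGateway", vpcid="<vpc-id>"))
# 4. Connect the VPC VPN gateway to the customer gateway
print(api_url("createVpnConnection", s2svpngatewayid="<vpn-gw-id>",
              s2scustomergatewayid="<cust-gw-id>"))
```

The same sequence can, of course, be performed entirely through the UI as described in the sections that follow.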
15.25.4.1. Creating and Updating a VPN Customer Gateway
A VPN customer gateway can be connected to only one VPN gateway at a time.
To add a VPN Customer Gateway:
Log in to the CloudStack UI as an administrator or end user.
In the left navigation, choose Network.
In the Select view, select VPN Customer Gateway.
Click Add site-to-site VPN.
Provide the following information:
Name: A unique name for the VPN customer gateway you create.
Gateway: The IP address for the remote gateway.
CIDR list: The guest CIDR list of the remote subnets. Enter a CIDR or a comma-separated list of CIDRs. Ensure that a guest CIDR does not overlap with the VPC's CIDR or with another guest CIDR. The CIDR must be RFC1918-compliant.
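The two constraints on each CIDR (RFC1918-compliant, no overlap with the VPC's CIDR) can be checked with Python's ipaddress module; a minimal sketch:

```python
import ipaddress

# The three RFC1918 private address blocks.
RFC1918 = [ipaddress.ip_network(n) for n in
           ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def valid_customer_cidr(cidr, vpc_cidr):
    """A guest CIDR must sit inside an RFC1918 block and must not
    overlap the VPC's own CIDR."""
    net = ipaddress.ip_network(cidr)
    rfc1918_ok = any(net.subnet_of(block) for block in RFC1918)
    no_overlap = not net.overlaps(ipaddress.ip_network(vpc_cidr))
    return rfc1918_ok and no_overlap

print(valid_customer_cidr("192.168.1.0/24", "10.0.0.0/16"))  # True
print(valid_customer_cidr("10.0.1.0/24", "10.0.0.0/16"))     # False: overlaps the VPC
```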
IPsec Preshared Key: Preshared keying is a method where the endpoints of the VPN share a secret key. This key value is used to authenticate the customer gateway and the VPC VPN gateway to each other.
The IKE peers (VPN end points) authenticate each other by computing and sending a keyed hash of data that includes the Preshared key. If the receiving peer is able to create the same hash independently by using its Preshared key, it knows that both peers must share the same secret, thus authenticating the customer gateway.
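The keyed-hash check described above can be illustrated with a small HMAC sketch. HMAC-SHA1 here is a stand-in detail; IKE's actual pseudo-random function negotiation is more involved, but the principle is the same: both ends produce the same hash only if they hold the same secret.

```python
import hashlib, hmac, os

psk = b"shared-secret"        # the preshared key configured on both endpoints
exchange_data = os.urandom(16)  # data both peers see during the negotiation

# Each peer independently computes a keyed hash over the exchanged data.
local = hmac.new(psk, exchange_data, hashlib.sha1).digest()
remote = hmac.new(psk, exchange_data, hashlib.sha1).digest()  # remote peer, same PSK

# Matching hashes prove both peers share the same secret.
assert hmac.compare_digest(local, remote)
```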
IKE Encryption: The Internet Key Exchange (IKE) policy for phase-1. The supported encryption algorithms are AES128, AES192, AES256, and 3DES. Authentication is accomplished through the Preshared Keys.
Phase-1 is the first phase in the IKE process. In this initial negotiation phase, the two VPN endpoints agree on the methods to be used to secure the underlying IP traffic. Phase-1 authenticates the two VPN gateways to each other by confirming that the remote gateway has a matching Preshared Key.
IKE Hash: The IKE hash for phase-1. The supported hash algorithms are SHA1 and MD5.
IKE DH: A public-key cryptography protocol which allows two parties to establish a shared secret over an insecure communications channel. The 1536-bit Diffie-Hellman group is used within IKE to establish session keys. The supported options are None, Group-5 (1536-bit) and Group-2 (1024-bit).
ESP Encryption: Encapsulating Security Payload (ESP) algorithm within phase-2. The supported encryption algorithms are AES128, AES192, AES256, and 3DES.
Phase-2 is the second phase in the IKE process. Its purpose is to negotiate the IPsec security associations (SAs) that set up the IPsec tunnel. In phase-2, new keying material is derived from the phase-1 Diffie-Hellman exchange to provide session keys for protecting the VPN data flow.
ESP Hash: Encapsulating Security Payload (ESP) hash for phase-2. Supported hash algorithms are SHA1 and MD5.
Perfect Forward Secrecy: Perfect Forward Secrecy (PFS) is the property that ensures a session key derived from a set of long-term public and private keys will not be compromised even if one of those long-term keys is compromised in the future. Enabling it forces a new Diffie-Hellman key exchange, providing keying material with a longer life and greater resistance to cryptographic attacks. The available options are None, Group-5 (1536-bit), and Group-2 (1024-bit). The security of the key exchange increases as the DH group grows larger, as does the time the exchange takes.
When PFS is turned on, the two gateways must generate a new set of phase-1 keys for every negotiation of a new phase-2 SA. This adds an extra layer of protection: even after phase-2 SAs have expired, the keys used for new phase-2 SAs are not derived from the current phase-1 keying material.
IKE Lifetime (seconds): The phase-1 lifetime of the security association in seconds. Default is 86400 seconds (1 day). Whenever the time expires, a new phase-1 exchange is performed.
ESP Lifetime (seconds): The phase-2 lifetime of the security association in seconds. Default is 3600 seconds (1 hour). Whenever the value is exceeded, a re-key is initiated to provide new IPsec encryption and authentication session keys.
Dead Peer Detection: A method to detect an unavailable Internet Key Exchange (IKE) peer. Select this option if you want the virtual router to query the liveness of its IKE peer at regular intervals. It is recommended to use the same DPD configuration on both sides of the VPN connection.
Click OK.
You can update a customer gateway either when it has no VPN connection, or when its related VPN connection is in the Error state.
Log in to the CloudStack UI as an administrator or end user.
In the left navigation, choose Network.
In the Select view, select VPN Customer Gateway.
Select the VPN customer gateway you want to work with.
To modify the required parameters, click the Edit VPN Customer Gateway button.
To remove the VPN customer gateway, click the Delete VPN Customer Gateway button.
Click OK.
15.25.4.2. Creating a VPN gateway for the VPC
Log in to the CloudStack UI as an administrator or end user.
In the left navigation, choose Network.
In the Select view, select VPC.
All the VPCs that you have created for the account are listed on the page.
Click the Configure button of the VPC to which you want to deploy the VMs.
The VPC page is displayed where all the tiers you created are listed in a diagram.
Click the Settings icon.
For each tier, the following options are displayed:
Internal LB
Public LB IP
Static NAT
Virtual Machines
CIDR
The following router information is displayed:
Private Gateways
Public IP Addresses
Site-to-Site VPNs
Network ACL Lists
Select Site-to-Site VPN.
If you are creating the VPN gateway for the first time, selecting Site-to-Site VPN prompts you to create a VPN gateway.
In the confirmation dialog, click Yes to confirm.
Within a few moments, the VPN gateway is created. You will be prompted to view the details of the VPN gateway you have created. Click Yes to confirm.
The following details are displayed in the VPN Gateway page:
IP Address
Account
Domain
15.25.4.3. Creating a VPN Connection
CloudStack supports creating up to 8 VPN connections.
Log in to the CloudStack UI as an administrator or end user.
In the left navigation, choose Network.
In the Select view, select VPC.
All the VPCs that you have created for the account are listed on the page.
Click the Configure button of the VPC to which you want to deploy the VMs.
The VPC page is displayed where all the tiers you created are listed in a diagram.
Click the Settings icon.
For each tier, the following options are displayed:
Internal LB
Public LB IP
Static NAT
Virtual Machines
CIDR
The following router information is displayed:
Private Gateways
Public IP Addresses
Site-to-Site VPNs
Network ACL Lists
Select Site-to-Site VPN.
The Site-to-Site VPN page is displayed.
From the Select View drop-down, ensure that VPN Connection is selected.
Click Create VPN Connection.
The Create VPN Connection dialog is displayed:
Select the desired customer gateway, then click OK to confirm.
Within a few moments, the VPN Connection is displayed.
The following information on the VPN connection is displayed:
IP Address
Gateway
State
IPSec Preshared Key
IKE Policy
ESP Policy
15.25.4.4. Restarting and Removing a VPN Connection
Log in to the CloudStack UI as an administrator or end user.
In the left navigation, choose Network.
In the Select view, select VPC.
All the VPCs that you have created for the account are listed on the page.
Click the Configure button of the VPC to which you want to deploy the VMs.
The VPC page is displayed where all the tiers you created are listed in a diagram.
Click the Settings icon.
For each tier, the following options are displayed:
Internal LB
Public LB IP
Static NAT
Virtual Machines
CIDR
The following router information is displayed:
Private Gateways
Public IP Addresses
Site-to-Site VPNs
Network ACL Lists
Select Site-to-Site VPN.
The Site-to-Site VPN page is displayed.
From the Select View drop-down, ensure that VPN Connection is selected.
All the VPN connections you created are displayed.
Select the VPN connection you want to work with.
The Details tab is displayed.
To remove a VPN connection, click the Delete VPN connection button.
To restart a VPN connection, click the Reset VPN connection button present in the Details tab.
15.26. About Inter-VLAN Routing (nTier Apps)
Inter-VLAN Routing (nTier Apps) is the capability to route network traffic between VLANs. This feature enables you to build Virtual Private Clouds (VPCs), isolated segments of your cloud that can hold multi-tier applications. These tiers are deployed on different VLANs that can communicate with each other. You provision VLANs for the tiers you create, and VMs can be deployed on different tiers. The VLANs are connected to a virtual router, which facilitates communication between the VMs. In effect, you can segment VMs by means of VLANs into different networks that can host multi-tier applications, such as Web, Application, or Database. Such segmentation logically separates application VMs for higher security and reduced broadcast traffic, while remaining physically connected to the same device.
This feature is supported on XenServer, KVM, and VMware hypervisors.
The major advantages are:
The administrator can deploy a set of VLANs and allow users to deploy VMs on these VLANs. A guest VLAN is randomly allotted to an account from a pre-specified set of guest VLANs. All the VMs of a given tier of an account reside on the guest VLAN allotted to that account.
A VLAN allocated for an account cannot be shared between multiple accounts.
The administrator can allow users to create their own VPCs and deploy applications. In this scenario, the VMs that belong to the account are deployed on the VLANs allotted to that account.
Both administrators and users can create multiple VPCs. The guest network NIC is plugged to the VPC virtual router when the first VM is deployed in a tier.
The administrator can create the following gateways to send traffic to, or receive traffic from, the VMs:
Both administrators and users can create various possible destination-gateway combinations. However, only one gateway of each type can be used in a deployment.
For example:
VLANs and Public Gateway: For example, an application is deployed in the cloud, and the Web application VMs communicate with the Internet.
VLANs, VPN Gateway, and Public Gateway: For example, an application is deployed in the cloud; the Web application VMs communicate with the Internet; and the database VMs communicate with the on-premise devices.
The administrator can define Network Access Control List (ACL) on the virtual router to filter the traffic among the VLANs or between the Internet and a VLAN. You can define ACL based on CIDR, port range, protocol, type code (if ICMP protocol is selected) and Ingress/Egress type.
The following figure shows the possible deployment scenarios of an Inter-VLAN setup:
15.27. Configuring a Virtual Private Cloud
15.27.1. About Virtual Private Clouds
CloudStack Virtual Private Cloud is a private, isolated part of CloudStack. A VPC can have its own virtual network topology that resembles a traditional physical network. You can launch VMs in the virtual network that can have private addresses in the range of your choice, for example: 10.0.0.0/16. You can define network tiers within your VPC network range, which in turn enables you to group similar kinds of instances based on IP address range.
For example, if a VPC has the private range 10.0.0.0/16, its guest networks can have the network ranges 10.0.1.0/24, 10.0.2.0/24, 10.0.3.0/24, and so on.
A VPC is composed of the following network components:
VPC: A VPC acts as a container for multiple isolated networks that can communicate with each other via its virtual router.
Network Tiers: Each tier acts as an isolated network with its own VLANs and CIDR list, where you can place groups of resources, such as VMs. The tiers are segmented by means of VLANs. The NIC of each tier acts as its gateway.
Virtual Router: A virtual router is automatically created and started when you create a VPC. The virtual router connects the tiers and directs traffic among the public gateway, the VPN gateways, and the NAT instances. For each tier, a corresponding NIC and IP exist on the virtual router. The virtual router provides DNS and DHCP services through its IP.
Public Gateway: Traffic to and from the Internet is routed to the VPC through the public gateway. In a VPC, the public gateway is not exposed to the end user; therefore, static routes are not supported for the public gateway.
VPN Gateway: The VPC side of a VPN connection.
Network ACL: A network ACL is a group of network ACL items: numbered rules that are evaluated in order, starting with the lowest numbered rule. These rules determine whether traffic is allowed in or out of any tier associated with the network ACL. For more information, see
Section 15.27.4, “Configuring Network Access Control List”.
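The ordered evaluation of ACL items described above can be sketched as follows. The rule fields and the default action are illustrative assumptions, not the actual CloudStack data model:

```python
import ipaddress

def evaluate_acl(acl_items, source_ip):
    """Evaluate numbered ACL items lowest number first; the first item
    whose CIDR matches the packet's source decides the outcome."""
    for item in sorted(acl_items, key=lambda i: i["number"]):
        if ipaddress.ip_address(source_ip) in ipaddress.ip_network(item["cidr"]):
            return item["action"]
    return "deny"  # assumed default when no item matches

acl = [
    {"number": 2, "cidr": "0.0.0.0/0", "action": "deny"},
    {"number": 1, "cidr": "10.0.1.0/24", "action": "allow"},
]
# Traffic from 10.0.1.5 matches rule 1 first and is allowed,
# even though the later rule 2 would deny everything.
```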
A VPC offers the following four basic options of network architecture:
VPC with a public gateway only
VPC with public and private gateways
VPC with public and private gateways and site-to-site VPN access
VPC with a private gateway only and site-to-site VPN access
You can connect your VPC to:
The Internet through the public gateway.
The corporate datacenter by using a site-to-site VPN connection through the VPN gateway.
Both the Internet and your corporate datacenter by using both the public gateway and a VPN gateway.
Consider the following before you create a VPC:
A VPC, by default, is created in the enabled state.
A VPC can be created in an Advanced zone only, and cannot belong to more than one zone at a time.
The default number of VPCs an account can create is 20. You can change this limit by using the max.account.vpcs global parameter.
The default number of tiers an account can create within a VPC is 3. You can configure this number by using the vpc.max.networks parameter.
Each tier must have a unique CIDR in the VPC. Ensure that each tier's CIDR is within the VPC CIDR range.
A tier belongs to only one VPC.
All network tiers inside the VPC should belong to the same account.
When a VPC is created, a source NAT IP is allocated to it by default. The source NAT IP is released only when the VPC is removed.
A public IP can be used for only one purpose at a time. If the IP is used for source NAT, it cannot be used for static NAT or port forwarding.
The instances can only have a private IP address that you provision. To communicate with the Internet, enable NAT to an instance that you launch in your VPC.
Only new networks can be added to a VPC. The maximum number of networks per VPC is limited by the value you specify in the vpc.max.networks parameter. The default value is three.
The load balancing service can be supported by only one tier inside the VPC.
If an IP address is assigned to a tier:
That IP can't be used by more than one tier at a time in the VPC. For example, if you have tiers A and B, and a public IP1, you can create a port forwarding rule by using the IP either for A or B, but not for both.
That IP can't be used for StaticNAT, load balancing, or port forwarding rules for another guest network inside the VPC.
Remote access VPN is not supported in VPC networks.
15.27.2. Adding a Virtual Private Cloud
When creating the VPC, you simply provide the zone and a set of IP addresses for the VPC network address space. You specify this set of addresses in the form of a Classless Inter-Domain Routing (CIDR) block.
Log in to the CloudStack UI as an administrator or end user.
In the left navigation, choose Network.
In the Select view, select VPC.
Click Add VPC. The Add VPC page is displayed as follows:
Provide the following information:
Name: A short name for the VPC that you are creating.
Description: A brief description of the VPC.
Zone: Choose the zone where you want the VPC to be available.
Super CIDR for Guest Networks: Defines the CIDR range for all the tiers (guest networks) within a VPC. When you create a tier, ensure that its CIDR is within the Super CIDR value you enter. The CIDR must be RFC1918 compliant.
DNS domain for Guest Networks: If you want to assign a special domain name, specify the DNS suffix. This parameter is applied to all the tiers within the VPC. This implies that all the tiers you create in the VPC belong to the same DNS domain. If the parameter is not specified, a DNS domain name is generated automatically.
Public Load Balancer Provider: You have two options: VPC Virtual Router and Netscaler.
Click OK.
15.27.3. Adding Tiers
Tiers are distinct locations within a VPC that act as isolated networks, which do not have access to other tiers by default. Tiers are set up on different VLANs that can communicate with each other by using a virtual router. Tiers provide inexpensive, low-latency network connectivity to other tiers within the VPC.
Log in to the CloudStack UI as an administrator or end user.
In the left navigation, choose Network.
In the Select view, select VPC.
All the VPCs that you have created for the account are listed on the page.
End users can see their own VPCs, while root and domain admins can see any VPC they are authorized to see.
Click the Configure button of the VPC for which you want to set up tiers.
Click Create network.
The Add new tier dialog is displayed, as follows:
If you have already created tiers, the VPC diagram is displayed. Click Create Tier to add a new tier.
Specify the following:
All the fields are mandatory.
Name: A unique name for the tier you create.
Network Offering: The following default network offerings are listed: Internal LB, DefaultIsolatedNetworkOfferingForVpcNetworksNoLB, DefaultIsolatedNetworkOfferingForVpcNetworks
In a VPC, only one tier can be created using an LB-enabled network offering.
Gateway: The gateway for the tier you create. Ensure that the gateway is within the Super CIDR range that you specified while creating the VPC, and is not overlapped with the CIDR of any existing tier within the VPC.
VLAN: The VLAN ID for the tier that the root admin creates.
This option is only visible if the network offering you selected is VLAN-enabled.
For more information, see the Assigning VLANs to Isolated Networks section in the CloudStack Administration Guide.
Netmask: The netmask for the tier you create.
For example, if the VPC CIDR is 10.0.0.0/16 and the network tier CIDR is 10.0.1.0/24, the gateway of the tier is 10.0.1.1, and the netmask of the tier is 255.255.255.0.
Click OK.
Continue with configuring access control list for the tier.
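The gateway/netmask relationship in the example above (VPC CIDR 10.0.0.0/16, tier CIDR 10.0.1.0/24) can be derived with Python's ipaddress module:

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")    # VPC super CIDR
tier = ipaddress.ip_network("10.0.1.0/24")   # tier CIDR

assert tier.subnet_of(vpc)      # the tier must sit inside the super CIDR
gateway = next(tier.hosts())    # first usable address: 10.0.1.1
netmask = tier.netmask          # 255.255.255.0
```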
15.27.5. Adding a Private Gateway to a VPC
A private gateway can be added by the root admin only. The VPC private network has a 1:1 relationship with the NIC of the physical network. You can configure multiple private gateways for a single VPC. No gateways with duplicated VLAN and IP are allowed in the same data center.
Log in to the CloudStack UI as an administrator or end user.
In the left navigation, choose Network.
In the Select view, select VPC.
All the VPCs that you have created for the account are listed on the page.
Click the Configure button of the VPC to which you want to add the private gateway.
The VPC page is displayed where all the tiers you created are listed in a diagram.
Click the Settings icon.
The following options are displayed.
Internal LB
Public LB IP
Static NAT
Virtual Machines
CIDR
The following router information is displayed:
Private Gateways
Public IP Addresses
Site-to-Site VPNs
Network ACL Lists
Select Private Gateways.
The Gateways page is displayed.
Click Add new gateway:
Specify the following:
Physical Network: The physical network you have created in the zone.
IP Address: The IP address associated with the VPC gateway.
Gateway: The gateway through which the traffic is routed to and from the VPC.
Netmask: The netmask associated with the VPC gateway.
VLAN: The VLAN associated with the VPC gateway.
Source NAT: Select this option to enable the source NAT service on the VPC private gateway.
ACL: Controls both ingress and egress traffic on a VPC private gateway. By default, all the traffic is blocked.
The new gateway appears in the list. You can repeat these steps to add more gateways for this VPC.
15.27.5.1. Source NAT on Private Gateway
You might want to deploy multiple VPCs with the same super CIDR and guest tier CIDR. Therefore, multiple guest VMs from different VPCs can have the same IPs when reaching an enterprise data center through the private gateway. In such cases, a NAT service needs to be configured on the private gateway to avoid IP conflicts. If source NAT is enabled, the guest VMs in the VPC reach the enterprise network via the private gateway IP address by using the NAT service.
The Source NAT service on a private gateway can be enabled while adding the private gateway. On deletion of a private gateway, source NAT rules specific to the private gateway are deleted.
To enable source NAT on existing private gateways, delete them and create them afresh with source NAT enabled.
15.27.5.2. ACL on Private Gateway
Traffic on the VPC private gateway is controlled by creating both ingress and egress network ACL rules. The ACLs contain both allow and deny rules. By default, all ingress traffic to the private gateway interface and all egress traffic out of the private gateway interface is blocked.
You can change this default behaviour while creating a private gateway. Alternatively, you can do the following:
In a VPC, identify the Private Gateway you want to work with.
In the Private Gateway page, do either of the following:
Use the Quickview, or use the Details tab, as described in the following steps.
In the Quickview of the selected Private Gateway, click Replace ACL, select the ACL rule, then click OK.
Click the IP address of the Private Gateway you want to work with.
In the Detail tab, click the Replace ACL button.
The Replace ACL dialog is displayed.
Select the ACL rule, then click OK.
Wait a few seconds. The new ACL rule is displayed in the Details page.
15.27.5.3. Creating a Static Route
CloudStack enables you to specify routing for the VPN connection you create. You can enter one or more CIDR addresses to indicate which traffic is to be routed back to the gateway.
In a VPC, identify the Private Gateway you want to work with.
In the Private Gateway page, click the IP address of the Private Gateway you want to work with.
Select the Static Routes tab.
Specify the CIDR of the destination network.
Click Add.
Wait a few seconds until the new route is created.
15.27.5.4. Blacklisting Routes
CloudStack enables you to block a list of routes so that they are not assigned to any of the VPC private gateways. Specify the list of routes that you want to blacklist in the blacklisted.routes global parameter. Note that the parameter update affects only new static route creations. If you block an existing static route, it remains intact and continues functioning. You cannot add a static route if the route is blacklisted for the zone.
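A check of the kind the gateway performs when a new static route is created might look like the following sketch. Whether CloudStack matches exact CIDRs or overlapping ranges is an assumption here; overlap is the conservative check:

```python
import ipaddress

# Hypothetical value of the blacklisted.routes global parameter.
blacklisted_routes = [ipaddress.ip_network(c)
                      for c in ("10.10.0.0/16", "192.168.100.0/24")]

def route_allowed(cidr):
    """A new static route is rejected if its CIDR falls in a blacklisted range."""
    net = ipaddress.ip_network(cidr)
    return not any(net.overlaps(b) for b in blacklisted_routes)

print(route_allowed("172.16.5.0/24"))   # True: no blacklisted range touched
print(route_allowed("10.10.1.0/24"))    # False: inside 10.10.0.0/16
```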
15.27.6. Deploying VMs to the Tier
Log in to the CloudStack UI as an administrator or end user.
In the left navigation, choose Network.
In the Select view, select VPC.
All the VPCs that you have created for the account are listed on the page.
Click the Configure button of the VPC to which you want to deploy the VMs.
The VPC page is displayed where all the tiers you have created are listed.
Click Virtual Machines tab of the tier to which you want to add a VM.
The Add Instance page is displayed.
Follow the on-screen instruction to add an instance. For information on adding an instance, see the Installation Guide.
15.27.7. Deploying VMs to VPC Tier and Shared Networks
CloudStack allows you to deploy VMs on a VPC tier and on one or more shared networks. With this feature, VMs deployed in a multi-tier application can receive monitoring services via a shared network provided by a service provider.
Log in to the CloudStack UI as an administrator.
In the left navigation, choose Instances.
Click Add Instance.
Select a zone.
Select a template or ISO, then follow the steps in the wizard.
Ensure that the hardware you have allows starting the selected service offering.
Under Networks, select the desired networks for the VM you are launching.
You can deploy a VM to a VPC tier and multiple shared networks.
Click Next, review the configuration and click Launch.
Your VM will be deployed to the selected VPC tier and shared network.
15.27.8. Acquiring a New IP Address for a VPC
When you acquire an IP address, it is allocated to the VPC, not to the guest networks within the VPC. The IP is associated with a guest network only when the first port forwarding, load balancing, or static NAT rule is created for the IP on that network. An IP cannot be associated with more than one network at a time.
Log in to the CloudStack UI as an administrator or end user.
In the left navigation, choose Network.
In the Select view, select VPC.
All the VPCs that you have created for the account are listed on the page.
Click the Configure button of the VPC to which you want to deploy the VMs.
The VPC page is displayed where all the tiers you created are listed in a diagram.
The following options are displayed.
Internal LB
Public LB IP
Static NAT
Virtual Machines
CIDR
The following router information is displayed:
Private Gateways
Public IP Addresses
Site-to-Site VPNs
Network ACL Lists
Select IP Addresses.
The Public IP Addresses page is displayed.
Click Acquire New IP, and click Yes in the confirmation dialog.
You are prompted for confirmation because, typically, IP addresses are a limited resource. Within a few moments, the new IP address should appear with the state Allocated. You can now use the IP address in port forwarding, load balancing, and static NAT rules.
15.27.9. Releasing an IP Address Allotted to a VPC
The IP address is a limited resource. If you no longer need a particular IP, you can disassociate it from its VPC and return it to the pool of available addresses. An IP address can be released from its tier only when all the networking rules (port forwarding, load balancing, or static NAT) are removed for this IP address. The released IP address still belongs to the same VPC.
Log in to the CloudStack UI as an administrator or end user.
In the left navigation, choose Network.
In the Select view, select VPC.
All the VPCs that you have created for the account are listed on the page.
Click the Configure button of the VPC whose IP you want to release.
The VPC page is displayed where all the tiers you created are listed in a diagram.
The following options are displayed.
Internal LB
Public LB IP
Static NAT
Virtual Machines
CIDR
The following router information is displayed:
Private Gateways
Public IP Addresses
Site-to-Site VPNs
Network ACL Lists
Select Public IP Addresses.
The IP Addresses page is displayed.
Click the IP you want to release.
In the Details tab, click the Release IP button.
15.27.10. Enabling or Disabling Static NAT on a VPC
A static NAT rule maps a public IP address to the private IP address of a VM in a VPC to allow Internet traffic to it. This section tells how to enable or disable static NAT for a particular IP address in a VPC.
If port forwarding rules are already in effect for an IP address, you cannot enable static NAT to that IP.
If a guest VM is part of more than one network, static NAT rules will function only if they are defined on the default network.
Log in to the CloudStack UI as an administrator or end user.
In the left navigation, choose Network.
In the Select view, select VPC.
All the VPCs that you have created for the account are listed on the page.
Click the Configure button of the VPC to which you want to deploy the VMs.
The VPC page is displayed where all the tiers you created are listed in a diagram.
For each tier, the following options are displayed.
Internal LB
Public LB IP
Static NAT
Virtual Machines
CIDR
The following router information is displayed:
Private Gateways
Public IP Addresses
Site-to-Site VPNs
Network ACL Lists
In the Router node, select Public IP Addresses.
The IP Addresses page is displayed.
Click the IP you want to work with.
In the Details tab, click the Static NAT button.

The button toggles between Enable and Disable, depending on whether static NAT is currently enabled for the IP address.
If you are enabling static NAT, a dialog appears as follows:
Select the tier and the destination VM, then click Apply.
15.27.11. Adding Load Balancing Rules on a VPC
In a VPC, you can configure two types of load balancing: external LB and internal LB. An external LB rule redirects the traffic received at a public IP of the VPC virtual router; the traffic is load balanced within a tier based on your configuration. Citrix NetScaler and the VPC virtual router are supported for external LB. With the internal LB service, traffic received at a tier is load balanced across different VMs within that tier. For example, traffic that reaches the Web tier is redirected to another VM in that tier. External load balancing devices are not supported for internal LB; the service is provided by an internal LB VM configured on the target tier.
15.27.11.1. Load Balancing Within a Tier (External LB)
A CloudStack user or administrator may create load balancing rules that balance traffic received at a public IP to one or more VMs that belong to a network tier that provides load balancing service in a VPC. A user creates a rule, specifies an algorithm, and assigns the rule to a set of VMs within a tier.
15.27.11.1.1. Enabling NetScaler as the LB Provider on a VPC Tier
Add and enable Netscaler VPX in dedicated mode.
Netscaler can be used in a VPC environment only if it is in dedicated mode.
Create a VPC with Netscaler as the Public LB provider.
For the VPC, acquire an IP.
15.27.11.1.2. Creating a Network Offering for External LB
To have external LB support on VPC, create a network offering as follows:
Log in to the CloudStack UI as a user or admin.
From the Select Offering drop-down, choose Network Offering.
Click Add Network Offering.
In the dialog, make the following choices:
Name: Any desired name for the network offering.
Description: A short description of the offering that can be displayed to users.
Network Rate: Allowed data transfer rate in MB per second.
Traffic Type: The type of network traffic that will be carried on the network.
Guest Type: Choose whether the guest network is isolated or shared.
Persistent: Indicate whether the guest network is persistent. A network that you can provision without having to deploy a VM on it is termed a persistent network.
VPC: This option indicates whether the guest network is Virtual Private Cloud-enabled. A Virtual Private Cloud (VPC) is a private, isolated part of CloudStack. A VPC can have its own virtual network topology that resembles a traditional physical network. For more information on VPCs, see
Section 15.27.1, “About Virtual Private Clouds”.
Specify VLAN: (Isolated guest networks only) Indicate whether a VLAN should be specified when this offering is used.
Supported Services: Select Load Balancer. Use Netscaler or VpcVirtualRouter.
Load Balancer Type: Select Public LB from the drop-down.
LB Isolation: Select Dedicated if Netscaler is used as the external LB provider.
System Offering: Choose the system service offering that you want virtual routers to use in this network.
Conserve mode: Indicate whether to use conserve mode. In this mode, network resources are allocated only when the first virtual machine starts in the network.
Click OK and the network offering is created.
15.27.11.1.3. Creating an External LB Rule
Log in to the CloudStack UI as an administrator or end user.
In the left navigation, choose Network.
In the Select view, select VPC.
All the VPCs that you have created for the account are listed on the page.
Click the Configure button of the VPC, for which you want to configure load balancing rules.
The VPC page is displayed, where all the tiers you created are listed in a diagram.
For each tier, the following options are displayed:
Internal LB
Public LB IP
Static NAT
Virtual Machines
CIDR
The following router information is displayed:
Private Gateways
Public IP Addresses
Site-to-Site VPNs
Network ACL Lists
In the Router node, select Public IP Addresses.
The IP Addresses page is displayed.
Click the IP address for which you want to create the rule, then click the Configuration tab.
In the Load Balancing node of the diagram, click View All.
Select the tier to which you want to apply the rule.
Specify the following:
Name: A name for the load balancer rule.
Public Port: The port that receives the incoming traffic to be balanced.
Private Port: The port that the VMs will use to receive the traffic.
Algorithm: Choose the load balancing algorithm you want CloudStack to use. CloudStack supports the following well-known algorithms:
Round-robin
Least connections
Source
Stickiness: (Optional) Click Configure and choose the algorithm for the stickiness policy. See Sticky Session Policies for Load Balancer Rules.
Add VMs: Click Add VMs, then select two or more VMs that will divide the load of incoming traffic, and click Apply.
The new load balancing rule appears in the list. You can repeat these steps to add more load balancing rules for this IP address.
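In API terms, the steps above correspond to the createLoadBalancerRule and assignToLoadBalancerRule commands. A minimal sketch of the parameters, with placeholder IDs (in a real call these come from listPublicIpAddresses, listNetworks for the tier, and listVirtualMachines):

```python
# Public LB rule on a VPC tier; all IDs are hypothetical placeholders.
lb_rule = {
    "command": "createLoadBalancerRule",
    "name": "web-lb",
    "publicipid": "PUBLIC-IP-ID",       # the acquired VPC public IP
    "networkid": "TIER-NETWORK-ID",     # the tier the rule applies to
    "publicport": 80,
    "privateport": 8080,
    "algorithm": "roundrobin",          # or: leastconn, source
}

# Once the rule exists, two or more VMs are attached to it:
assign = {
    "command": "assignToLoadBalancerRule",
    "id": "LB-RULE-ID",
    "virtualmachineids": "VM-ID-1,VM-ID-2",
}
```

Each dict would be signed and sent to the management server as shown for createNetworkOffering.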
15.27.11.2. Load Balancing Across Tiers
CloudStack supports sharing workload across different tiers within your VPC. Assume that multiple tiers are set up in your environment, such as Web tier and Application tier. Traffic to each tier is balanced on the VPC virtual router on the public side, as explained in
Section 15.27.11, “Adding Load Balancing Rules on a VPC”. If you want the traffic coming from the Web tier to the Application tier to be balanced, use the internal load balancing feature offered by CloudStack.
15.27.11.2.1. How Does Internal LB Work in VPC?
In this figure, a public LB rule is created for the public IP 72.52.125.10 with public port 80 and private port 81. The LB rule, created on the VPC virtual router, is applied to the traffic coming from the Internet to the VMs on the Web tier. On the Application tier, three internal load balancing rules are created. An internal LB rule for the guest IP 10.10.10.4, with load balancer port 23 and instance port 25, is configured on the VM InternalLBVM1. A second internal LB rule for the guest IP 10.10.10.4, with load balancer port 45 and instance port 46, is also configured on the VM InternalLBVM1. A third internal LB rule for the guest IP 10.10.10.6, with load balancer port 23 and instance port 25, is configured on the VM InternalLBVM2.
Internal LB and Public LB are mutually exclusive on a tier. If a tier has LB on the public side, it cannot have internal LB.
Internal LB is supported only on VPC networks in the CloudStack 4.2 release.
Only the Internal LB VM can act as the Internal LB provider in the CloudStack 4.2 release.
Network upgrade is not supported from the network offering with Internal LB to the network offering with Public LB.
Multiple tiers can have internal LB support in a VPC.
Only one tier can have Public LB support in a VPC.
15.27.11.2.3. Enabling Internal LB on a VPC Tier
15.27.11.2.4. Creating a Network Offering for Internal LB
To have internal LB support on VPC, either use the default offering, DefaultIsolatedNetworkOfferingForVpcNetworksWithInternalLB, or create a network offering as follows:
Log in to the CloudStack UI as a user or admin.
From the Select Offering drop-down, choose Network Offering.
Click Add Network Offering.
In the dialog, make the following choices:
Name: Any desired name for the network offering.
Description: A short description of the offering that can be displayed to users.
Network Rate: Allowed data transfer rate in MB per second.
Traffic Type: The type of network traffic that will be carried on the network.
Guest Type: Choose whether the guest network is isolated or shared.
Persistent: Indicate whether the guest network is persistent or not. A network that you can provision without having to deploy a VM on it is termed a persistent network.
VPC: This option indicates whether the guest network is Virtual Private Cloud-enabled. A Virtual Private Cloud (VPC) is a private, isolated part of CloudStack. A VPC can have its own virtual network topology that resembles a traditional physical network. For more information on VPCs, see
Section 15.27.1, “About Virtual Private Clouds”.
Specify VLAN: (Isolated guest networks only) Indicate whether a VLAN should be specified when this offering is used.
Supported Services: Select Load Balancer. Select InternalLbVM from the provider list.
Load Balancer Type: Select Internal LB from the drop-down.
System Offering: Choose the system service offering that you want virtual routers to use in this network.
Conserve mode: Indicate whether to use conserve mode. In this mode, network resources are allocated only when the first virtual machine starts in the network.
Click OK and the network offering is created.
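Programmatically, this is the same createNetworkOffering call as for the external-LB offering, except that the Load Balancer service is backed by the InternalLbVm provider. All values below are illustrative placeholders:

```python
# Network offering for a VPC tier with internal LB; values are placeholders.
internal_offering = {
    "command": "createNetworkOffering",
    "name": "VPC-InternalLB",
    "displaytext": "VPC tier offering with internal LB",
    "guestiptype": "Isolated",
    "traffictype": "Guest",
    "forvpc": "true",
    "supportedservices": "Lb,SourceNat",
    "serviceProviderList[0].service": "Lb",
    "serviceProviderList[0].provider": "InternalLbVm",
}
```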
15.27.11.2.5. Creating an Internal LB Rule
When you create an Internal LB rule and apply it to a VM, an Internal LB VM, which is responsible for load balancing, is created.
You can view the created Internal LB VM in the Instances page if you navigate to Infrastructure > Zones > <zone_name> > <physical_network_name> > Network Service Providers > Internal LB VM. You can manage the Internal LB VMs as and when required from that location.
Log in to the CloudStack UI as an administrator or end user.
In the left navigation, choose Network.
In the Select view, select VPC.
All the VPCs that you have created for the account are listed on the page.
Locate the VPC for which you want to configure internal LB, then click Configure.
The VPC page is displayed, where all the tiers you created are listed in a diagram.
Locate the tier for which you want to configure an internal LB rule, then click Internal LB.
In the Internal LB page, click Add Internal LB.
In the dialog, specify the following:
Name: A name for the load balancer rule.
Description: A short description of the rule that can be displayed to users.
Source IP Address: (Optional) The source IP from which traffic originates. The IP is acquired from the CIDR of that particular tier on which you want to create the Internal LB rule. If not specified, the IP address is automatically allocated from the network CIDR.
For every Source IP, a new Internal LB VM is created for load balancing.
Source Port: The port associated with the source IP. Traffic on this port is load balanced.
Instance Port: The port of the internal LB VM.
Algorithm: Choose the load balancing algorithm you want CloudStack to use. CloudStack supports the following well-known algorithms:
Round-robin
Least connections
Source
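In the 4.2 API, an internal LB rule of this kind maps to the createLoadBalancer command with scheme=Internal. The sketch below mirrors the 10.10.10.4 example above (source port 23, instance port 25); the network IDs are hypothetical placeholders:

```python
# Internal LB rule on a VPC tier; scheme=Internal selects the InternalLbVm
# provider. IDs are placeholders; omitting a source IP lets CloudStack
# allocate one from the tier's CIDR.
internal_lb = {
    "command": "createLoadBalancer",
    "name": "app-internal-lb",
    "scheme": "Internal",
    "networkid": "APP-TIER-NETWORK-ID",
    "sourceipaddressnetworkid": "APP-TIER-NETWORK-ID",
    "sourceport": 23,
    "instanceport": 25,
    "algorithm": "roundrobin",          # or: leastconn, source
}
```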
15.27.12. Adding a Port Forwarding Rule on a VPC
Log in to the CloudStack UI as an administrator or end user.
In the left navigation, choose Network.
In the Select view, select VPC.
All the VPCs that you have created for the account are listed on the page.
Click the Configure button of the VPC to which you want to deploy the VMs.
The VPC page is displayed where all the tiers you created are listed in a diagram.
For each tier, the following options are displayed:
Internal LB
Public LB IP
Static NAT
Virtual Machines
CIDR
The following router information is displayed:
Private Gateways
Public IP Addresses
Site-to-Site VPNs
Network ACL Lists
In the Router node, select Public IP Addresses.
The IP Addresses page is displayed.
Click the IP address for which you want to create the rule, then click the Configuration tab.
In the Port Forwarding node of the diagram, click View All.
Select the tier to which you want to apply the rule.
Specify the following:
Public Port: The port to which public traffic will be addressed on the IP address you acquired in the previous step.
Private Port: The port on which the instance is listening for forwarded public traffic.
Protocol: The communication protocol in use between the two ports.
Add VM: Click Add VM. Select the name of the instance to which this rule applies, and click Apply.
You can test the rule by opening an SSH session to the instance.
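The equivalent API call is createPortForwardingRule. A minimal sketch with placeholder IDs, forwarding public port 22 to the instance's SSH port as in the test above:

```python
# Port forwarding on a VPC public IP; all IDs are hypothetical placeholders.
pf_rule = {
    "command": "createPortForwardingRule",
    "ipaddressid": "PUBLIC-IP-ID",      # the acquired VPC public IP
    "networkid": "TIER-NETWORK-ID",     # the tier containing the VM
    "protocol": "tcp",
    "publicport": 22,
    "privateport": 22,
    "virtualmachineid": "VM-ID",
}
```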
You can remove a tier from a VPC. The removal of a tier cannot be undone. When a tier is removed, only the resources of the tier are expunged. All the network rules (port forwarding, load balancing, and static NAT) and the IP addresses associated with the tier are removed. The IP addresses, however, still belong to the same VPC.
Log in to the CloudStack UI as an administrator or end user.
In the left navigation, choose Network.
In the Select view, select VPC.
All the VPCs that you have created for the account are listed on the page.
Click the Configure button of the VPC for which you want to set up tiers.
The Configure VPC page is displayed. Locate the tier you want to work with.
Select the tier you want to remove.
In the Network Details tab, click the Delete Network button.
Click Yes to confirm. Wait for some time for the tier to be removed.
15.27.14. Editing, Restarting, and Removing a Virtual Private Cloud
Ensure that all the tiers are removed before you remove a VPC.
Log in to the CloudStack UI as an administrator or end user.
In the left navigation, choose Network.
In the Select view, select VPC.
All the VPCs that you have created for the account are listed on the page.
Select the VPC you want to work with.
In the Details tab, click the Remove VPC button.
You can also remove the VPC by using the Remove button in the Quick View.
You can edit the name and description of a VPC. To do that, select the VPC, then click the Edit button.
To restart a VPC, select the VPC, then click the Restart button.
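These lifecycle actions map to the updateVPC, restartVPC, and deleteVPC API commands. A sketch with a placeholder VPC ID:

```python
# VPC lifecycle operations; the ID and new name are placeholders.
edit_vpc    = {"command": "updateVPC",  "id": "VPC-ID",
               "name": "prod-vpc", "displaytext": "Production VPC"}
restart_vpc = {"command": "restartVPC", "id": "VPC-ID"}
delete_vpc  = {"command": "deleteVPC",  "id": "VPC-ID"}  # remove all tiers first
```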
15.28. Persistent Networks
The network that you can provision without having to deploy any VMs on it is called a persistent network. A persistent network can be part of a VPC or a non-VPC environment.
When you create other types of networks, a network is only a database entry until the first VM is created on it. When the first VM is created, a VLAN ID is assigned and the network is provisioned; when the last VM is destroyed, the VLAN ID is released and the network is no longer available. With persistent networks, you can create a network in CloudStack on which physical devices can be deployed without having to run any VMs.
One of the advantages of having a persistent network is that you can create a VPC with a tier consisting of only physical devices. For example, you might create a VPC for a three-tier application, deploy VMs for Web and Application tier, and use physical machines for the Database tier. Another use case is that if you are providing services by using physical hardware, you can define the network as persistent and therefore even if all its VMs are destroyed the services will not be discontinued.
15.28.1. Persistent Network Considerations
Persistent network is designed for isolated networks.
All default network offerings are non-persistent.
The Persistent option of a network offering cannot be edited, because changing it would affect the behavior of the existing networks that were created using that network offering.
When you create a guest network, the network offering that you select defines the network persistence. This in turn depends on whether persistent network is enabled in the selected network offering.
An existing network can be made persistent by changing its network offering to an offering that has the Persistent option enabled. Once this property is set, the network is provisioned even if it has no running VMs.
An existing network can be made non-persistent by changing its network offering to an offering that has the Persistent option disabled. If the network has no running VMs, during the next network garbage collection run the network is shut down.
When the last VM on a network is destroyed, the network garbage collector checks if the network offering associated with the network is persistent, and shuts down the network only if it is non-persistent.
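Switching an existing network between persistent and non-persistent offerings, as described above, corresponds to the updateNetwork API command with a new networkofferingid. A sketch with placeholder IDs:

```python
# Change a guest network's offering to one with Persistent enabled;
# both IDs are hypothetical placeholders.
make_persistent = {
    "command": "updateNetwork",
    "id": "GUEST-NETWORK-ID",
    "networkofferingid": "PERSISTENT-OFFERING-ID",
}

# The reverse direction: an offering with Persistent disabled. The network
# is then shut down on the next garbage collection run if it has no VMs.
make_non_persistent = {
    "command": "updateNetwork",
    "id": "GUEST-NETWORK-ID",
    "networkofferingid": "NON-PERSISTENT-OFFERING-ID",
}
```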
15.28.2. Creating a Persistent Guest Network
To create a persistent network, perform the following:
Create a network offering with the Persistent option enabled.
See the Administration Guide.
Select Network from the left navigation pane.
Select the guest network that you want to offer this network service to.
Click the Edit button.
From the Network Offering drop-down, select the persistent network offering you have just created.
Click OK.