Installation Requirements

This section describes the installation requirements for Anaconda Enterprise.

Anaconda Enterprise can be installed on one to four nodes during the initial installation. Afterward, you can add or remove nodes from the Anaconda Enterprise cluster at any time.

A rule of thumb for each project session or deployment is 1 CPU core, 1 GB of RAM, and 5 GB of disk space.
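To illustrate the rule of thumb, capacity is bounded by whichever resource runs out first. Below is a rough sketch using hypothetical node specs; substitute your own values:

```shell
# Hypothetical node specs -- substitute your own values
cpus=16; ram_gb=32; disk_gb=100

# Each session needs roughly 1 CPU core, 1 GB RAM, and 5 GB disk
by_cpu=$cpus
by_ram=$ram_gb
by_disk=$((disk_gb / 5))

# Capacity is limited by the scarcest resource
cap=$by_cpu
[ "$by_ram" -lt "$cap" ] && cap=$by_ram
[ "$by_disk" -lt "$cap" ] && cap=$by_disk
echo "Approximate concurrent sessions: $cap"
```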

For more information about sizing for a particular component, see the following minimum requirements:

Hardware requirements

Minimum and recommended specs of the master and worker nodes:

Master node                           Minimum   Recommended
CPU (cores)                           16        16
RAM (GB)                              32        32
Disk space in /opt/anaconda (GB)      100       500*
Disk space in /var/lib/gravity (GB)   100       100
Disk space in /tmp or $TMPDIR (GB)    30        30

Worker nodes                          Minimum   Recommended
CPU (cores)                           4         16
RAM (GB)                              16        32
Disk space in /var/lib/gravity (GB)   100       100
Disk space in /tmp or $TMPDIR (GB)    30        30

*The recommended disk space in /opt/anaconda includes project and package storage (including mirrored packages).

*Currently, /opt/anaconda must be on a supported filesystem such as ext4 or XFS and cannot itself be an NFS mount point. Subdirectories of /opt/anaconda may be mounted over NFS.

*If you are installing Anaconda Enterprise on an XFS filesystem, the filesystem must support d_type to work properly. XFS filesystems do not support d_type if they were formatted with the -n ftype=0 option. Before installing Anaconda Enterprise, recreate the filesystem with -n ftype=1, or use another supported filesystem.
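To check whether an existing XFS filesystem supports d_type, you can inspect its ftype flag with xfs_info (this assumes the xfsprogs package, which provides xfs_info, is installed):

```shell
# Print the ftype setting of the filesystem backing /opt/anaconda.
# ftype=1 means d_type is supported; ftype=0 means the filesystem must
# be recreated before installing Anaconda Enterprise.
if command -v xfs_info >/dev/null 2>&1; then
  xfs_info /opt/anaconda | grep -o 'ftype=[01]' || true
fi
```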

Operating system requirements

Linux versions:

  • RHEL/CentOS 7.2 through 7.4

  • Ubuntu 16.04

  • Hosted vSphere such as Rackspace or OVH

  • SUSE 12 SP2 - 12 SP3

    NOTE: On SUSE set DefaultTasksMax=infinity in /etc/systemd/system.conf.

Browser requirements

  • Edge 14+
  • Internet Explorer 11+
  • Chrome 39+
  • Firefox 49+
  • Safari 10+

The minimum browser screen size for using the platform is 800 pixels wide and 600 pixels high.

Network requirements

The following network ports for Anaconda Enterprise must be externally accessible:

  • 80 - Deployed apps
  • 443 - Deployed apps
  • 8080 - Anaconda Enterprise
  • 30071 - Documentation service
  • 30080 - Authentication service
  • 30081 - Deployment service
  • 30082 - Authentication API service
  • 30085 - Git Proxy service
  • 30086 - Storage service
  • 30087 - Object storage service
  • 30088 - Git storage service
  • 30089 - Repository service
  • 30090 - UI service
  • 30091 - Authentication escrow service
  • 30095 - Spaces service
  • 32009 - Operations center
  • 30000 through 32767 - User-deployed applications
  • 61009 - Install wizard (only used during cluster installation)
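To verify that the externally accessible ports are reachable from outside the cluster, you can attempt a TCP connection to each one. The sketch below uses bash's built-in /dev/tcp; the hostname and the port list are placeholders for your own master node and the ports you need to check:

```shell
# Returns success if a TCP connection to host $1, port $2 can be opened.
# Uses bash's built-in /dev/tcp, so no extra tools are required.
check_port() {
  timeout 3 bash -c ">/dev/tcp/$1/$2" 2>/dev/null
}

# Hypothetical master hostname and a sample of the required ports --
# substitute your own host and check every port listed above.
HOST=master.example.com
for port in 80 443 8080 32009; do
  if check_port "$HOST" "$port"; then
    echo "port $port: reachable"
  else
    echo "port $port: closed or filtered"
  fi
done
```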

The following network ports for Anaconda Enterprise must be open internally, between cluster nodes:

  • 53 - Internal cluster DNS
  • 2379 - Etcd server communication
  • 2380 - Etcd server communication
  • 3008 - Internal Anaconda Enterprise service
  • 3009 - Internal Anaconda Enterprise service
  • 3010 - Internal Anaconda Enterprise service
  • 3011 - Internal Anaconda Enterprise service
  • 3012 - Internal RPC agent
  • 3022 - Teleport internal SSH control panel
  • 3023 - Teleport internal SSH control panel
  • 3024 - Teleport internal SSH control panel
  • 3025 - Teleport internal SSH control panel
  • 3080 - Teleport web UI
  • 4001 - Etcd server communication
  • 5000 - Docker registry
  • 6443 - Kubernetes API server
  • 7001 - Etcd server communication
  • 7373 - Peer-to-peer health check
  • 7496 - Peer-to-peer health check
  • 8472 - VXLAN (Overlay network)
  • 10248 - Kubernetes components
  • 10249 - Kubernetes components
  • 10250 - Kubernetes components
  • 10255 - Kubernetes components

The following domains are required for package mirroring:

  • repo.continuum.io
  • anaconda.org
  • conda.anaconda.org
  • binstar-cio-packages-prod.s3.amazonaws.com

IPv4 forwarding on servers is required for internal load balancing and must be turned on. Anaconda Enterprise performs pre-flight checks and only allows installation on nodes that have the required kernel modules and other correct configuration.

To enable IPv4 forwarding, run:

sysctl -w net.ipv4.ip_forward=1

To persist this setting on boot, run:

echo -e "# Enable IPv4 forwarding\nnet.ipv4.ip_forward=1" >> /etc/sysctl.d/99-ipv4_forward.conf
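To confirm that forwarding is enabled, read the current value back from /proc; it should print 1:

```shell
# 1 = IPv4 forwarding enabled, 0 = disabled
cat /proc/sys/net/ipv4/ip_forward
```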

Kernel module requirements

The following kernel modules are required for the Kubernetes cluster to function properly.

br_netfilter module

The bridge netfilter kernel module is required for the Kubernetes iptables-based proxy to work correctly.

The bridge kernel module commands are different for different versions of CentOS.

To find your operating system version, run cat /etc/*release* or lsb_release -a.

On RHEL/CentOS 7.2, the bridge netfilter module is named bridge; on all other supported operating systems and CentOS versions, it is named br_netfilter.

To check if the module is loaded run:

# For RHEL/CentOS 7.2
lsmod | grep bridge

# For all other supported platforms
lsmod | grep br_netfilter

If the appropriate command above produced no output, the module is not loaded. Run the corresponding command below to load it:

# For RHEL/CentOS 7.2
modprobe bridge

# For all other supported platforms
modprobe br_netfilter

Once the module is loaded, enable iptables processing of bridged traffic by running:

sysctl -w net.bridge.bridge-nf-call-iptables=1
sysctl -w net.bridge.bridge-nf-call-ip6tables=1

To persist these settings on boot, run:

echo "# Enable bridge module" >> /etc/sysctl.d/99-bridge.conf
echo "net.bridge.bridge-nf-call-iptables=1" >> /etc/sysctl.d/99-bridge.conf
echo "net.bridge.bridge-nf-call-ip6tables=1" >> /etc/sysctl.d/99-bridge.conf

overlay module

The overlay kernel module is required to use the overlay or overlay2 Docker storage driver.

To check that the overlay module is loaded, run:

lsmod | grep overlay

If the above command did not produce any result, then the module is not loaded. Use the following command to load the module:

modprobe overlay

ebtables module

The ebtables kernel module is required to allow a service to communicate back to itself via internal load balancing when necessary.

To check that the ebtables module is loaded, run:

lsmod | grep ebtables

If the above command did not produce any result, then the module is not loaded. Use the following command to load the module:

modprobe ebtables

NOTE: During installation, the Anaconda Enterprise installer alerts you if any of these modules are not loaded.

NOTE: If your system does not load modules at boot, add the following to ensure the modules are reloaded if the machine is rebooted:

sudo bash -c "echo 'overlay' > /etc/modules-load.d/overlay.conf"
sudo bash -c "echo 'br_netfilter' > /etc/modules-load.d/netfilter.conf"
sudo bash -c "echo 'ebtables' > /etc/modules-load.d/ebtables.conf"

Mount Settings

Many Linux distributions include the kernel setting fs.may_detach_mounts = 0. This can cause conflicts with the Docker daemon, and Kubernetes will show pods stuck in the Terminating state if Docker is unable to clean up one of the underlying containers.

If the installed kernel exposes the option fs.may_detach_mounts, we recommend always setting this value to 1:

sudo sysctl -w fs.may_detach_mounts=1
sudo bash -c "echo 'fs.may_detach_mounts = 1' >> /etc/sysctl.d/10-may_detach_mounts.conf"

Security requirements

For CentOS and RHEL:

  • Disable SELinux on all of the cluster nodes.

  • Various tools may be used to configure firewalls and open the required ports, including iptables, firewall-cmd, SuSEfirewall2, and others.

    Make sure the firewall is permanently configured to keep the required ports open and will preserve these settings across reboots. Then restart the firewall to load the settings immediately.

  • Sudo access.
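For example, with firewalld (the default on recent RHEL/CentOS), the required ports can be opened permanently and the rules reloaded immediately. The ports below are only a sample; open every port listed in the network requirements above:

```shell
# Open a sample of the required ports permanently (survives reboots)
sudo firewall-cmd --permanent --add-port=80/tcp
sudo firewall-cmd --permanent --add-port=443/tcp
sudo firewall-cmd --permanent --add-port=8080/tcp
sudo firewall-cmd --permanent --add-port=30000-32767/tcp
# Reload so the saved rules take effect now
sudo firewall-cmd --reload
```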

IOPS requirements (cloud installation)

Master and worker nodes require a minimum of 3000 IOPS; installation on nodes with fewer than 3000 IOPS will fail.

Verification of system requirements

Storage requirements

To check your available disk space, use the built-in Linux df utility with the -h option for human-readable output:

df -h /var/lib/gravity

df -h /opt/anaconda

df -h /tmp
# or
df -h $TMPDIR

Memory requirements

To show the free memory size in GB (free -g rounds down to whole GB), run:

free -g

CPU requirements

To check the number of cores, run:

nproc
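The checks above can be combined into a small pre-flight sketch. The minimums below are the worker-node figures from the hardware requirements table; adjust them for a master node:

```shell
# Minimum worker-node specs from the hardware requirements table above.
# free -g rounds down, so a 16 GB machine may report 15.
min_cpu=4
min_ram_gb=15

cpus=$(nproc)
ram_gb=$(free -g 2>/dev/null | awk '/^Mem:/ {print $2}')
ram_gb=${ram_gb:-0}

[ "$cpus" -ge "$min_cpu" ] \
  && echo "CPU: OK ($cpus cores)" \
  || echo "CPU: below minimum ($cpus cores)"
[ "$ram_gb" -ge "$min_ram_gb" ] \
  && echo "RAM: OK (${ram_gb} GB)" \
  || echo "RAM: below minimum (${ram_gb} GB)"
```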