.. _install-on-rhel-with-gpus:

Install on RHEL with GPUs
-------------------------

This section describes how to install and start the Driverless AI Docker image on RHEL systems with GPUs. Note that the provided nvidia-docker rpm is for x86_64 machines. nvidia-docker has limited support for ppc64le machines. More information is available `here `__.

**Note**: As of this writing, Driverless AI has only been tested on RHEL version 7.4.

Open a Terminal and ssh to the machine that will run Driverless AI. Once you are logged in, perform the following steps.

1. Retrieve the Driverless AI package from https://www.h2o.ai/driverless-ai-download/.

2. Install and start Docker EE on RHEL (if not already installed). Follow the instructions on https://docs.docker.com/engine/installation/linux/docker-ee/rhel/.

   Alternatively, you can run on Docker CE, which works even though it is not officially supported.

   ::

       sudo yum install -y yum-utils
       sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
       sudo yum makecache fast
       sudo yum -y install docker-ce
       sudo systemctl start docker

3. Install nvidia-docker and the nvidia-docker plugin on RHEL (if not already installed):

   ::

       # Install nvidia-docker and nvidia-docker-plugin
       wget -P /tmp https://github.com/NVIDIA/nvidia-docker/releases/download/v1.0.1/nvidia-docker-1.0.1-1.x86_64.rpm
       sudo rpm -i /tmp/nvidia-docker*.rpm && rm /tmp/nvidia-docker*.rpm
       sudo systemctl start nvidia-docker

   **Note**: If you would like the nvidia-docker service to start automatically when the server is rebooted, run the following command. If you do not run this command, you will have to remember to start the nvidia-docker service manually; otherwise the GPUs will not appear as available.
   ::

       sudo systemctl enable nvidia-docker

   Alternatively, if you installed Docker CE above, you can install nvidia-docker with:

   ::

       curl -s -L https://nvidia.github.io/nvidia-docker/centos7/x86_64/nvidia-docker.repo | \
         sudo tee /etc/yum.repos.d/nvidia-docker.repo
       sudo yum install nvidia-docker2

4. Verify that the NVIDIA driver is up and running. If the driver is not up and running, log on to http://www.nvidia.com/Download/index.aspx?lang=en-us to get the latest NVIDIA Tesla V/P/K series driver.

   ::

       nvidia-docker run --rm nvidia/cuda nvidia-smi

5. Load the Driverless AI Docker image, replacing X.Y.Z below with your Driverless AI Docker image version (for example, 1.0.21).

   ::

       # Load the Driverless AI Docker image
       docker load < driverless-ai-docker-runtime-rel-X.Y.Z.gz

6. Set up the data, log, license, and tmp directories on the host machine:

   ::

       # Set up the data, log, license, and tmp directories on the host machine
       mkdir data
       mkdir log
       mkdir license
       mkdir tmp

7. At this point, you can copy data into the data directory on the host machine. The data will be visible inside the Docker container.

8. Start the Driverless AI Docker image with nvidia-docker:

   ::

       # Start the Driverless AI Docker image
       nvidia-docker run \
           --rm \
           -u `id -u`:`id -g` \
           -p 12345:12345 \
           -p 54321:54321 \
           -p 8888:8888 \
           -v `pwd`/data:/data \
           -v `pwd`/log:/log \
           -v `pwd`/license:/license \
           -v `pwd`/tmp:/tmp \
           opsh2oai/h2oai-runtime

   Driverless AI will begin running::

       ---------------------------------
       Welcome to H2O.ai's Driverless AI
       ---------------------------------
            version: X.Y.Z

       - Put data in the volume mounted at /data
       - Logs are written to the volume mounted at /log/YYYYMMDD-HHMMSS
       - Connect to Driverless AI on port 12345 inside the container
       - Connect to Jupyter notebook on port 8888 inside the container

9. Connect to Driverless AI with your browser at http://Your-Driverless-AI-Host-Machine:12345.
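The directory setup in step 6 and the long ``nvidia-docker run`` invocation in step 8 can be collected into a small shell script so the port and volume flags stay in one place. The following is only a sketch, not part of the official instructions: the ``DAI_IMAGE`` variable and the dry-run ``echo`` at the end are illustrative assumptions.

```shell
#!/bin/sh
# Sketch: assemble the launch command from steps 6 and 8.
# DAI_IMAGE and the dry-run echo are illustrative, not official.
set -eu

DAI_IMAGE="opsh2oai/h2oai-runtime"

# Create the host directories from step 6 (-p makes this idempotent).
for d in data log license tmp; do
  mkdir -p "$d"
done

# Build the flags used in step 8: run as the current user,
# publish the three ports, and mount the four host directories.
cmd="nvidia-docker run --rm -u $(id -u):$(id -g)"
for p in 12345 54321 8888; do
  cmd="$cmd -p $p:$p"
done
for d in data log license tmp; do
  cmd="$cmd -v $(pwd)/$d:/$d"
done
cmd="$cmd $DAI_IMAGE"

# Print the command instead of executing it, so it can be reviewed first.
echo "$cmd"
```

To actually launch the container, paste the printed command into the terminal (or replace the final ``echo`` with ``eval``).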
.. _install-on-rhel-cpus-only:

Install on RHEL with CPUs
-------------------------

This section describes how to install and start the Driverless AI Docker image on RHEL. Note that this uses Docker EE and not NVIDIA Docker. GPU support will not be available.

**Note**: As of this writing, Driverless AI has only been tested on RHEL version 7.4.

1. Install and start Docker EE on RHEL (if not already installed). Follow the instructions on https://docs.docker.com/engine/installation/linux/docker-ee/rhel/.

   Alternatively, you can run on Docker CE, which works even though it is not officially supported.

   ::

       sudo yum install -y yum-utils
       sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
       sudo yum makecache fast
       sudo yum -y install docker-ce
       sudo systemctl start docker

2. On the machine that is running Docker EE, retrieve the Driverless AI package from https://www.h2o.ai/driverless-ai-download/.

3. Load the Driverless AI Docker image, replacing X.Y.Z below with your Driverless AI Docker image version (for example, 1.0.21):

   ::

       # Load the Driverless AI Docker image
       docker load < driverless-ai-docker-runtime-rel-X.Y.Z.gz

4. Set up the data, log, license, and tmp directories.

   ::

       $ mkdir data
       $ mkdir log
       $ mkdir license
       $ mkdir tmp

5. Copy data into the **data** directory on the host. The data will be visible inside the Docker container at **/data**.

6. Start the Driverless AI Docker image with docker. GPU support will not be available.

   ::

       $ docker run \
           --rm \
           -u `id -u`:`id -g` \
           -p 12345:12345 \
           -p 54321:54321 \
           -p 8888:8888 \
           -v `pwd`/data:/data \
           -v `pwd`/log:/log \
           -v `pwd`/license:/license \
           -v `pwd`/tmp:/tmp \
           opsh2oai/h2oai-runtime

7. Connect to Driverless AI with your browser at http://Your-Driverless-AI-Host-Machine:12345.
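A mistyped version string in step 3 produces a confusing error from ``docker load``. One way to fail fast is to check that the archive file exists before loading it. This is a sketch under stated assumptions: ``DAI_VERSION`` is a hypothetical placeholder for your actual version (for example, 1.0.21), and the filename pattern is the one shown in step 3.

```shell
#!/bin/sh
# Sketch: guard the 'docker load' from step 3 with a file-existence check.
# DAI_VERSION is a hypothetical placeholder; substitute your real version.
set -eu

DAI_VERSION="X.Y.Z"
archive="driverless-ai-docker-runtime-rel-${DAI_VERSION}.gz"

if [ -f "$archive" ]; then
  # Archive found: load it into the local Docker image store.
  docker load < "$archive"
else
  # Fail fast with a message naming the missing file and directory.
  echo "Archive $archive not found in $(pwd); check DAI_VERSION" >&2
fi
```

The same check applies unchanged to the GPU install above, since both sections load the identical archive.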