OpenShift Origin can be configured to access VMware vSphere VMDK volumes, including using VMware vSphere VMDK volumes as persistent storage for application data.

The vSphere Cloud Provider allows using vSphere-managed storage within OpenShift Origin and supports:

* Volumes
* Persistent Volumes
* Storage Classes and provisioning of volumes (see the sketch after this list)
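Once the cloud provider is enabled (the steps follow below), dynamic provisioning can be driven by a StorageClass that uses the kubernetes.io/vsphere-volume provisioner. A minimal sketch; the vsphere-standard name and the thin disk format are illustrative values, not requirements:

$ oc create -f - <<EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: vsphere-standard
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
EOF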
To enable the VMware vSphere cloud provider for OpenShift Origin:

. Create a VM folder and move the OpenShift Origin node VMs to this folder.

. Verify that the node VM names comply with the regex [a-z](([-0-9a-z]+)?[0-9a-z])?(\.[a-z0-9](([-0-9a-z]+)?[0-9a-z])?)*.

VM names can not:

* begin with numbers.
* contain any capital letters or any special characters except . and -.
* be shorter than three characters or longer than 63 characters.

. Set the disk.EnableUUID parameter to TRUE for each node VM. This ensures that the VMDK always presents a consistent UUID to the VM, allowing the disk to be mounted properly. For every virtual machine node that will be participating in the cluster, follow the steps below using the govc tool:
Set up the GOVC environment:
export GOVC_URL='vCenter IP OR FQDN'
export GOVC_USERNAME='vCenter User'
export GOVC_PASSWORD='vCenter Password'
export GOVC_INSECURE=1
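Before proceeding, you can confirm that govc can reach vCenter with these credentials; govc about prints basic version information for the endpoint:

govc about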
Find the Node VM paths:
govc ls /datacenter/vm/<vm-folder-name>
Set disk.EnableUUID to true for all VMs:
govc vm.change -e="disk.enableUUID=1" -vm='VM Path'
If the OpenShift Origin node VMs are created from a template VM, then disk.enableUUID=1 can be set on the template VM. VMs cloned from this template inherit this property.
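To spot-check that the change took effect on a given VM, its ExtraConfig can be inspected with govc (the VM path placeholder is the same as above):

govc vm.info -e 'VM Path' | grep -i enableUUID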
Create and assign roles to the vSphere Cloud Provider user and vSphere entities. The vSphere Cloud Provider requires the following privileges to interact with vCenter. See the vSphere Documentation Center for steps to create a custom role, a user, and the role assignment; a govc alternative is sketched after the table below.
| Roles | Privileges | Entities | Propagate to Children |
|---|---|---|---|
| manage-k8s-node-vms | Resource.AssignVMToPool, System.Anonymous, System.Read, System.View, VirtualMachine.Config.AddExistingDisk, VirtualMachine.Config.AddNewDisk, VirtualMachine.Config.AddRemoveDevice, VirtualMachine.Config.RemoveDisk, VirtualMachine.Inventory.Create, VirtualMachine.Inventory.Delete | Cluster, Hosts, VM Folder | Yes |
| manage-k8s-volumes | Datastore.AllocateSpace, Datastore.FileManagement, System.Anonymous, System.Read, System.View | Datastore | No |
| k8s-system-read-and-spbm-profile-view | StorageProfile.View, System.Anonymous, System.Read, System.View | vCenter | No |
| ReadOnly | System.Anonymous, System.Read, System.View | Datacenter, Datastore Cluster, Datastore Storage Folder | No |
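If you prefer the CLI to the vSphere web client, the same roles can be created and assigned with govc. A sketch for the manage-k8s-volumes role only; the principal name and the inventory path are placeholders for your environment:

govc role.create manage-k8s-volumes Datastore.AllocateSpace Datastore.FileManagement System.Anonymous System.Read System.View
govc permissions.set -principal 'vcp-user@vsphere.local' -role manage-k8s-volumes -propagate=false '/<datacenter-name>/datastore/<datastore-name>'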
After enabling the vSphere Cloud Provider, node names are set to the VM names from the vCenter inventory.
Configuring OpenShift Origin for VMware vSphere requires the /etc/origin/cloudprovider/vsphere.conf file on each node host.
If the file does not exist, create it, and add the following:
[Global]
user = "username" (1)
password = "password" (2)
server = "10.10.0.2" (3)
port = "443" (4)
insecure-flag = "1" (5)
datacenter = "datacenter-name" (6)
datastore = "datastore-name" (7)
working-dir = "vm-folder-path" (8)
vm-uuid = "vm-uuid" (9)

[Disk]
scsicontrollertype = pvscsi
1. vCenter username for the vSphere cloud provider.
2. vCenter password for the specified user.
3. IP address or FQDN of the vCenter server.
4. (Optional) Port number for the vCenter server. Defaults to port 443.
5. Set to 1 if the vCenter uses a self-signed certificate.
6. Name of the data center on which the node VMs are deployed.
7. Name of the datastore to use for provisioning volumes using the storage classes or dynamic provisioning. If the datastore is located in a storage folder or is a member of a datastore cluster, specify the full datastore path. Verify that the vSphere Cloud Provider user has the read privilege set on the datastore cluster or storage folder so that it can find the datastore.
8. (Optional) The vCenter VM folder path in which the node VMs are located. It can be set to an empty path (working-dir = "") if the node VMs are located in the root VM folder.
9. (Optional) VM instance UUID of the node VM. It can be set to empty (vm-uuid = ""). If empty, it is retrieved from the /sys/class/dmi/id/product_serial file on the virtual machine (requires root access).
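The values for datacenter, datastore, and working-dir can be cross-checked against the vCenter inventory with govc, reusing the GOVC_* environment set up earlier (the bracketed names are placeholders):

govc ls /
govc ls '/<datacenter-name>/datastore'
govc ls '/<datacenter-name>/vm'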
Edit or create the master configuration file on all masters (/etc/origin/master/master-config.yaml by default) and update the contents of the apiServerArguments and controllerArguments sections with the following:
kubernetesMasterConfig:
  admissionConfig:
    pluginConfig:
      {}
  apiServerArguments:
    cloud-provider:
      - "vsphere"
    cloud-config:
      - "/etc/origin/cloudprovider/vsphere.conf"
  controllerArguments:
    cloud-provider:
      - "vsphere"
    cloud-config:
      - "/etc/origin/cloudprovider/vsphere.conf"
When triggering a containerized installation, only the /etc/origin and /var/lib/origin directories are mounted to the master and node container. Therefore, master-config.yaml must be in /etc/origin/master rather than /etc/.
Edit or create the node configuration file on all nodes (/etc/origin/node/node-config.yaml by default) and update the contents of the kubeletArguments section:
kubeletArguments:
  cloud-provider:
    - "vsphere"
  cloud-config:
    - "/etc/origin/cloudprovider/vsphere.conf"
When triggering a containerized installation, only the /etc/origin and /var/lib/origin directories are mounted to the master and node container. Therefore, node-config.yaml must be in /etc/origin/node rather than /etc/.
Start or restart OpenShift Origin services on all master and node hosts to apply your configuration changes; see Restarting OpenShift Origin services:

# systemctl restart origin-master-api origin-master-controllers
# systemctl restart origin-node
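Once the services are back up, a quick way to confirm that the nodes re-registered under their vCenter VM names (see the note on node naming above):

$ oc get nodes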
Switching from not using a cloud provider to using a cloud provider produces an error message. Adding the cloud provider tries to delete the node, because the node switches from using the hostname as the externalID (which would have been the case when no cloud provider was being used) to using the cloud provider's instance identifier (for vSphere, the VM name from the vCenter inventory). To resolve this issue:
Log in to the CLI as a cluster administrator.
Check and back up existing node labels:
$ oc describe node <node_name> | grep -Poz '(?s)Labels.*\n.*(?=Taints)'
Delete the nodes:
$ oc delete node <node_name>
On each node host, restart the OpenShift Origin service.
# systemctl restart origin-node
Re-apply to each node the labels that you backed up previously.
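Labels can be re-applied with oc label; a hypothetical example, assuming a region=primary label appeared in the backup:

$ oc label node <node_name> region=primary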