The StorageClass resource object describes and classifies storage that can be
requested, and provides a means for passing parameters for dynamically
provisioned storage on demand. StorageClass objects can also serve as a
management mechanism for controlling different levels of storage and access
to the storage. Cluster Administrators (cluster-admin) or Storage
Administrators (storage-admin) define and create the StorageClass objects
that users can request without needing any intimate knowledge of the
underlying storage volume sources.
The OpenShift Origin persistent volume framework enables this functionality and allows administrators to provision a cluster with persistent storage. The framework also gives users a way to request those resources without having any knowledge of the underlying infrastructure.
Many storage types are available for use as persistent volumes in OpenShift Origin. While all of them can be statically provisioned by an administrator, some types of storage are created dynamically using the built-in provider and plug-in APIs.
OpenShift Origin provides the following provisioner plug-ins, which have generic implementations for dynamic provisioning that use the cluster’s configured provider’s API to create new storage resources:
Storage Type | Provisioner Plug-in Name | Notes
---|---|---
OpenStack Cinder | kubernetes.io/cinder |
AWS Elastic Block Store (EBS) | kubernetes.io/aws-ebs | For dynamic provisioning when using multiple clusters in different zones, tag each node with the appropriate cluster ID.
GCE Persistent Disk (gcePD) | kubernetes.io/gce-pd | In multi-zone configurations, it is advisable to run one OpenShift cluster per GCE project to avoid PVs being created in zones where no node of the current cluster exists.
GlusterFS | kubernetes.io/glusterfs | Container Native Storage (CNS) uses Heketi to manage Gluster storage.
Ceph RBD | kubernetes.io/rbd |
Trident from NetApp | netapp.io/trident | Storage orchestrator for NetApp ONTAP, SolidFire, and E-Series storage.
VMware vSphere | kubernetes.io/vsphere-volume |
Azure Disk | kubernetes.io/azure-disk |

Any chosen provisioner plug-in also requires configuration for the relevant cloud, host, or third-party provider, as described in the relevant documentation.
StorageClass objects are currently cluster-scoped and must be created by
cluster-admin or storage-admin users.
For GCE and AWS, a default StorageClass is created during OpenShift Origin installation. You can change the default StorageClass or delete it.
There are currently eight supported plug-ins. The following sections describe the basic object definition for a StorageClass and specific examples for each of the supported plug-in types.
kind: StorageClass (1)
apiVersion: storage.k8s.io/v1 (2)
metadata:
name: foo (3)
annotations: (4)
...
provisioner: kubernetes.io/plug-in-type (5)
parameters: (6)
param1: value
...
paramN: value
1 | (required) The API object type. |
2 | (required) The current apiVersion. |
3 | (required) The name of the StorageClass. |
4 | (optional) Annotations for the StorageClass. |
5 | (required) The type of provisioner associated with this storage class. |
6 | (optional) The parameters required for the specific provisioner; these vary from plug-in to plug-in. |
To set a StorageClass as the cluster-wide default:
storageclass.kubernetes.io/is-default-class: "true"
This enables any Persistent Volume Claim (PVC) that does not request a specific storage class to be provisioned automatically through the default StorageClass.
To set a StorageClass description:
kubernetes.io/description: My StorageClass Description
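As a sketch of how the default class is consumed, a PVC that makes no storage class selection is bound through the default StorageClass. The claim name and size below are illustrative, not taken from this document:

```yaml
# Hypothetical claim: no storage class is specified, so the
# cluster's default StorageClass dynamically provisions the volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```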
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: gold
provisioner: kubernetes.io/cinder
parameters:
type: fast (1)
availability: nova (2)
fsType: ext4 (3)
1 | Volume type created in Cinder. Default is empty. |
2 | Availability Zone. If not specified, volumes are generally round-robined across all active zones where the OpenShift Origin cluster has a node. |
3 | File system that is created on dynamically provisioned volumes. This value is copied to the fsType field of dynamically provisioned persistent volumes, and the file system is created when the volume is mounted for the first time. The default value is ext4. |
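A claim can also select this class explicitly. A minimal sketch, assuming the cluster supports the spec.storageClassName field (the claim name and size here are hypothetical):

```yaml
# Hypothetical claim that explicitly requests the "gold"
# Cinder-backed StorageClass defined above.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cinder-claim
spec:
  storageClassName: gold
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```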
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: slow
provisioner: kubernetes.io/aws-ebs
parameters:
type: io1 (1)
zone: us-east-1d (2)
iopsPerGB: "10" (3)
encrypted: "true" (4)
kmsKeyId: keyvalue (5)
fsType: ext4 (6)
1 | Select from io1 , gp2 , sc1 , st1 . The default is gp2 . See the AWS documentation for details. |
2 | AWS zone. If no zone is specified, volumes are generally round-robined across all active zones where the OpenShift Origin cluster has a node. Zone and zones parameters must not be used at the same time. |
3 | Only for io1 volumes. I/O operations per second per GiB. The AWS volume plug-in multiplies this with the size of the requested volume to compute IOPS of the volume. The value cap is 20,000 IOPS, which is the maximum supported by AWS. See AWS documentation for further details. |
4 | Denotes whether to encrypt the EBS volume. Valid values are true or false . |
5 | Optional. The full ARN of the key to use when encrypting the volume. If none is supplied, but encrypted is set to true , then AWS generates a key. See the AWS documentation for a valid ARN value. |
6 | File system that is created on dynamically provisioned volumes. This value is copied to the fsType field of dynamically provisioned persistent volumes, and the file system is created when the volume is mounted for the first time. The default value is ext4. |
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: slow
provisioner: kubernetes.io/gce-pd
parameters:
type: pd-standard (1)
zone: us-central1-a (2)
zones: us-central1-a, us-central1-b, us-east1-b (3)
fsType: ext4 (4)
1 | Select either pd-standard or pd-ssd . The default is pd-ssd. |
2 | GCE zone. If no zone is specified, volumes are generally round-robined across all active zones where the OpenShift Origin cluster has a node. Zone and zones parameters must not be used at the same time. |
3 | A comma-separated list of GCE zone(s). If no zone is specified, volumes are generally round-robined across all active zones where the OpenShift Origin cluster has a node. Zone and zones parameters must not be used at the same time. |
4 | File system that is created on dynamically provisioned volumes. This value is copied to the fsType field of dynamically provisioned persistent volumes, and the file system is created when the volume is mounted for the first time. The default value is ext4. |
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: slow
provisioner: kubernetes.io/glusterfs
parameters:
resturl: "http://127.0.0.1:8081" (1)
restuser: "admin" (2)
secretName: "heketi-secret" (3)
secretNamespace: "default" (4)
gidMin: "40000" (5)
gidMax: "50000" (6)
1 | Gluster REST service/Heketi service URL that provisions Gluster volumes on demand. The general format should be {http/https}://{IPaddress}:{Port} . This is a mandatory parameter for the GlusterFS dynamic provisioner. If the Heketi service is exposed as a routable service in OpenShift Origin, it has a resolvable fully qualified domain name and Heketi service URL. For additional information and configuration, see Container-Native Storage for OpenShift Container Platform. |
2 | Gluster REST service/Heketi user who has access to create volumes in the Gluster Trusted Pool. |
3 | Identification of a Secret instance that contains a user password to use when
talking to the Gluster REST service. Optional; an empty password will be used
when both secretNamespace and secretName are omitted. The provided secret
must be of type "kubernetes.io/glusterfs" . |
4 | The namespace of secretName . Optional; an empty password will be used when both secretNamespace and secretName are omitted. The provided secret must be of type "kubernetes.io/glusterfs" . |
5 | Optional. The minimum value of GID range for the storage class. |
6 | Optional. The maximum value of GID range for the storage class. |
When the gidMin and gidMax values are not specified, the volume is
provisioned with a value between 2000 and 2147483647, which are the defaults
for gidMin and gidMax respectively. If specified, a unique value (GID) in
this range (gidMin-gidMax) is used for each dynamically provisioned volume,
and the GID of the provisioned volume is set to this value. Heketi version 3
or later is required to make use of this feature. The GID is released back to
the pool when the volume is deleted. The GID pool is per storage class; if
two or more storage classes have GID ranges that overlap, the provisioner can
dispatch duplicate GIDs.
When persistent volumes are dynamically provisioned, the Gluster plug-in
automatically creates an endpoint and a headless service named
gluster-dynamic-<claimname>. When the persistent volume claim is deleted,
this dynamic endpoint and service are deleted automatically.
apiVersion: v1
kind: Secret
metadata:
name: heketi-secret
namespace: default
data:
# base64 encoded password. E.g.: echo -n "mypassword" | base64
key: bXlwYXNzd29yZA==
type: kubernetes.io/glusterfs
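The key value in the secret above is the base64 encoding of the plain-text password, which can be produced as follows (the password "mypassword" is only an illustration, matching the comment in the example):

```shell
# Base64-encode the Heketi password for the secret's data.key field.
# -n suppresses the trailing newline so the encoded value is exact.
echo -n "mypassword" | base64
# → bXlwYXNzd29yZA==
```

The resulting string is what goes into the data.key field of the Secret object.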
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: fast
provisioner: kubernetes.io/rbd
parameters:
monitors: 10.16.153.105:6789 (1)
adminId: admin (2)
adminSecretName: ceph-secret (3)
adminSecretNamespace: kube-system (4)
pool: kube (5)
userId: kube (6)
userSecretName: ceph-secret-user (7)
fsType: ext4 (8)
1 | Ceph monitors, comma-delimited. It is required. |
2 | Ceph client ID that is capable of creating images in the pool. Default is "admin". |
3 | Secret Name for adminId . It is required. The provided secret must have type "kubernetes.io/rbd". |
4 | The namespace for adminSecret . Default is "default". |
5 | Ceph RBD pool. Default is "rbd". |
6 | Ceph client ID that is used to map the Ceph RBD image. Default is the same as adminId . |
7 | The name of Ceph Secret for userId to map Ceph RBD image. It must exist in the same namespace as PVCs. It is required. |
8 | File system that is created on dynamically provisioned volumes. This value is copied to the fsType field of dynamically provisioned persistent volumes, and the file system is created when the volume is mounted for the first time. The default value is ext4. |
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: gold
provisioner: netapp.io/trident (1)
parameters: (2)
media: "ssd"
provisioningType: "thin"
snapshots: "true"
Trident uses the parameters as selection criteria for the different pools of storage that are registered with it. Trident itself is configured separately.
1 | For more information about installing Trident with OpenShift Origin, see the Trident documentation. |
2 | For more information about supported parameters, see the storage attributes section of the Trident documentation. |
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
name: slow
provisioner: kubernetes.io/vsphere-volume (1)
parameters:
diskformat: thin (2)
1 | For more information about using VMware vSphere with OpenShift Origin, see the VMware vSphere documentation. |
2 | diskformat : thin , zeroedthick , or eagerzeroedthick . See the vSphere documentation for details. The default is thin . |
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: slow
provisioner: kubernetes.io/azure-disk
parameters:
skuName: Standard_LRS (1)
location: eastus (2)
storageAccount: azure_storage_account_name (3)
1 | Azure storage account SKU tier. Default is empty. |
2 | Azure storage account location. Default is empty. |
3 | Azure storage account name. This must reside in the same resource group as the cluster. If a storage account is specified, the location is ignored. If a storage account is not specified, a new storage account gets created in the same resource group as the cluster. |
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: slow
provisioner: kubernetes.io/azure-disk
parameters:
storageaccounttype: Standard_LRS (1)
kind: Shared (2)
1 | Azure storage account SKU tier. Default is empty. Note: A premium VM can attach both Standard_LRS and Premium_LRS disks, a standard VM can attach only Standard_LRS disks, a managed VM can attach only managed disks, and an unmanaged VM can attach only unmanaged disks. |
2 | Possible values are shared (default), dedicated , and managed . When kind is shared , all unmanaged disks are created in a few shared storage accounts in the same resource group as the cluster. When kind is dedicated , a new dedicated storage account gets created for the new unmanaged disk in the same resource group as the cluster. When kind is managed , a new managed disk gets created. |
If you are using GCE or AWS, use the following process to change the default StorageClass:
List the StorageClass:
$ oc get storageclass
NAME            TYPE
gp2 (default)   kubernetes.io/aws-ebs (1)
standard        kubernetes.io/gce-pd
1 | (default) denotes the default StorageClass. |
Change the value of the annotation storageclass.kubernetes.io/is-default-class to false for the default StorageClass:
$ oc patch storageclass gp2 -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
Make another StorageClass the default by adding or modifying the annotation storageclass.kubernetes.io/is-default-class=true.
$ oc patch storageclass standard -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
Verify the changes:
$ oc get storageclass
NAME                 TYPE
gp2                  kubernetes.io/aws-ebs
standard (default)   kubernetes.io/gce-pd