IBM General Parallel File System (GPFS) is a cluster file system that provides concurrent access to file systems from multiple nodes. The storage provided by these nodes can be direct attached, network attached, SAN attached, or a combination of these methods. GPFS provides many features beyond common data access, including data replication, policy-based storage management, and space-efficient file snapshot and clone operations.
The GPFS driver enables the use of GPFS in a fashion similar to that of the NFS driver. With the GPFS driver, instances do not actually access a storage device at the block level. Instead, volume backing files are created in a GPFS file system and mapped to instances, emulating a block device.
Note
GPFS software must be installed and running on the nodes where the Block Storage and Compute services run in the OpenStack environment. A GPFS file system must also be created and mounted on these nodes before starting the cinder-volume service. The details of these GPFS-specific steps are covered in GPFS: Concepts, Planning, and Installation Guide and GPFS: Administration and Programming Reference.
Optionally, the Image service can be configured to store images on a GPFS file system. When a Block Storage volume is created from an image, if both image data and volume data reside in the same GPFS file system, the data from the image file is moved efficiently to the volume file using a copy-on-write optimization strategy.
To use the Block Storage service with the GPFS driver, first set the volume_driver in the cinder.conf file:
volume_driver = cinder.volume.drivers.ibm.gpfs.GPFSDriver
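The driver also needs to know where volume files should live inside the mounted GPFS file system. A minimal sketch of the resulting [DEFAULT] section, assuming the file system is mounted under the example path /gpfs:
[DEFAULT]
volume_driver = cinder.volume.drivers.ibm.gpfs.GPFSDriver
# Example path; it must point inside the mounted GPFS file system.
gpfs_mount_point_base = /gpfs/cinder/volumes
Restart the cinder-volume service after changing cinder.conf so the new settings take effect.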
The following table contains the configuration options supported by the GPFS driver.
Note
The gpfs_images_share_mode flag is only valid if the Image service is configured to use GPFS with the gpfs_images_dir flag. When the value of this flag is copy_on_write, the paths specified by the gpfs_mount_point_base and gpfs_images_dir flags must both reside in the same GPFS file system and in the same GPFS file set.
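As a hedged sketch of a copy_on_write layout, the following settings place both paths in one hypothetical GPFS file system mounted at /gpfs/openstack; the directory names are illustrative, not required:
# cinder.conf
gpfs_mount_point_base = /gpfs/openstack/volumes
gpfs_images_dir = /gpfs/openstack/images
gpfs_images_share_mode = copy_on_write
The Image service must be configured to store its images in the same /gpfs/openstack/images directory for the copy-on-write sharing to apply.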
It is possible to specify additional volume configuration options on a per-volume basis by supplying volume metadata at creation time. The volume is created using the specified options; changing the metadata after the volume is created has no effect. Options such as fstype, fslabel, and dio are supported by the GPFS volume driver, as shown in the example that follows the configuration table below.
Configuration option = Default value | Description
---|---
[DEFAULT] |
gpfs_images_dir = None | (String) Specifies the path of the Image service repository in GPFS. Leave undefined if not storing images in GPFS.
gpfs_images_share_mode = None | (String) Specifies the type of image copy to be used. Set this when the Image service repository also uses GPFS so that image files can be transferred efficiently from the Image service to the Block Storage service. There are two valid values: “copy” specifies that a full copy of the image is made; “copy_on_write” specifies that copy-on-write optimization strategy is used and unmodified blocks of the image file are shared efficiently.
gpfs_max_clone_depth = 0 | (Integer) Specifies an upper limit on the number of indirections required to reach a specific block due to snapshots or clones. A lengthy chain of copy-on-write snapshots or clones can have a negative impact on performance, but improves space utilization. 0 indicates unlimited clone depth.
gpfs_mount_point_base = None | (String) Specifies the path of the GPFS directory where Block Storage volume and snapshot files are stored.
gpfs_sparse_volumes = True | (Boolean) Specifies that volumes are created as sparse files which initially consume no space. If set to False, the volume is created as a fully allocated file, in which case creation may take a significantly longer time.
gpfs_storage_pool = system | (String) Specifies the storage pool that volumes are assigned to. By default, the system storage pool is used.
nas_host = | (String) IP address or hostname of the NAS system.
nas_login = admin | (String) User name to connect to the NAS system.
nas_password = | (String) Password to connect to the NAS system.
nas_private_key = | (String) Filename of the private key to use for SSH authentication.
nas_ssh_port = 22 | (Port number) SSH port to use to connect to the NAS system.
This example shows the creation of a 50 GB volume with an ext4 file system labeled newfs and direct IO enabled:
$ openstack volume create --property fstype=ext4 --property fslabel=newfs \
  --property dio=yes --size 50 VOLUME
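If desired, the recorded options can be verified after creation; VOLUME is the volume name or ID used above:
$ openstack volume show VOLUME
The fstype, fslabel, and dio pairs appear in the properties field of the output.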
Volume snapshots are implemented using the GPFS file clone feature. Whenever a new snapshot is created, the snapshot file is efficiently created as a read-only clone parent of the volume, and the volume file uses a copy-on-write optimization strategy to minimize data movement.
Similarly, when a new volume is created from a snapshot or from an existing volume, the same approach is taken. The same approach is also used when a new volume is created from an Image service image, if the source image is in raw format and gpfs_images_share_mode is set to copy_on_write.
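The following commands illustrate these three paths; all names are placeholders:
$ openstack volume snapshot create --volume VOLUME SNAPSHOT1
$ openstack volume create --snapshot SNAPSHOT1 --size 50 VOLUME2
$ openstack volume create --source VOLUME --size 50 VOLUME3
In each case the new file is created as a GPFS clone, so unmodified blocks are shared and only blocks that are later changed consume additional space.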
The GPFS driver supports the encrypted volume back-end feature. To encrypt a volume at rest, specify the extra specification gpfs_encryption_rest = True.
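One way to attach this extra specification is through a volume type; the type name GPFS_ENCRYPTED below is an arbitrary example:
$ openstack volume type create GPFS_ENCRYPTED
$ openstack volume type set --property gpfs_encryption_rest=True GPFS_ENCRYPTED
$ openstack volume create --type GPFS_ENCRYPTED --size 50 VOLUME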