These OpenStack Block Storage volume drivers provide iSCSI and NFS support for Hitachi NAS Platform (HNAS) models 3080, 3090, 4040, 4060, 4080, and 4100 with NAS OS 12.2 or higher.
The NFS and iSCSI drivers support these operations:
Before using iSCSI and NFS services, use the HNAS configuration and management GUI (SMU) or SSC CLI to configure HNAS to work with the drivers. Additionally:
- It is mandatory to have at least 1 storage pool, 1 EVS and 1 file system to be able to run any of the HNAS drivers.
- The file system used should not be created as a replication target and should be mounted.
- All compute nodes and controllers in the cloud must have access to the EVSs.
- For NFS, create the exports with a path different from / and set the Show snapshots option to hide and disable access to snapshots.
- Set norootsquash in the share Access configuration so Block Storage services can change the permissions of its volumes. For example, "* (rw, norootsquash)".
- We recommend setting max-nfs-version to 3. Refer to the Hitachi NAS Platform command line reference to see how to configure this option.

The HNAS drivers are supported for Red Hat Enterprise Linux OpenStack Platform, SUSE OpenStack Cloud, and Ubuntu OpenStack. The following packages must be installed on all compute, controller and storage (if any) nodes:
- nfs-utils for Red Hat Enterprise Linux OpenStack Platform
- nfs-client for SUSE OpenStack Cloud
- nfs-common, libc6-i386 for Ubuntu OpenStack

If you are installing the driver from an RPM or DEB package, follow the steps below:
Install the dependencies:
In Red Hat:
# yum install nfs-utils nfs-utils-lib
Or in Ubuntu:
# apt-get install nfs-common
Or in SUSE:
# zypper install nfs-client
If you are using Ubuntu 12.04, you also need to install libc6-i386:
# apt-get install libc6-i386
Configure the driver as described in the Driver configuration section.
Restart all Block Storage services (volume, scheduler, and backup).
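For example, on a node that uses systemd, restarting the services typically looks like the following (the openstack- prefixed unit names shown are the Red Hat packaging convention; Ubuntu and SUSE packages usually name the units cinder-volume, cinder-scheduler, and cinder-backup):
# systemctl restart openstack-cinder-volume openstack-cinder-scheduler openstack-cinder-backup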
HNAS supports a variety of storage options and file system capabilities,
which are selected through the definition of volume types combined with the
use of multiple back ends and multiple services. Each back end can configure
up to 4 service pools, which can be mapped to cinder volume types.
The configuration for the driver is read from the back-end sections of the cinder.conf file. Each back-end section must have the appropriate configurations to communicate with your HNAS back end, such as the IP address of the HNAS EVS that is hosting your data, HNAS SSH access credentials, the configuration of each of the services in that back end, and so on. You can find examples of such configurations in the Configuration example section.
Note
The HNAS cinder drivers still support the XML configuration in the same way as older versions, but we recommend configuring the HNAS cinder drivers only through the cinder.conf file, since the XML configuration file is deprecated as of the Newton release.
Note
We do not recommend the use of the same NFS export or file system (iSCSI driver) for different back ends. If possible, configure each back end to use a different NFS export/file system.
The following is the definition of each configuration option that can be used in an HNAS back-end section of the cinder.conf file:
Option | Type | Default | Description |
---|---|---|---|
volume_backend_name | Optional | N/A | A name that identifies the back end and can be used as an extra-spec to redirect the volumes to the referenced back end. |
volume_driver | Required | N/A | The python module path to the HNAS volume driver python class. When installing through the rpm or deb packages, you should configure this to cinder.volume.drivers.hitachi.hnas_iscsi.HNASISCSIDriver for the iSCSI back end or cinder.volume.drivers.hitachi.hnas_nfs.HNASNFSDriver for the NFS back end. |
nfs_shares_config | Required (only for NFS) | /etc/cinder/nfs_shares | Path to the nfs_shares file. This is required by the base cinder generic NFS driver and therefore also required by the HNAS NFS driver. This file should list, one per line, every NFS share being used by the back end. For example, all the values found in the configuration keys hnas_svcX_hdp in the HNAS NFS back-end sections. |
hnas_mgmt_ip0 | Required | N/A | HNAS management IP address. Should be the IP address of the Admin EVS. It is also the IP through which you access the web SMU administration frontend of HNAS. |
hnas_chap_enabled | Optional (iSCSI only) | True | Boolean tag used to enable the CHAP authentication protocol for the iSCSI driver. |
hnas_username | Required | N/A | HNAS SSH username |
hds_hnas_nfs_config_file, hds_hnas_iscsi_config_file | Optional (deprecated) | /opt/hds/hnas/cinder_[nfs|iscsi]_conf.xml | Path to the deprecated XML configuration file (only required if using the XML file) |
hnas_cluster_admin_ip0 | Optional (required only for HNAS multi-farm setups) | N/A | The IP of the HNAS farm admin. If your SMU controls more than one system or cluster, this option must be set with the IP of the desired node. This is different for HNAS multi-cluster setups, which do not require this option to be set. |
hnas_ssh_private_key | Optional | N/A | Path to the SSH private key used to authenticate to the HNAS SMU. Only required if you do not want to set hnas_password. |
hnas_ssh_port | Optional | 22 | Port on which HNAS is listening for SSH connections |
hnas_password | Required (unless hnas_ssh_private_key is provided) | N/A | HNAS password |
hnas_svcX_hdp [1] | Required (at least 1) | N/A | HDP (export or file system) where the volumes will be created. Use export paths for the NFS back end or file system names for the iSCSI back end (note that when using the file system name, it does not contain the IP address of the HDP). |
hnas_svcX_iscsi_ip | Required (only for iSCSI) | N/A | The IP of the EVS that contains the file system specified in hnas_svcX_hdp |
hnas_svcX_volume_type | Required | N/A | A unique string that is used to refer to this pool within the context of cinder. You can tell cinder to put volumes of a specific volume type into this back end, within this pool. See the Service Labels and Configuration example sections for more details. |
[1] Replace X with a number from 0 to 3 (keep the sequence when configuring the driver).
The HNAS driver supports differentiated types of service using service labels. You can create up to 4 of them for each back end (for example: gold, platinum, silver, ssd, and so on).
After creating the services in the cinder.conf configuration file, you need to configure one cinder volume_type per service. Each volume_type must have the metadata service_label with the same name configured in the hnas_svcX_volume_type option of that service. See the Configuration example section for more details. If the volume_type is not set, the cinder service pool with the largest available free space, or another criterion configured in the scheduler filters, will be used.
$ openstack volume type create default
$ openstack volume type set --property service_label=default default
$ openstack volume type create platinum-tier
$ openstack volume type set --property service_label=platinum platinum-tier
You can deploy multiple OpenStack HNAS Driver instances (back ends) that each controls a separate HNAS or a single HNAS. If you use multiple cinder back ends, remember that each cinder back end can host up to 4 services. Each back-end section must have the appropriate configurations to communicate with your HNAS back end, such as the IP address of the HNAS EVS that is hosting your data, HNAS SSH access credentials, the configuration of each of the services in that back end, and so on. You can find examples of such configurations in the Configuration example section.
If you want the volumes from a volume_type to be created in a specific back end, you must configure an extra-spec in the volume_type with the value of the volume_backend_name option from that back end.
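For example, assuming a back end configured with volume_backend_name = hnas_nfs_backend (as in the configuration example later in this document), the extra-spec can be added to an existing volume type such as the hnas_nfs_gold type used in that example:
$ openstack volume type set --property volume_backend_name=hnas_nfs_backend hnas_nfs_gold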
When configuring multiple NFS back ends, each back end should have a separate nfs_shares_config option and a separate nfs_shares file defined (for example, nfs_shares1, nfs_shares2), with the desired shares listed on separate lines.
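A minimal sketch of such a layout, assuming two hypothetical back ends named hnas-nfs1 and hnas-nfs2 (each nfs_shares file lists only the exports that belong to its back end, one per line):
[DEFAULT]
enabled_backends = hnas-nfs1, hnas-nfs2

[hnas-nfs1]
volume_driver = cinder.volume.drivers.hitachi.hnas_nfs.HNASNFSDriver
nfs_shares_config = /home/cinder/nfs_shares1
…

[hnas-nfs2]
volume_driver = cinder.volume.drivers.hitachi.hnas_nfs.HNASNFSDriver
nfs_shares_config = /home/cinder/nfs_shares2
…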
Note
As of the Newton OpenStack release, the user can no longer run the driver using a locally installed instance of the SSC utility package. Instead, all communications with the HNAS back end are handled through SSH.
You can use your username and password to authenticate the Block Storage node to the HNAS back end. In order to do that, simply configure hnas_username and hnas_password in your back-end section within the cinder.conf file.
For example:
[hnas-backend]
…
hnas_username = supervisor
hnas_password = supervisor
Alternatively, the HNAS cinder driver also supports SSH authentication through public key. To configure that:
If you do not have an SSH key pair already generated, create one on the Block Storage node (leave the passphrase empty):
$ mkdir -p /opt/hitachi/ssh
$ ssh-keygen -f /opt/hitachi/ssh/hnaskey
Change the owner of the key to cinder (or the user the volume service will be run as):
# chown -R cinder.cinder /opt/hitachi/ssh
Create the directory ssh_keys on the SMU server:
$ ssh [manager|supervisor]@<smu-ip> 'mkdir -p /var/opt/mercury-main/home/[manager|supervisor]/ssh_keys/'
Copy the public key to the ssh_keys directory:
$ scp /opt/hitachi/ssh/hnaskey.pub [manager|supervisor]@<smu-ip>:/var/opt/mercury-main/home/[manager|supervisor]/ssh_keys/
Access the SMU server:
$ ssh [manager|supervisor]@<smu-ip>
Run the command to register the SSH keys:
$ ssh-register-public-key -u [manager|supervisor] -f ssh_keys/hnaskey.pub
Check the communication with HNAS in the Block Storage node:
For multi-farm HNAS:
$ ssh -i /opt/hitachi/ssh/hnaskey [manager|supervisor]@<smu-ip> 'ssc <cluster_admin_ip0> df -a'
Or, for Single-node/Multi-Cluster:
$ ssh -i /opt/hitachi/ssh/hnaskey [manager|supervisor]@<smu-ip> 'ssc localhost df -a'
Configure your back-end section in cinder.conf to use your public key:
[hnas-backend]
…
hnas_ssh_private_key = /opt/hitachi/ssh/hnaskey
If there are existing volumes on HNAS that you want to import into cinder, you can use the manage volume feature to do this. The manage action on an existing volume is very similar to a volume creation. It creates a volume entry in the cinder database, but instead of creating a new volume in the back end, it only adds a link to an existing volume.
Note
This is an admin-only feature, and you have to be logged in as a user with admin rights to use it.
By CLI:
$ cinder manage [--id-type <id-type>][--name <name>][--description <description>]
[--volume-type <volume-type>][--availability-zone <availability-zone>]
[--metadata [<key=value> [<key=value> ...]]][--bootable] <host> <identifier>
Example:
For NFS:
$ cinder manage --name volume-test --volume-type silver \
  ubuntu@hnas-nfs#test_silver 172.24.44.34:/silver/volume-test
For iSCSI:
$ cinder manage --name volume-test --volume-type silver \
  ubuntu@hnas-iscsi#test_silver filesystem-test/volume-test
The manage snapshots feature works very similarly to the manage volumes feature, which is currently supported by the HNAS cinder drivers. So, if you have a volume already managed by cinder which has snapshots that are not managed by cinder, it is possible to use manage snapshots to import these snapshots and link them with their original volume.
Note
For the HNAS NFS cinder driver, snapshots of volumes are clones of volumes that were created using file-clone-create, not the HNAS snapshot-* feature. Check the HNAS user documentation for details about these two features.
Currently, the manage snapshots function does not support importing snapshots (generally created by the storage’s file-clone operation) without parent volumes, or when the parent volume is in-use. In this case, the manage volumes feature should be used to import the snapshot as a normal cinder volume.
Also, this is an admin-only feature, and you have to be logged in as a user with admin rights to use it.
Note
Although there is a verification to prevent importing snapshots using non-related volumes as parents, it is possible to manage a snapshot using any related cloned volume. So, when managing a snapshot, it is extremely important to make sure that you are using the correct parent volume.
For NFS:
$ cinder snapshot-manage <volume> <identifier>
Example:
$ cinder snapshot-manage 061028c0-60cf-499f-99e2-2cd6afea081f 172.24.44.34:/export1/snapshot-test
Note
This feature is currently available only for HNAS NFS Driver.
Below are configuration examples for both NFS and iSCSI backends:
HNAS NFS Driver
For the HNAS NFS driver, create this section in your cinder.conf file:
[hnas-nfs]
volume_driver = cinder.volume.drivers.hitachi.hnas_nfs.HNASNFSDriver
nfs_shares_config = /home/cinder/nfs_shares
volume_backend_name = hnas_nfs_backend
hnas_username = supervisor
hnas_password = supervisor
hnas_mgmt_ip0 = 172.24.44.15
hnas_svc0_volume_type = nfs_gold
hnas_svc0_hdp = 172.24.49.21:/gold_export
hnas_svc1_volume_type = nfs_platinum
hnas_svc1_hdp = 172.24.49.21:/silver_platinum
hnas_svc2_volume_type = nfs_silver
hnas_svc2_hdp = 172.24.49.22:/silver_export
hnas_svc3_volume_type = nfs_bronze
hnas_svc3_hdp = 172.24.49.23:/bronze_export
Add it to the enabled_backends list, under the DEFAULT section of your cinder.conf file:
[DEFAULT]
enabled_backends = hnas-nfs
Add the configured exports to the nfs_shares file:
172.24.49.21:/gold_export
172.24.49.21:/silver_platinum
172.24.49.22:/silver_export
172.24.49.23:/bronze_export
Register a volume type with cinder and associate it with this backend:
$ openstack volume type create hnas_nfs_gold
$ openstack volume type set --property volume_backend_name=hnas_nfs_backend \
service_label=nfs_gold hnas_nfs_gold
$ openstack volume type create hnas_nfs_platinum
$ openstack volume type set --property volume_backend_name=hnas_nfs_backend \
service_label=nfs_platinum hnas_nfs_platinum
$ openstack volume type create hnas_nfs_silver
$ openstack volume type set --property volume_backend_name=hnas_nfs_backend \
service_label=nfs_silver hnas_nfs_silver
$ openstack volume type create hnas_nfs_bronze
$ openstack volume type set --property volume_backend_name=hnas_nfs_backend \
service_label=nfs_bronze hnas_nfs_bronze
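As a quick sanity check, a volume created with one of these types should then be scheduled to the matching service pool of the hnas-nfs back end (the volume name and size below are only illustrative):
$ openstack volume create --type hnas_nfs_gold --size 1 test-gold-volume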
HNAS iSCSI Driver
For the HNAS iSCSI driver, create this section in your cinder.conf file:
[hnas-iscsi]
volume_driver = cinder.volume.drivers.hitachi.hnas_iscsi.HNASISCSIDriver
volume_backend_name = hnas_iscsi_backend
hnas_username = supervisor
hnas_password = supervisor
hnas_mgmt_ip0 = 172.24.44.15
hnas_chap_enabled = True
hnas_svc0_volume_type = iscsi_gold
hnas_svc0_hdp = FS-gold
hnas_svc0_iscsi_ip = 172.24.49.21
hnas_svc1_volume_type = iscsi_platinum
hnas_svc1_hdp = FS-platinum
hnas_svc1_iscsi_ip = 172.24.49.21
hnas_svc2_volume_type = iscsi_silver
hnas_svc2_hdp = FS-silver
hnas_svc2_iscsi_ip = 172.24.49.22
hnas_svc3_volume_type = iscsi_bronze
hnas_svc3_hdp = FS-bronze
hnas_svc3_iscsi_ip = 172.24.49.23
Add it to the enabled_backends list, under the DEFAULT section of your cinder.conf file:
[DEFAULT]
enabled_backends = hnas-nfs, hnas-iscsi
Register a volume type with cinder and associate it with this backend:
$ openstack volume type create hnas_iscsi_gold
$ openstack volume type set --property volume_backend_name=hnas_iscsi_backend \
service_label=iscsi_gold hnas_iscsi_gold
$ openstack volume type create hnas_iscsi_platinum
$ openstack volume type set --property volume_backend_name=hnas_iscsi_backend \
service_label=iscsi_platinum hnas_iscsi_platinum
$ openstack volume type create hnas_iscsi_silver
$ openstack volume type set --property volume_backend_name=hnas_iscsi_backend \
service_label=iscsi_silver hnas_iscsi_silver
$ openstack volume type create hnas_iscsi_bronze
$ openstack volume type set --property volume_backend_name=hnas_iscsi_backend \
service_label=iscsi_bronze hnas_iscsi_bronze
The get_volume_stats() function always provides the available capacity based on the combined sum of all the HDPs that are used in these service labels.
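To see the capacity that each configured pool is reporting to the scheduler, you can list the scheduler pools with the cinder client (an admin-only command):
$ cinder get-pools --detail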
After changing the configuration on the storage node, the Block Storage driver must be restarted.
On Red Hat, if the system is configured to use SELinux, you need to set virt_use_nfs = on for the NFS driver to work properly.
# setsebool -P virt_use_nfs on
It is not possible to manage a volume if there is a slash (/) or a colon (:) in the volume name.
File system auto-expansion: Although supported, we do not recommend using file systems with the auto-expansion setting enabled because the scheduler uses the file system capacity reported by the driver to determine if new volumes can be created. For instance, in a setup with a file system that can expand to 200GB but is at 100GB capacity, with 10GB free, the scheduler will not allow a 15GB volume to be created. In this case, manual expansion would have to be triggered by an administrator. We recommend always creating the file system at the maximum capacity or periodically expanding the file system manually.
iSCSI driver limitations: The iSCSI driver has a limit of 1024 volumes attached to instances.
The hnas_svcX_volume_type option must be unique for a given back end.
SSC simultaneous connections limit: In very busy environments, if 2 or more volume hosts are configured to use the same storage, some requests (create, delete, and so on) can have attempts fail and be retried (5 attempts by default) due to an HNAS connection limitation (maximum of 5 simultaneous connections).