 EMC VNX direct driver

Use the EMC VNX direct driver to create, attach, detach, and delete volumes, create and delete snapshots, and so on. This driver is based on the ISCSIDriver defined in Cinder.

To complete volume operations, the driver uses the Navisphere Secure command-line interface (NaviSecCLI) to communicate with the back-end EMC VNX storage.
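For example, the driver issues NaviSecCLI commands of the following general form against a storage processor of the array. This is only an illustration: 10.10.72.41 is the array IP address used later in this guide, while the user name, password, and scope value are placeholders:

    # naviseccli -h 10.10.72.41 -user username -password password -scope 0 lun -list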

 System requirements

  • Flare version 5.32 or later.

  • You must activate the VNX Snapshot and Clone licenses for the array. Ensure that all iSCSI ports on the VNX are accessible from the OpenStack hosts.

  • Navisphere CLI v7.32 or later.

EMC VNX Series storage arrays are supported.

 Supported operations

  • Create volume

  • Delete volume

  • Attach volume

  • Detach volume

  • Create snapshot

  • Delete snapshot

  • Create volume from snapshot

  • Create cloned volume

  • Copy image to volume

  • Copy volume to image

  • Extend volume

 Set up the VNX direct driver

Complete these high-level tasks to set up the VNX direct driver:

  1. Install NaviSecCLI. You must install the NaviSecCLI tool on the controller node and all the Cinder nodes in an OpenStack deployment. See the section called “Install NaviSecCLI”.

  2. Register with VNX. See the section called “Register with VNX”.

 Install NaviSecCLI

Log in to the EMC support web site (login is required) and download the NaviSecCLI package. Then, install the package:

On Ubuntu x64:

 

Procedure 1.1. To install NaviSecCLI on Ubuntu x64

  1. Create the /opt/Navisphere/bin/ directory:

    # mkdir -p /opt/Navisphere/bin/
  2. Copy the RPM package into the /opt/Navisphere/bin/ directory.

  3. Use alien to install the RPM package on Ubuntu:

    # cd /opt/Navisphere/bin
    # sudo apt-get install alien -y
    # sudo alien -i NaviCLI-Linux-64-x86-en_US-7.xx.xx.x.xx.x86_64.rpm

For all other Linux variants, install the RPM package directly.
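To verify the installation and connectivity to the array, you can run a simple query against a storage processor. In this sketch, 10.10.72.41 is the array IP address used later in this guide, the credentials are placeholders, and -scope 0 selects global scope:

    # /opt/Navisphere/bin/naviseccli -h 10.10.72.41 -user username -password password -scope 0 getagent

If the command returns the agent and array information, including the revision, NaviSecCLI is installed correctly and can reach the array.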

 Register with VNX

To export a VNX volume to a compute node or a volume node, you must register the node with VNX.

 

Procedure 1.2. To register the node

  1. On the compute node or volume node 1.1.1.1, do the following (assume 10.10.61.35 is the iSCSI target):

    # /etc/init.d/open-iscsi start
    # iscsiadm -m discovery -t st -p 10.10.61.35
    # cd /etc/iscsi
    # more initiatorname.iscsi
    # iscsiadm -m node
  2. Log in to VNX from the node using the target corresponding to the SPA port:

    # iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.a0 -p 10.10.61.35 -l

    Where iqn.1992-04.com.emc:cx.apm01234567890.a0 is the target corresponding to the SPA port. Log in to Unisphere, go to VNX00000->Hosts->Initiators, click Refresh, and wait until the initiator name of the node (shown in /etc/iscsi/initiatorname.iscsi) appears with SP Port A-8v0.

  3. Click Register, select CLARiiON/VNX, and enter the host name (for example, myhost1) and the IP address of the node. Click Register. Host 1.1.1.1 now also appears under Hosts->Host List.

  4. Log out of VNX on the node:

    # iscsiadm -m node -u
  5. Log in to VNX from the node using the target corresponding to the SPB port:

    # iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.b8 -p 10.10.10.11 -l
  6. In Unisphere, register the initiator with the SPB port.

  7. Log out:

    # iscsiadm -m node -u
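Optionally, you can confirm from the array side that both initiator records are now registered against the host. This check uses NaviSecCLI, assuming it is installed as described earlier; the IP address and credentials are placeholders:

    # /opt/Navisphere/bin/naviseccli -h 10.10.72.41 -user username -password password -scope 0 port -list -hba

The output lists the registered initiator records together with the server name and the SP ports they were registered against.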
 cinder.conf configuration file

Make the following changes in /etc/cinder/cinder.conf.

For the VNX iSCSI driver, add the following entries, where 10.10.61.35 is the IP address of the VNX iSCSI target, 10.10.72.41 is the IP address of the VNX array (SPA or SPB), default_timeout is the default timeout, in minutes, for CLI operations, and max_luns_per_storage_group is the default maximum number of LUNs in a storage group:

iscsi_ip_address = 10.10.61.35
san_ip = 10.10.72.41
san_login = global_username
san_password = password
naviseccli_path = /opt/Navisphere/bin/naviseccli
storage_vnx_pool_name = poolname
default_timeout = 10
max_luns_per_storage_group = 256
volume_driver = cinder.volume.drivers.emc.emc_cli_iscsi.EMCCLIISCSIDriver
Note

To find out max_luns_per_storage_group for each VNX model, refer to the EMC support web site (login is required).

Restart the cinder-volume service.
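For example, on Ubuntu the service can be restarted as follows; the service name may differ on other distributions (for example, openstack-cinder-volume on RHEL-based systems):

    # service cinder-volume restart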

 Volume type support

Volume type support allows the user to choose between thick and thin provisioning.

Here is an example of how to set up volume types. First, create the volume types. Then, define extra specs for each volume type.

 

Procedure 1.3. To set up volume types

  1. Set up the volume types:

    $ cinder type-create "TypeA"
    $ cinder type-create "TypeB"
  2. Set the volume type extra specs:

    $ cinder type-key "TypeA" set storagetype:provisioning=thick
    $ cinder type-key "TypeB" set storagetype:provisioning=thin

The previous example creates two volume types: TypeA and TypeB. For TypeA, storagetype:provisioning is set to thick. Similarly, for TypeB, storagetype:provisioning is set to thin. If storagetype:provisioning is not specified, it defaults to thick.
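Once the volume types exist, a volume can be created with the desired provisioning by passing the type name to the cinder client. The volume name and size below are only examples:

    $ cinder create --volume-type "TypeB" --display-name "thin_vol_1" 1

This creates a 1 GB thin-provisioned volume on the VNX back end.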
