 Chapter 14. Object Storage Node Lab

 Day 9, 13:30 to 14:45 and 15:00 to 17:00

 Installing Object Node

  1. Create a swift user that the Object Storage Service can use to authenticate with the Identity Service. Choose a password and specify an email address for the swift user. Use the service tenant and give the user the admin role:

    $ keystone user-create --name=swift --pass=SWIFT_PASS \
      --email=[email protected]
    $ keystone user-role-add --user=swift --tenant=service --role=admin
  2. Create a service entry for the Object Storage Service:

    $ keystone service-create --name=swift --type=object-store \
      --description="OpenStack Object Storage"
    +-------------+----------------------------------+
    |   Property  |              Value               |
    +-------------+----------------------------------+
    | description |     OpenStack Object Storage     |
    |      id     | eede9296683e4b5ebfa13f5166375ef6 |
    |     name    |              swift               |
    |     type    |           object-store           |
    +-------------+----------------------------------+
    Note:

    The service ID is randomly generated and is different from the one shown here.

  3. Specify an API endpoint for the Object Storage Service by using the returned service ID. When you specify an endpoint, you provide URLs for the public API, internal API, and admin API. In this guide, the controller host name is used:

    $ keystone endpoint-create \
      --service-id=$(keystone service-list | awk '/ object-store / {print $2}') \
      --publicurl='http://controller:8080/v1/AUTH_%(tenant_id)s' \
      --internalurl='http://controller:8080/v1/AUTH_%(tenant_id)s' \
      --adminurl=http://controller:8080
    +-------------+---------------------------------------------------+
    |   Property  |                       Value                       |
    +-------------+---------------------------------------------------+
    |   adminurl  |            http://controller:8080                 |
    |      id     |          9e3ce428f82b40d38922f242c095982e         |
    | internalurl | http://controller:8080/v1/AUTH_%(tenant_id)s      |
    |  publicurl  | http://controller:8080/v1/AUTH_%(tenant_id)s      |
    |    region   |                     regionOne                     |
    |  service_id |          eede9296683e4b5ebfa13f5166375ef6         |
    +-------------+---------------------------------------------------+
  4. Create the configuration directory on all nodes:

    # mkdir -p /etc/swift
  5. Create /etc/swift/swift.conf on all nodes:

    [swift-hash]
    # random unique string that can never change (DO NOT LOSE)
    swift_hash_path_suffix = fLIbertYgibbitZ
    
Note:

The suffix value in /etc/swift/swift.conf should be set to some random string of text to be used as a salt when hashing to determine mappings in the ring. This file must be the same on every node in the cluster!
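One way to generate such a suffix is to read a few bytes from the kernel's random source. This is an illustrative sketch, not a command mandated by this lab; any sufficiently random, hard-to-guess string works:

```shell
# Print 8 random bytes as 16 hex characters, suitable for use as
# swift_hash_path_suffix. Illustrative only; any random string works.
suffix=$(od -An -tx8 -N8 /dev/urandom | tr -d ' \n')
echo "swift_hash_path_suffix = ${suffix}"
```

Generate the value once, record it somewhere safe, and copy the same swift.conf to every node; changing the suffix later makes existing data unreachable.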

Next, set up your storage nodes and proxy node. This example uses the Identity Service for the common authentication piece.

 Configuring Object Node

Note:

Object Storage works on any file system that supports Extended Attributes (XATTRS). XFS shows the best overall performance for the swift use case after considerable testing and benchmarking at Rackspace. It is also the only file system that has been thoroughly tested. See the OpenStack Configuration Reference for additional recommendations.

  1. Install storage node packages.

    On Ubuntu and Debian:

    # apt-get install swift swift-account swift-container swift-object xfsprogs

    On RHEL, CentOS, and Fedora:

    # yum install openstack-swift-account openstack-swift-container \
      openstack-swift-object xfsprogs xinetd

    On openSUSE and SLES:

    # zypper install openstack-swift-account openstack-swift-container \
      openstack-swift-object python-xml xfsprogs xinetd
  2. For each device on the node that you want to use for storage, set up the XFS volume (/dev/sdb is used as an example). Use a single partition per drive. For example, in a server with 12 disks, you might dedicate one or two disks to the operating system; do not touch those in this step. Partition each of the remaining 10 or 11 disks with a single partition, then format it with XFS.

    # fdisk /dev/sdb
    # mkfs.xfs /dev/sdb1
    # echo "/dev/sdb1 /srv/node/sdb1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 0" >> /etc/fstab
    # mkdir -p /srv/node/sdb1
    # mount /srv/node/sdb1
    # chown -R swift:swift /srv/node
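    With several data disks, the per-device steps above repeat. The sketch below only prints the /etc/fstab entries for review; the device names sdb1 through sdd1 are assumptions, and running mkfs.xfs against the wrong device destroys data, so verify the list before acting on it:

```shell
# Dry run: print the fstab line for each assumed data device.
# Append the lines to /etc/fstab only after checking the device names.
for dev in sdb1 sdc1 sdd1; do
    echo "/dev/$dev /srv/node/$dev xfs noatime,nodiratime,nobarrier,logbufs=8 0 0"
done
```

    Each printed line corresponds to one mkfs.xfs, mkdir, and mount in the step above.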
  3. Create /etc/rsyncd.conf with the following content:

    uid = swift
    gid = swift
    log file = /var/log/rsyncd.log
    pid file = /var/run/rsyncd.pid
    address = STORAGE_LOCAL_NET_IP
    
    [account]
    max connections = 2
    path = /srv/node/
    read only = false
    lock file = /var/lock/account.lock
    
    [container]
    max connections = 2
    path = /srv/node/
    read only = false
    lock file = /var/lock/container.lock
    
    [object]
    max connections = 2
    path = /srv/node/
    read only = false
    lock file = /var/lock/object.lock
  4. (Optional) To keep rsync replication traffic on a dedicated replication network, set the address to STORAGE_REPLICATION_NET_IP instead of STORAGE_LOCAL_NET_IP:

    address = STORAGE_REPLICATION_NET_IP
  5. On Ubuntu and Debian, edit the following line in /etc/default/rsync:

    RSYNC_ENABLE=true
  6. On RHEL, CentOS, Fedora, openSUSE, and SLES, edit the following line in /etc/xinetd.d/rsync:

    disable = false
  7. Start the rsync service.

    On Ubuntu and Debian:

    # service rsync start

    On RHEL, CentOS, Fedora, openSUSE, and SLES, start the xinetd service and configure it to start when the system boots:

    # service xinetd start
    # chkconfig xinetd on
    Note:

    The rsync service requires no authentication, so run it on a local, private network.

  8. Create the swift recon cache directory and set its permissions:

    # mkdir -p /var/swift/recon
    # chown -R swift:swift /var/swift/recon

 Configuring Object Proxy

The proxy server takes each incoming request, looks up the location of the account, container, or object, and routes the request accordingly. It also handles API requests. You enable account management by configuring it in the /etc/swift/proxy-server.conf file.

Note:

The Object Storage processes run under a separate user and group, set by configuration options, and referred to as swift:swift. The default user is swift.

  1. Install the swift-proxy packages.

    On Ubuntu and Debian:

    # apt-get install swift-proxy memcached python-keystoneclient python-swiftclient python-webob

    On RHEL, CentOS, and Fedora:

    # yum install openstack-swift-proxy memcached python-swiftclient python-keystone-auth-token

    On openSUSE and SLES:

    # zypper install openstack-swift-proxy memcached python-swiftclient python-keystoneclient python-xml
  2. Configure memcached to listen on a local, non-public interface rather than all interfaces.

    On Ubuntu and Debian, edit this line in the /etc/memcached.conf file:

    -l 127.0.0.1

    Change it to:

    -l PROXY_LOCAL_NET_IP
  3. On RHEL, CentOS, and Fedora, set the listen address in the /etc/sysconfig/memcached file:

    OPTIONS="-l PROXY_LOCAL_NET_IP"

    On openSUSE and SLES, set it in the same file with:

    MEMCACHED_PARAMS="-l PROXY_LOCAL_NET_IP"
  4. On Ubuntu and Debian, restart the memcached service:

    # service memcached restart
  5. On RHEL, CentOS, Fedora, openSUSE, and SLES, start the memcached service and configure it to start when the system boots:

    # service memcached start
    # chkconfig memcached on
  6. Create or edit /etc/swift/proxy-server.conf:

    [DEFAULT]
    bind_port = 8080
    user = swift
    
    [pipeline:main]
    pipeline = healthcheck cache authtoken keystoneauth proxy-server
    
    [app:proxy-server]
    use = egg:swift#proxy
    allow_account_management = true
    account_autocreate = true
    
    [filter:keystoneauth]
    use = egg:swift#keystoneauth
    operator_roles = Member,admin,swiftoperator
    
    [filter:authtoken]
    paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
    
    # Delaying the auth decision is required to support token-less
    # usage for anonymous referrers ('.r:*').
    delay_auth_decision = true
    
    # cache directory for signing certificate
    signing_dir = /home/swift/keystone-signing
    
    # auth_* settings refer to the Keystone server
    auth_protocol = http
    auth_host = controller
    auth_port = 35357
    
    # the service tenant and swift username and password created in Keystone
    admin_tenant_name = service
    admin_user = swift
    admin_password = SWIFT_PASS
    
    [filter:cache]
    use = egg:swift#memcache
    
    [filter:catch_errors]
    use = egg:swift#catch_errors
    
    [filter:healthcheck]
    use = egg:swift#healthcheck
    
    Note:

    If you run multiple memcached servers, list all of the IP:port pairs, comma-separated, in the memcache_servers option of the [filter:cache] section of the /etc/swift/proxy-server.conf file:

    memcache_servers = 10.1.2.3:11211,10.1.2.4:11211

    Only the proxy server uses memcached.

  7. Create the account, container, and object rings. The builder create command makes a builder file from three parameters. The first, 18, is the partition power: the ring is divided into 2^18 partitions. Choose this value based on the total amount of storage you expect the entire ring to serve. The second, 3, is the number of replicas of each object. The last, 1, is the minimum number of hours before a given partition can be moved again.

    # cd /etc/swift
    # swift-ring-builder account.builder create 18 3 1
    # swift-ring-builder container.builder create 18 3 1
    # swift-ring-builder object.builder create 18 3 1
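    The partition count implied by the partition power can be checked with shell arithmetic. The sizing guideline below (at least 100 partitions per disk at the ring's largest expected size) is a common rule of thumb, not something this lab mandates:

```shell
# A partition power of 18 divides the ring into 2^18 partitions.
part_power=18
partitions=$((1 << part_power))
echo "partitions: $partitions"   # 262144
# Rule of thumb (assumption): aim for >= 100 partitions per disk, so a
# power of 18 comfortably covers a ring of up to about 2,600 disks.
echo "max disks at 100 partitions/disk: $((partitions / 100))"
```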
  8. For every storage device on each node, add an entry to each ring:

    # swift-ring-builder account.builder add zZONE-STORAGE_LOCAL_NET_IP:6002[RSTORAGE_REPLICATION_NET_IP:6005]/DEVICE 100
    # swift-ring-builder container.builder add zZONE-STORAGE_LOCAL_NET_IP_1:6001[RSTORAGE_REPLICATION_NET_IP:6004]/DEVICE 100
    # swift-ring-builder object.builder add zZONE-STORAGE_LOCAL_NET_IP_1:6000[RSTORAGE_REPLICATION_NET_IP:6003]/DEVICE 100
    Note:

    Omit the optional RSTORAGE_REPLICATION_NET_IP:PORT portion if you do not use a dedicated network for replication.

    For example, suppose a storage node in Zone 1 has IP address 10.0.0.1 on the storage network and 10.0.1.1 on the replication network. If the partition is mounted at /srv/node/sdb1 and the path in /etc/rsyncd.conf is /srv/node/, then DEVICE is sdb1 and the commands are:

    # swift-ring-builder account.builder add z1-10.0.0.1:6002R10.0.1.1:6005/sdb1 100
    # swift-ring-builder container.builder add z1-10.0.0.1:6001R10.0.1.1:6004/sdb1 100
    # swift-ring-builder object.builder add z1-10.0.0.1:6000R10.0.1.1:6003/sdb1 100
    Note:

    For example, if you deploy five zones with one node per zone, start ZONE at 1 and increment it by 1 for each additional node.
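    On a node with more than one data disk, the three add commands repeat per device, so it can help to generate them in a loop. The sketch below only prints the commands rather than running them; the zone, IP addresses, and device names are assumptions carried over from the example above:

```shell
# Print (do not run) the ring "add" commands for every device on one
# example node. Adjust zone, IPs, and devices before executing anything.
zone=1
storage_ip=10.0.0.1       # STORAGE_LOCAL_NET_IP (assumed)
replication_ip=10.0.1.1   # STORAGE_REPLICATION_NET_IP (assumed)
for dev in sdb1 sdc1; do
    echo "swift-ring-builder account.builder add z${zone}-${storage_ip}:6002R${replication_ip}:6005/${dev} 100"
    echo "swift-ring-builder container.builder add z${zone}-${storage_ip}:6001R${replication_ip}:6004/${dev} 100"
    echo "swift-ring-builder object.builder add z${zone}-${storage_ip}:6000R${replication_ip}:6003/${dev} 100"
done
```

    Review the printed commands, then run them from /etc/swift on the node that holds the builder files.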

  9. Verify the ring contents for each ring:

    # swift-ring-builder account.builder
    # swift-ring-builder container.builder
    # swift-ring-builder object.builder
  10. Rebalance the rings:

    # swift-ring-builder account.builder rebalance
    # swift-ring-builder container.builder rebalance
    # swift-ring-builder object.builder rebalance
    Note:

    Rebalancing rings can take some time.

  11. Copy the account.ring.gz, container.ring.gz, and object.ring.gz files to /etc/swift on each of the proxy and storage nodes.

  12. Make sure the swift user owns all configuration files:

    # chown -R swift:swift /etc/swift
  13. On Ubuntu and Debian, restart the proxy service:

    # service swift-proxy restart
  14. On RHEL, CentOS, Fedora, openSUSE, and SLES, start the proxy service and configure it to start when the system boots:

    # service openstack-swift-proxy start
    # chkconfig openstack-swift-proxy on

 Starting Object Node Services

Now that the ring files are on each storage node, you can start the services. On each storage node, run the following command.

On Ubuntu and Debian:

# for service in \
  swift-object swift-object-replicator swift-object-updater swift-object-auditor \
  swift-container swift-container-replicator swift-container-updater swift-container-auditor \
  swift-account swift-account-replicator swift-account-reaper swift-account-auditor; do \
      service $service start; done

On RHEL, CentOS, Fedora, openSUSE, and SLES:

# for service in \
  openstack-swift-object openstack-swift-object-replicator openstack-swift-object-updater openstack-swift-object-auditor \
  openstack-swift-container openstack-swift-container-replicator openstack-swift-container-updater openstack-swift-container-auditor \
  openstack-swift-account openstack-swift-account-replicator openstack-swift-account-reaper openstack-swift-account-auditor; do \
    service $service start; chkconfig $service on; done
Note:

To start all swift services at once, run the command:

# swift-init all start

To learn more about the swift-init command, run:

$ man swift-init