Most of the basic information regarding devices is covered in Components of a ZFS Storage Pool. Once a pool has been created, you can perform several tasks to manage the physical devices within the pool.
You can dynamically add space to a pool by adding a new top-level virtual device. This space is immediately available to all datasets within the pool. To add a new virtual device to a pool, use the zpool add command. For example:
# zpool add zeepool mirror c2t1d0 c2t2d0
The format of the virtual devices is the same as for the zpool create command, and the same rules apply. Devices are checked to determine if they are in use, and the command cannot change the replication level without the -f option. The command also supports the -n option so that you can perform a dry run. For example:
# zpool add -n zeepool mirror c3t1d0 c3t2d0
would update 'zeepool' to the following configuration:
      zeepool
        mirror
            c1t0d0
            c1t1d0
        mirror
            c2t1d0
            c2t2d0
        mirror
            c3t1d0
            c3t2d0
This command syntax would add mirrored devices c3t1d0 and c3t2d0 to zeepool's existing configuration.
For more information about how virtual device validation is done, see Detecting In-Use Devices.
In addition to the zpool add command, you can use the zpool attach command to add a new device to an existing mirrored or non-mirrored device. For example:
# zpool attach zeepool c1t1d0 c2t1d0
If the existing device is part of a two-way mirror, attaching the new device creates a three-way mirror, and so on. In either case, the new device begins to resilver immediately.
In this example, zeepool is an existing two-way mirror that is transformed to a three-way mirror by attaching c2t1d0, the new device, to the existing device, c1t1d0.
You can use the zpool detach command to detach a device from a pool. For example:
# zpool detach zeepool c2t1d0
However, this operation is refused if there are no other valid replicas of the data. For example:
# zpool detach newpool c1t2d0
cannot detach c1t2d0: only applicable to mirror and replacing vdevs
ZFS allows individual devices to be taken offline or brought online. When hardware is unreliable or not functioning properly, ZFS continues to read or write data to the device, assuming the condition is only temporary. If the condition is not temporary, it is possible to instruct ZFS to ignore the device by bringing it offline. ZFS does not send any requests to an offlined device.
Devices do not need to be taken offline in order to replace them.
You can use the offline command when you need to temporarily disconnect storage. For example, if you need to physically disconnect an array from one set of Fibre Channel switches and connect the array to a different set, you could take the LUNs offline from the array that was used in ZFS storage pools. After the array was reconnected and operational on the new set of switches, you could then bring the same LUNs online. Data that had been added to the storage pools while the LUNs were offline would resilver to the LUNs after they were brought back online.
This scenario is possible assuming that the systems in question see the storage once it is attached to the new switches, possibly through different controllers than before, and your pools are set up as RAID-Z or mirrored configurations.
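The array-migration workflow described above might look like the following command sequence. This is a sketch only: the pool name tank and the c5t* LUN names are hypothetical examples, so substitute the names from your own configuration.

```shell
# Take the LUNs in the affected array offline before disconnecting it
# (pool and device names are hypothetical).
zpool offline tank c5t0d0
zpool offline tank c5t1d0

# ... physically move the array to the new set of Fibre Channel switches ...

# Bring the same LUNs back online; ZFS resilvers any data that was
# written to the pool while they were offline.
zpool online tank c5t0d0
zpool online tank c5t1d0

# Verify that the pool has returned to a healthy state.
zpool status tank
```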
You can take a device offline by using the zpool offline command. The device can be specified by path or by short name, if the device is a disk. For example:
# zpool offline tank c1t0d0
bringing device c1t0d0 offline
You cannot take a pool offline to the point where it becomes faulted. For example, you cannot take offline two devices out of a RAID-Z configuration, nor can you take offline a top-level virtual device.
# zpool offline tank c1t0d0
cannot offline c1t0d0: no valid replicas
Currently, you cannot replace a device that has been taken offline.
Offlined devices show up in the OFFLINE state when you query pool status. For information about querying pool status, see Querying ZFS Storage Pool Status.
By default, the offline state is persistent. The device remains offline when the system is rebooted.
To temporarily take a device offline, use the zpool offline -t option. For example:
# zpool offline -t tank c1t0d0
bringing device 'c1t0d0' offline
When the system is rebooted, this device is automatically returned to the ONLINE state.
For more information on device health, see Health Status of ZFS Storage Pools.
Once a device is taken offline, it can be restored by using the zpool online command:
# zpool online tank c1t0d0
bringing device c1t0d0 online
When a device is brought online, any data that has been written to the pool is resynchronized to the newly available device. Note that you cannot use device onlining to replace a disk. If you offline a device, replace the drive, and try to bring it online, it remains in the faulted state.
If you attempt to online a faulted device, a message similar to the following is displayed from fmd:
# zpool online tank c1t0d0
bringing device c1t0d0 online
#
SUNW-MSG-ID: ZFS-8000-D3, TYPE: Fault, VER: 1, SEVERITY: Major
EVENT-TIME: Fri Mar 17 14:38:47 MST 2006
PLATFORM: SUNW,Ultra-60, CSN: -, HOSTNAME: neo
SOURCE: zfs-diagnosis, REV: 1.0
EVENT-ID: 043bb0dd-f0a5-4b8f-a52d-8809e2ce2e0a
DESC: A ZFS device failed. Refer to http://sun.com/msg/ZFS-8000-D3 for more information.
AUTO-RESPONSE: No automated response will occur.
IMPACT: Fault tolerance of the pool may be compromised.
REC-ACTION: Run 'zpool status -x' and replace the bad device.
For more information on replacing a faulted device, see Repairing a Missing Device.
If a device is taken offline due to a failure that causes errors to be listed in the zpool status output, you can clear the error counts with the zpool clear command.
If specified with no arguments, this command clears all device errors within the pool. For example:
# zpool clear tank
If one or more devices are specified, this command only clears the errors associated with the specified devices. For example:
# zpool clear tank c1t0d0
For more information on clearing zpool errors, see Clearing Transient Errors.
You can replace a device in a storage pool by using the zpool replace command.
# zpool replace tank c1t1d0 c1t2d0
In this example, the previous device, c1t1d0, is replaced by c1t2d0.
The replacement device must be greater than or equal to the minimum size of all the devices in a mirror or RAID-Z configuration. If the replacement device is larger, the pool size in an unmirrored or non-RAID-Z configuration is increased when the replacement is complete.
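The replacement workflow can be sketched as follows. The pool and device names are hypothetical; note that zpool replace also accepts a single device argument for the case where the new disk has been physically inserted at the same location as the old one.

```shell
# Replace a disk with a new one at a different location
# (pool and device names are hypothetical).
zpool replace tank c1t1d0 c1t2d0

# Or, after physically swapping the disk in the same slot, tell ZFS
# to rebuild onto the new disk at the same location.
zpool replace tank c1t1d0

# Check resilvering progress until the replacement completes.
zpool status tank
```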
For more information about replacing devices, see Repairing a Missing Device and Repairing a Damaged Device.