[Pkg-zfsonlinux-devel] Bug#867299: zfs-dkms: Unable to expand a pool when underlying vdev has grown.
Craig Sanders
cas at taz.net.au
Thu Jul 6 03:31:04 UTC 2017
On Thu, Jul 06, 2017 at 01:44:27AM +0800, Aron Xu wrote:
> It's not advisable to grow a pool by increasing the vdev size, but rather
> you can add more vdevs to the pool to grow it in capacity, and it would be
> safer for your underlying data.
You can add vdevs, or you can replace the devices in a vdev with larger
devices. Adding vdevs is faster and less work, but it needs to be understood
that vdevs can never be removed once added: you will have permanently changed
the structure of your pool. Replacing the devices within a vdev requires each
drive to be replaced individually, and is slow, tedious work with a lot of
time spent waiting for one resilver to finish before you can proceed with the
next.
To increase the size of a vdev you have to replace ALL of its block devices
(disks, partitions, logical volumes, files, etc) with larger ones using the
'zpool replace' command.
When all devices in a vdev have been replaced with larger devices, the vdev
and thus the zpool will increase in size (assuming the pool's autoexpand
property is on; otherwise the extra space can be claimed afterwards with
'zpool online -e'). e.g. to increase a raid-z made up of 4x1TB drives to
4x2TB drives, you have to replace each drive, **one at a time**.
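As a rough sketch (the pool name 'tank' and the by-id device names below are
made up for illustration, substitute your own), each step looks something
like this:

    # swap one old 1TB drive for a new 2TB drive
    zpool replace tank /dev/disk/by-id/ata-OLD_1TB_1 /dev/disk/by-id/ata-NEW_2TB_1

    # wait for the resilver to complete before touching the next drive
    zpool status tank

    # then repeat the replace/resilver cycle for each remaining 1TB drive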
The zpool will only gain the extra storage capacity when ALL drives in a vdev
have been replaced. The same is true for, e.g., a 2-drive mirror vdev, and
for a single-drive vdev with no redundancy (although that only has one
device, so of course only one has to be replaced).
i.e. if you have a 2x1TB mirror vdev and only replace one of the drives with
a 2TB drive, the vdev will still be only 1TB in size. It won't become a 2TB
vdev (or increase in size at all) until you replace the 2nd 1TB drive with
another 2TB or larger drive.
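You can watch this with 'zpool list -v', which shows the size of each vdev as
well as of the pool (the pool name 'tank' is again just a placeholder):

    # per-vdev sizes; the mirror stays at ~1T until both members are 2TB
    zpool list -v tank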
To enlarge a vdev that has a single 20GB device to one with 40GB, you have
to use 'zpool replace' to replace the original 20GB device with a new 40GB
device. Note that if you simply 'zpool add' a new 40GB device to the pool you
will end up with what is effectively a RAID-0 of two vdevs, a 20GB vdev and a
40GB vdev...you do not want to do this.
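In other words (the pool name and device paths here are hypothetical):

    # correct: grow the existing single-device vdev from 20GB to 40GB
    zpool replace mypool /dev/sdb /dev/sdc

    # wrong for this purpose: adds a second top-level vdev, striping the
    # pool across a 20GB vdev and a 40GB vdev, and it cannot be undone
    #zpool add mypool /dev/sdc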
from the zpool man page:
  zpool replace [-f] [-o property=value] pool old_device [new_device]

      Replaces old_device with new_device. This is equivalent to attaching
      new_device, waiting for it to resilver, and then detaching old_device.

      The size of new_device must be greater than or equal to the minimum
      size of all the devices in a mirror or raidz configuration.

      new_device is required if the pool is not redundant. If new_device
      is not specified, it defaults to old_device. This form of replacement
      is useful after an existing disk has failed and has been physically
      replaced. In this case, the new disk may have the same /dev path as
      the old device, even though it is actually a different disk. ZFS
      recognizes this.

      -f  Forces use of new_device, even if it appears to be in use. Not
          all devices can be overridden in this manner.

      -o property=value
          Sets the given pool properties. See the "Properties" section for
          a list of valid properties that can be set. The only property
          supported at the moment is ashift. Do note that some properties
          (among them ashift) are not inherited from a previous vdev. They
          are vdev specific, not pool specific.
Note that for large pools with lots of drives it is faster, simpler, and less
prone to human error (and to disk failures while resilvering) to create a
completely new, larger zpool, use 'zfs send' to transfer the contents of the
old pool, and then retire the old one.
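Roughly (pool names, vdev layout, and the snapshot name below are all
placeholders):

    # create the new, larger pool on the new drives
    zpool create newpool raidz /dev/disk/by-id/ata-NEWDISK1 ...

    # snapshot everything on the old pool and send it across
    zfs snapshot -r oldpool@migrate
    zfs send -R oldpool@migrate | zfs receive -Fdu newpool

    # verify the copy, then export or destroy the old pool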
craig
--
craig sanders <cas at taz.net.au>