Forrest
2011-05-25 22:16:00 UTC
I need some advice on handling our new zpool.
We first received the x4540 thumper which came preconfigured with
raidz, which worked fine. Then, someone in our dept decided we needed
Veritas, which ended up not working out (I hate that product
anyway). Now, we're backing out Veritas, flattening the volumes to
free up disk space for creation of a zpool, where the data will be re-
migrated.
I read a couple of articles on "best practices" out there. It seems
that mirroring is the most recommended solution. The server has 30+
terabytes of data, but we are quickly eating it up with the media we
store, which is mostly video files and their associated data. These
volumes are in turn made available internally via NFS for different
purposes.
We've managed to free up these disks for the initial zpool, which I'll
then add others to (presuming mirroring):
c1t2d0s2
c1t3d0s2
c5t2d0s2
c5t4d0s2
c6t3d0s2
c6t4d0s2
c6t6d0s2
c6t7d0s2
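If mirroring is the way to go, my rough plan was to pair these up as two-way mirrors, something like this ("mediapool" is just a placeholder name):

```shell
# Create a pool of four two-way mirrors from the freed-up disks.
# ZFS stripes writes dynamically across all the mirror vdevs.
zpool create mediapool \
  mirror c1t2d0s2 c1t3d0s2 \
  mirror c5t2d0s2 c5t4d0s2 \
  mirror c6t3d0s2 c6t4d0s2 \
  mirror c6t6d0s2 c6t7d0s2

# Later, as more disks are freed from Veritas, grow the pool by
# adding further mirrored pairs (device names here are hypothetical):
# zpool add mediapool mirror c7t2d0s2 c7t3d0s2
```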
I read about building your storage using dynamically striped RAID-Z
groups of (Y / X) devices. Sounds a little complicated to me.
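If I'm reading that right, it just means multiple raidz vdevs striped together in one pool, so with the same eight disks it would be something like:

```shell
# Two 4-disk raidz groups; ZFS stripes across the two vdevs.
# Roughly one disk's worth of capacity per group goes to parity.
zpool create mediapool \
  raidz c1t2d0s2 c1t3d0s2 c5t2d0s2 c5t4d0s2 \
  raidz c6t3d0s2 c6t4d0s2 c6t6d0s2 c6t7d0s2
```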
So before I go in and make a config that I can't easily back out of
(grin), I thought someone out there might have some advice/tips about
the zpool config I build.
Another side issue: we've been using rsync to replicate our data to
the remote system (it runs hourly or so). We're going to back this out
and start sending ZFS snapshots to our remote thumper (a similar
system), but I've read that there are problems maintaining metadata
like NFS handles and such, which must stay consistent on the remote
system (it's a failover). Anyone know the current status of that issue?
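For the replication piece, my understanding is that it would go roughly like this (hostname and dataset names are made up for illustration):

```shell
# Initial full replication: snapshot the dataset and send the
# whole stream to the remote thumper over ssh.
zfs snapshot mediapool/video@2011-05-25
zfs send mediapool/video@2011-05-25 | \
  ssh remote-thumper zfs receive backup/video

# Subsequent runs send only the incremental delta between the
# previous snapshot and a new one.
zfs snapshot mediapool/video@2011-05-26
zfs send -i @2011-05-25 mediapool/video@2011-05-26 | \
  ssh remote-thumper zfs receive backup/video
```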
Thanks in advance!