Upgrading my pool but I don't know what RAID level I should go with

Long story short: I have 3 drives in my pool running in a mirror, and today I'm picking up a new drive with the same capacity (4TB). I want to move my pool from a mirror to RAID, but I'm not sure what RAID level I should go with. I will also need the new drive to transfer data to and from; will I be able to expand my RAID afterwards to include it?

Holy cow! I just reread your question and somehow (in my morning caffeine deficit) I thought you were asking how to do this.

To answer your question… What level of risk are you willing to expose your data to? How diligent will you be about monitoring the drive and pool status? I recall a famous Internet “expert” who didn’t and wound up losing a lot of data – or something like that. I don’t follow him. I like RAIDZ2 for stuff I don’t want to lose because up to two drives can fail and I don’t lose anything. If you have solid backups, you may be comfortable with RAIDZ1. (And I want to emphasize that RAID is not a backup - there are ways you can lose the entire pool at once.) My RAIDZ2 pool gets a local copy and the most important stuff is sent to a remote backup.

My strategy for increasing the size of a pool is to replace drives one at a time with larger drives. When all drives have been replaced with larger drives, you can expand the pool. I do this over a period of years. My RAIDZ2 pool based on 4TB drives now has 2x 6TB drives and 3x 4TB drives. It’s at about 50% capacity and I replace a drive once or twice a year. I can accelerate that if needed. I have two 6TB HDDs on the shelf. I might have another squirreled away.
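
For reference, the replace-and-grow sequence looks roughly like the following. This is just a sketch with placeholder names (tank and the by-id paths are made up), not a recipe for your exact pool:

# let the pool grow on its own once every drive in the vdev is larger
zpool set autoexpand=on tank

# swap drives one at a time and let each resilver finish
zpool replace tank /dev/disk/by-id/wwn-0xOLD4TB /dev/disk/by-id/wwn-0xNEW6TB
zpool status tank

# if autoexpand was off, expand onto the new space manually
zpool online -e tank /dev/disk/by-id/wwn-0xNEW6TB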

There is a relatively new feature in ZFS that supports adding a drive to an existing RAIDZ vdev (RAIDZ expansion), but I consider it experimental until others have proved it’s bulletproof. The “replace with a larger drive” approach is bulletproof.
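
For completeness, that expansion works by attaching a new disk to the existing raidz vdev. A minimal sketch, assuming you're on an OpenZFS release that actually ships RAIDZ expansion and using placeholder names:

# attach a new disk to the existing raidz2 vdev (RAIDZ expansion)
zpool attach tank raidz2-0 /dev/disk/by-id/wwn-0xNEWDISK
zpool status tank   # shows expansion progress, then the wider vdev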

What RAID level do you want to use? Do you have other backups of the data? Assuming RAIDZ2 and no, here’s how I would proceed.

  1. Perform a scrub. Make sure everything is working at 100%.
  2. Disconnect one of the drives from your mirror. You still have redundancy.
  3. Physically disconnect the remaining two drives to prevent an accidental operation that renders them unusable.
  4. Create a degraded RAIDZ2 with the new drive and the drive you removed in step 2. You can use sparse files for the third and fourth drives and offline them immediately. zpool status should report that the pool is DEGRADED, but it should still be working. Use a different name for the new pool than the original; you can rename it later.
  5. Reconnect the drives for the original pool and import if that is not automatic.
  6. Use ZFS send/receive to copy the old pool to the new pool (see the sketch below).
  7. Scrub the new pool. The next step will leave you without redundancy within either pool, though the two pools will be redundant copies of each other.
  8. Physically disconnect one of the old drives from the old pool. This serves as your last chance backup should an error make the new pool unusable.
  9. Replace one of the file-based drives with the drive still connected to the old pool (see the sketch below). You may have to use wipefs to remove the ZFS formatting info from this drive. BE ABSOLUTELY CERTAIN YOU DO THIS ON THE CORRECT DRIVE.
  10. When the added drive has completely resilvered, ZFS should run a scrub. Wait for this to complete, check for errors and then repeat the process with the last drive.
  11. Profit! Or bask in the glow of a successful job completed w/out ever sacrificing redundancy.
  • Be sure to keep notes recording each and every command you enter. If something you don’t expect happens, these notes will help you figure out what went wrong and how to fix it.
  • Study the zpool commands to be sure you know which ones you need to use. There are “wrong” ways to do this that will not get you where you want. Feel free to post the commands here before you execute them for a cross check. You don’t want to be in a hurry to get this done.
  • If you have to tweak a command to get it to work, be sure to tweak the prototype in your notes before executing and proceeding, or your notes will not reflect what you did. DAMHIK.
  • If you are planning a RAIDZ1, these instructions can be tweaked to accomplish that, but you will need to start with a 3-drive “new” RAID, leaving you temporarily w/out redundancy. Depending on your aversion to risk, you might want to buy another drive so you can retain redundancy through the process.
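
To make steps 6 and 9 concrete, here is roughly what those commands look like. This is only a sketch with made-up pool and drive names (oldpool, newpool, wwn-0xOLDDRIVE), so substitute your own and triple-check every device path before you press Enter.

# step 6: snapshot the old pool and replicate everything to the new pool
zfs snapshot -r oldpool@migrate
zfs send -R oldpool@migrate | zfs receive -Fu newpool

# step 9: clear the old ZFS labels from the freed-up mirror drive,
# then swap it in for one of the sparse-file placeholders
wipefs -a /dev/disk/by-id/wwn-0xOLDDRIVE
zpool replace newpool /tmp/fakedisk1 /dev/disk/by-id/wwn-0xOLDDRIVE
zpool status newpool

A recursive snapshot plus zfs send -R carries the datasets, their snapshots and most properties over in one pass, which is why I prefer it to copying files around.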

Got a little long! One of my favorite subjects and as they say… Don’t get him started!

Before I start, I have a question about step 4. You said:

You can use sparse files for the third and fourth drives and offline them immediately.

How do I create those files?

According to my notes I used

truncate -s 4TB /tmp/fakedisk1

Then

root@oak:~# zpool create -f -o ashift=12 \
>     -O acltype=posixacl -O canmount=off -O compression=lz4 \
>     -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \
>     -O mountpoint=/ -R /tank tank raidz2 \
>         /dev/disk/by-id/wwn-0x5000cca24cd16ba9 \
>         /dev/disk/by-id/wwn-0x50000397cb70574d \
>         /dev/disk/by-id/wwn-0x50000397cbb838ff \
>         /dev/disk/by-id/wwn-0x5000cca24cc98d24 \
>         /tmp/fakedisk1
root@oak:~# zpool offline tank /tmp/fakedisk1
root@oak:~# zpool status tank
  pool: tank
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
    Sufficient replicas exist for the pool to continue functioning in a
    degraded state.
action: Online the device using 'zpool online' or replace the device with
    'zpool replace'.
  scan: none requested
config:

    NAME                        STATE     READ WRITE CKSUM
    tank                        DEGRADED     0     0     0
      raidz2-0                  DEGRADED     0     0     0
        wwn-0x5000cca24cd16ba9  ONLINE       0     0     0
        wwn-0x50000397cb70574d  ONLINE       0     0     0
        wwn-0x50000397cbb838ff  ONLINE       0     0     0
        wwn-0x5000cca24cc98d24  ONLINE       0     0     0
        /tmp/fakedisk1          OFFLINE      0     0     0

errors: No known data errors
root@oak:~#

I was busy for the last couple of days, so I haven't gotten around to it until now. Is there a way to copy all of the settings of the old mirror pool to the new RAIDZ2 pool, or at least to list them so it's easier to set them on my new pool? As I have no experience with the zpool create command, I tried to follow some of the notes you attached to see what I needed in my pool, but there were a couple of things where I didn't understand what they do, and some where I don't know what I should set them to.

I haven't gotten around to it until now.

This is something you don’t want to rush. Do your research, plan carefully, then execute.

Is there a way to copy all of the settings

zpool get all
zfs get all

There are a lot of properties and, for the most part, the defaults are good. The one you want to get right is ashift, which sets the sector size the pool assumes (as a power of two). Many drives report their sector size as “512B logical, 4096B physical” and you want ashift=12 to match the 4K physical sectors. Once this is set at pool creation, it cannot be changed.
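
If you want to check what your drives report before creating the pool, something like this works on Linux (the pool name is just a placeholder):

# logical and physical sector sizes for every block device
lsblk -o NAME,LOG-SEC,PHY-SEC

# after creation, confirm what the pool actually used
zpool get ashift tank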

As for the rest of the settings, I have generally followed the recommendations provided with the Debian ZFS on root instructions found at https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Bookworm%20Root%20on%20ZFS.html#step-2-disk-formatting. The folk who wrote this up have put more thought into the settings than me so I use that as a starting point. There is also an explanation of the rationale for their choices that I find helpful.

TL;DR: I’ve never tried to copy all the settings; I only set what made sense at the time of pool or filesystem creation (zpool create or zfs create).
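
If you want to see only the properties that were explicitly set on the old pool, rather than every default, this is the sort of thing I'd try (oldpool is a placeholder):

# dataset properties that were set locally (not inherited or defaulted)
zfs get -s local -r all oldpool

# pool-level properties; look for "local" in the SOURCE column
zpool get all oldpool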

Don’t hesitate to keep asking until you feel you understand what you need to know.

I tried reading the OpenZFS documentation for zfs send/receive and I got really confused. Can you please give me an example of the command you used?

What I did when I wasn’t sure how something would work (and to work out the kinks) was to run small scale experiments. You can create pools from disk files and perform operations with those. I enshrined some of these on GitHub at https://github.com/HankB/Fun-with-ZFS. I highly recommend you work through some of these to make sure you are familiar with the commands and understand what is happening. I’d suggest running the commands used in the scripts by hand, one at a time, to see what each does rather than just blasting through the script.
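
For example, a throwaway pool built from small files looks something like this (everything here is disposable and the names are arbitrary):

# two 1 GB files standing in for disks
truncate -s 1G /tmp/test1 /tmp/test2

# build a disposable mirror on them and experiment freely
zpool create testpool mirror /tmp/test1 /tmp/test2
zpool status testpool

# tear it down when finished
zpool destroy testpool
rm /tmp/test1 /tmp/test2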

The bulk-transfer exercise comes closest to what (I think) you are planning. I would also suggest exploring this using syncoid. Since I discovered syncoid I almost never use the ZFS send/recv commands directly.
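
To give you a flavor of the difference, a one-off copy of a single dataset could look roughly like this (the dataset names are made up; check the sanoid docs for the options that fit your case):

# plain zfs send/receive of one snapshot
zfs snapshot oldpool/data@copy1
zfs send oldpool/data@copy1 | zfs receive newpool/data

# roughly the same copy with syncoid, which creates and manages
# the snapshots for you
syncoid oldpool/data newpool/data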

One thing you could do is fork that repo and get bulk-transfer working with syncoid. I’d consider a pull request for that. (But please add it as a separate exercise, e.g. bulk-transfer-syncoid, in order to preserve the original.)

Working with a couple of file-based pools that you throw away when you’re done is a lot less risky than experimenting on your full pools. And the operations complete a lot faster, giving you immediate feedback.

I read about syncoid and it's exactly what I wanted. But when I tried to use it, it complained about a Perl module called Tiny.pm not being installed, and in an issue on the sanoid GitHub a user said to install libcapture-tiny-perl from the Ubuntu repositories, but apt is disabled on TrueNAS. How can I install this module on TrueNAS?

I wrote this in a rush, so I forgot that Perl probably has a module manager…

New issue: syncoid needs Perl modules, and installing those requires make and more things that aren't available, so I will use zfs send/receive.

P.S. When I have free time I will build a small test bench with extra hardware and experiment more with FreeBSD, ZFS and syncoid, and try to make a pull request, but I'm not super familiar with shell (or programming in general), so it might take some time or not happen at all. I'll try my best.

I’m going to guess that you’re using the BSD variant of TrueNAS, which I’m not familiar with. You’ll need to search for instructions on how to install that for your OS.

I am using the Linux version (SCALE 24.04.2). Like I added later, I was in a rush when I wrote the reply; the module is called Capture::Tiny, not Tiny.pm. Tiny.pm is a file included with the module that the script was looking for.

Have you looked at the install instructions at https://github.com/jimsalterjrs/sanoid/blob/master/INSTALL.md#debianubuntu?

apt/dpkg are disabled on TrueNAS SCALE.

I have no idea how to install sanoid on TrueNAS SCALE.

It's getting late. Tomorrow I will ask on the TrueNAS forums whether it's something that I can do, and if not I will use zfs send/receive. Anyway, I want to thank you for your help over the week and a half it took me to do this.

You’re welcome - hope I helped.

Have you tried installing the missing modules using CPAN, inside a Perl shell itself?

https://www.cpan.org/modules/INSTALL.html
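
For instance, with the module named in the error above (this assumes the cpan client that ships with Perl is available):

# one-shot install of the missing module
cpan Capture::Tiny

# or interactively
perl -MCPAN -e shell
# then, at the cpan> prompt:
install Capture::Tiny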

Yes, but TrueNAS doesn't include make. I tried to build make on the TrueNAS system, but it's missing the compilers.

This is pretty faint assistance, since I’m not personally a TrueNAS user… but it looks like the correct approach is to do it inside a jail that you grant access to the pool as a whole.

I would absolutely LOVE a step-by-step write-up of how to do this, if you get it working. I know there are a lot of people out there using Sanoid and Syncoid with TrueNAS, but I am positive there are a lot more facing the same issues that you are!

To be clear, I don’t think you need to do the whole CPAN thing if you’re doing this inside a jail; you should be able to just pkg install sanoid and be off to the races, once you’ve granted the jail direct access to the pool.