Building a NAS Pair

Thanks for creating this space, mercenary_sysadmin.

In a way I feel I’m answering my own question, in that if I have to ask it I probably shouldn’t be doing it, but I’m wondering: what does TrueNAS offer over… distro X with zfs-utils installed on it? Is there a “go to” distro for rolling your own ZFS NAS?

I’m hoping to build a pair of NASes, one for my house and one for a family member. We both collect photos due to an interest in photography, and I envision the use case as each of us having our own NAS but setting them up as backup destinations for one another. My question is: how?

For the purposes of this illustration I’m using 3 drives, 8TB each. I need to brush up on my naming conventions here, but I imagine if I build a zpool in RAIDZ1, one drive’s worth of capacity would go to parity, leaving me ~16TB of usable space. On each pool I would then create datasets; assuming no usage other than our photo libraries, I could create a “photos” dataset with a quota of 8TB and another “backup” dataset with a similar quota. This leaves us with 8TB of unique data each, with some redundancy from the vdev and then the backup on top. Is this correct?

If I’m correct so far, what’s the go-to tool (I suspect it’s sanoid or syncoid, which I hear about on a certain podcast a lot) for taking these snapshots and sharing them? Does the snapshot need to exist locally before being sent away, or is it created entirely on the remote destination?

I see a lot of videos and guides on “how to build a NAS”, and others saying “RAID is not a backup”, but I’ve yet to see a turnkey whole-data solution. If I have missed it, I’m sorry and would very much appreciate a push in the right direction.


I think you’re on the right path here.

A pair of RAIDZ1 arrays will work fine, though personally, I’d not bother setting up quotas. If one person has 10TB of images, and the other 2TB… why not let the system use the space it already has? If you end up low on disk space you can just start swapping in new, larger disks.
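For reference, the disk-swap path looks roughly like this (the pool name `tank` and the device names are placeholders for your actual pool and disks):

```shell
# Let the pool grow automatically once all disks in a vdev are larger.
zpool set autoexpand=on tank

# Replace one disk at a time; the pool stays online while it resilvers.
# /dev/sdb is the old 8TB disk, /dev/sdd the new larger one.
zpool replace tank /dev/sdb /dev/sdd

# Watch the resilver progress before moving on to the next disk.
zpool status tank
```

The pool only gains the extra capacity after every disk in the RAIDZ1 vdev has been swapped, so it’s a gradual upgrade path, but it needs no downtime.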

(I should note that an array of spinning disks will not make for a fluid photo-editing experience, certainly not at this number of disks. If you want to be able to edit your photos directly on this array, you’ll want to use SSDs; loading photos, generating previews, saving edits, etc. will otherwise slow you way down. I have tried :roll_eyes:)

Personally, I’d be more descriptive with the vdev and dataset names.

Maybe you live on Elm Street, and your name is Dewy.

Your family member lives on Maple Street, and their name is Goofy.

I’d create server names based on the physical location:

Server 1 - Elm
Server 2 - Maple

I’d then probably create a generic name for the pool that will still make sense in 10 years with feature creep, and call it the same on both… Storage? ZFSPool? In smaller systems that can’t expand or change, I tend to use the array type in the name, like RAID10 or RAIDZ1. That being said, changing this name later is not hard (export the pool and re-import it under a new name), should you need to.

From there I’d make the two datasets, and I’d call them something descriptive: Dewy_Images + Goofy_Images.
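A sketch of that layout on the Elm server (the pool name `storage` and the device paths are placeholders; in practice you’d want stable `/dev/disk/by-id/` names rather than `sdX`):

```shell
# One RAIDZ1 vdev from three 8TB disks; roughly 16TB usable.
zpool create storage raidz1 /dev/sda /dev/sdb /dev/sdc

# Descriptive datasets: Dewy's own photos, plus the replica of Goofy's.
zfs create storage/Dewy_Images
zfs create storage/Goofy_Images
```

The Maple server gets the mirror image: the same pool name, with Goofy_Images as the live dataset and Dewy_Images as the replica.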

Setting the system up this way always makes it really clear to me what I’m doing. If both systems have a ‘photos’ and a ‘backup’ dataset, it’s not obvious which is which. And if you’re logged into Server1 but think you’re on Server2… it’s not hard to issue the wrong command and end up doing something you didn’t want.

With those set up, you just need to figure out how often you want to take snapshots (this is the sanoid portion) and how often you want to sync data to the other site (the syncoid portion).

I use syncoid to send the snapshots sanoid makes on my arrays. The only thing to be ‘careful’ with is to make sure the replica datasets are not also taking snapshots. You want your array to snapshot your photos, but you don’t want to make snapshots of the photos from your family member. Their array should do that, and vice versa.
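The replication itself is a one-liner per direction. Something like this, run from cron or a timer (the pool, dataset, and host names follow the hypothetical example above):

```shell
# Run on Elm: push Dewy's own dataset to the replica on Maple over SSH.
syncoid storage/Dewy_Images root@maple:storage/Dewy_Images

# Run on Maple: the mirror-image job sends Goofy's photos back to Elm.
syncoid storage/Goofy_Images root@elm:storage/Goofy_Images
```

This also answers the earlier question about where snapshots live: syncoid takes a snapshot locally on the sender and then streams it to the destination, sending incrementally from the last snapshot both sides have in common.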

Once you get sanoid set up and take a look at the sanoid.conf file, you’ll understand a little better how that portion works.
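To give a flavor, an illustrative sanoid.conf for the Elm side might look like this (the `production` and `backup` template names follow sanoid’s shipped example config; pool and dataset names are from the hypothetical layout above). Note `autosnap = no` on the replica, which is exactly the “be careful” point: Maple snapshots Goofy’s photos, Elm just prunes the copies it receives.

```
[storage/Dewy_Images]
	use_template = production

[storage/Goofy_Images]
	use_template = backup

[template_production]
	frequently = 0
	hourly = 36
	daily = 30
	monthly = 3
	autosnap = yes
	autoprune = yes

[template_backup]
	frequently = 0
	hourly = 30
	daily = 90
	monthly = 12
	autosnap = no
	autoprune = yes
```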

Keep asking questions as you go along and I’m sure plenty of people will be happy to offer some guidance. Each of us used ZFS for the first time at some point…


Yep, all of this.

The major reason to potentially consider a full-on NAS distro is if you’ve got Windows clients, and Windows users who expect setting file permissions to work properly when set from the GUI in Windows File Explorer. This is surprisingly non-trivial to get right, and going with TrueNAS or (my preference) XigmaNAS makes that just work right out of the box.

You can (and I would recommend you do) install and use sanoid and syncoid on either TrueNAS or XigmaNAS, if you want some of the best parts of both worlds.

Personally, I tend to go with vanilla Ubuntu on the bare metal, and a XigmaNAS VM running with UFS2 internally where I need a NAS distro. But that’s because I generally have multiple workloads running across multiple VMs. If your needs are simpler, it makes more sense to run a ZFS-enabled NAS distro directly on the metal.


Could you elaborate on why you prefer XigmaNAS over TrueNAS?

I know that both TrueNAS SCALE and Proxmox VE 8.0 are based on ZFS v2.2.0.

I could not get that info for XigmaNAS (except some mention of the v5000 ZFS feature set).

XigmaNAS is much more barebones and I find its interface considerably more responsive and reliable.

XigmaNAS is much, MUCH closer to vanilla FreeBSD than truenas is. To find the ZFS version used, essentially just look up which version of FreeBSD the current download is based on. It (again, unlike truenas) will be using the standard FreeBSD core version of ZFS for that release.

iX likes to build its own custom ZFS modules. This may give you access to performance improvements early, but it can also expose you (and has exposed people) to bugs you’d never experience in a distro build. It can also leave your pool “trapped” in truenas, requiring feature flags not supported anywhere BUT truenas. I’m a lot happier with a standard pool that I can, if I like, export and import directly to and from multiple host operating systems.

Is exporting your pool and importing it into a different operating system something you do every day? Heck no, it’s not even something I do every year. BUT, if you ever have a really ugly failure of some kind, it’s VERY comforting to be able to immediately access your data after booting into a live FreeBSD or Ubuntu environment.
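In that disaster scenario, the recovery is about three commands from the live environment (pool name again a placeholder):

```shell
# List pools visible on the attached disks.
zpool import

# Force-import, since the dead host never cleanly exported the pool.
zpool import -f storage

# And your datasets are right there.
zfs list -r storage
```

That only works if every feature flag on the pool is supported by the ZFS version in the live environment, which is exactly the portability argument above.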