Upgrade path for a fileserver/NAS

I am running TrueNAS Scale for my fileserver needs. It is an old desktop machine with the following configuration:

  • ASUS Z97 motherboard
    • 1 M.2 slot + 6 SATA ports (not sure all 7 can be used at the same time)
    • Realtek onboard 1000BASE-T NIC
  • Intel i5-4440
  • 12 GB of RAM (2 x 2 GB + 2 x 4 GB)
  • 500 GB SSD boot disk
  • ZPool "fatagnus":
    • mirror-0: 2 x 3 TB 3.5" HDD
    • mirror-1: 2 x 4 TB 3.5" HDD

Datasets:

  • home
  • isos
  • media
  • vmdata

The pool is currently at 95.3 % utilization (I know - even TrueNAS is complaining).

I have two Proxmox boxes, currently running only LXC containers, but they will also be running VMs in the future. Storage for said containers and VMs is/will be on this TrueNAS box.

What would be a good upgrade path here, both with regard to hardware and pool design?

With sound advice from @mercenary_sysadmin in another thread I have an LSI 9300-8i HBA on order - it should hit my doorstep in a few days.

I have a 4 TB 2.5" HDD and an 8 TB 3.5" HDD. The 4 TB 2.5" is most likely an SMR drive, so it might cause performance issues when mixed with non-SMR drives, I guess?

Upgrade step 1:
When the HBA arrives: install it.

My plan is to get a (used) 8+ TB drive to set up in a mirror with the 8 TB drive I already have and copy all the data from the current zpool over. I am debating whether to do a local ZFS send/receive versus creating the datasets on the destination pool and rsyncing the files. The former is faster and easier, both in preparation and execution, and will preserve snapshots; the latter will be slower and will lose snapshots, but will allow me to create sub-datasets. Either way, this zpool will be a temporary placeholder pool for the data. The 8 TB HDD and the 8+ TB HDD will be connected to the HBA, and the remaining drives to the on-board SATA controller.
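For the send/receive route, my understanding is that it would boil down to something like this (the pool name "temppool" is just a placeholder for the temporary mirror):

    # take a recursive snapshot of the source pool, then replicate it
    zfs snapshot -r fatagnus@migrate
    zfs send -R fatagnus@migrate | zfs receive -Fu temppool
    # -R carries all child datasets, snapshots and properties; -u skips mounting on receive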

Step 2:
Move the 4 3.5" HDDs to the HBA together with the 4 TB 2.5" HDD and possibly a used 6+ TB 3.5" HDD and set them up in a RaidZ 2 pool. Move the data from the temporary pool to the new RaidZ2 pool.

Depending on whether I can get a used 6+ TB drive for a reasonable price, the new pool will be either a 5-disk RAIDZ2 pool with 9 TB of storage or a 6-disk RAIDZ2 pool with 12 TB of storage.
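On the CLI, I imagine creating that RAIDZ2 pool would look roughly like this (the pool name and device IDs are placeholders; in TrueNAS I would most likely do it through the GUI instead):

    zpool create -o ashift=12 tank raidz2 \
      /dev/disk/by-id/ata-DRIVE1 /dev/disk/by-id/ata-DRIVE2 /dev/disk/by-id/ata-DRIVE3 \
      /dev/disk/by-id/ata-DRIVE4 /dev/disk/by-id/ata-DRIVE5 /dev/disk/by-id/ata-DRIVE6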

Step 3:
If getting a 6+ TB HDD:
Replace the smallest drive in the pool (3 TB) with the smaller of the 6+ TB and 8 TB drives and let it resilver. Then replace the remaining 3 TB drive with the other drive and let it resilver. Finally, autoexpand to provide 12 TB of storage. Final configuration: 1x 8 TB + 1x 6+ TB + 2x 4 TB 3.5" + 1x 4 TB 2.5" + 1x 3 TB.
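If I understand the procedure correctly, the replace/resilver/autoexpand dance would be something along these lines (pool and device names are placeholders, and each resilver has to finish before the next replace):

    zpool set autoexpand=on tank
    zpool replace tank ata-OLD-3TB-1 ata-NEW-BIG-1
    zpool status tank            # wait for the resilver to complete
    zpool replace tank ata-OLD-3TB-2 ata-NEW-BIG-2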

If not getting a 6+ TB HDD:
The RAIDZ2 pool would then be a 5-disk array - extend it with the 8 TB drive (RAIDZ expansion) to provide 12 TB of storage. Final configuration: 1x 8 TB + 2x 4 TB 3.5" + 1x 4 TB 2.5" + 2x 3 TB.
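My understanding is that RAIDZ expansion (available in recent OpenZFS/TrueNAS releases) works by attaching the new drive to the existing RAIDZ2 vdev, roughly like this (pool and vdev names are placeholders):

    zpool attach tank raidz2-0 /dev/disk/by-id/ata-NEW-8TB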

How can I improve my setup from here, apart from adding more TB of storage through bigger drives? From what I read, an L2ARC on one or more SSDs might not add any speed? Adding a metadata/small-blocks vdev would require at least a 2-way mirror, as losing it would mean losing the pool.
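If I ever go the special vdev route, my understanding is that it would be added as a mirror roughly like this (pool, dataset and device names are placeholders):

    zpool add tank special mirror /dev/disk/by-id/ata-SSD1 /dev/disk/by-id/ata-SSD2
    # optionally route small blocks to the special vdev on a per-dataset basis
    zfs set special_small_blocks=16K tank/vmdata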

For me 3 main areas would be of interest, currently:

  1. Add more RAM to allow for more ARC space
  2. Set up an SSD based, mirrored zpool for vmdata
  3. Add NICs to increase network bandwidth

Ad 1) More RAM
This motherboard is upgradable to 4 x 8 GB of DDR3 - going beyond that requires a newer-generation CPU/motherboard, which would likely also bring more PCIe lanes, faster memory (DDR4/DDR5) and faster PCIe (4.0/5.0). And with the right CPU, the same or maybe even lower power draw.
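As far as I can tell, the ARC ceiling on Linux/OpenZFS is the zfs_arc_max module parameter, so the tweak would be something like the following - though I assume TrueNAS SCALE may manage or override this on its own (the value is just an example for a 32 GB box):

    cat /sys/module/zfs/parameters/zfs_arc_max                    # 0 means the built-in default
    echo 25769803776 > /sys/module/zfs/parameters/zfs_arc_max     # cap ARC at 24 GiB (24 * 1024^3)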

Ad 2) SSD based pool for vmdata
This will probably improve both IOPS and throughput - though in practice it may be limited by network bandwidth.

Ad 3) Add NICs
I am planning to add dual SFP+ NICs to both the TrueNAS machine and one of the Proxmox boxes. The more powerful of the two Proxmox boxes is a Lenovo Tiny, so SFP+ / 10GBASE-T is out of reach, but an additional 2.5GBASE-T NIC might be possible through an A+E-key M.2 slot.

Currently I only have gigabit networking, but a DAC SFP+ cable from TrueNAS to one Proxmox box, plus an SFP+ RJ-45 transceiver and the above-mentioned M.2 2.5GBASE-T NIC, would be a temporary solution before getting something like the MikroTik CRS309-1G-8S+IN as the core switch.


I don't have any ZFS tweaking/tuning experience - could anybody advise on that, as well as on the pros and cons of the above, perhaps with a prioritised list of upgrades?

The biggest performance bump you can possibly get here is hyperconvergence: ditch the separate TrueNAS/Proxmox setup and store Proxmox’s VMs directly on the same machine, eliminating the gigabit throughput and latency bottlenecks that are currently hampering your VMs.

Essentially, I’m recommending that you do go ahead and upgrade that CPU/mobo/RAM, and that you do so with enough firepower to handle both storage and virtualization. Doing so will speed up your VMs immensely.

1 Like

Would you recommend achieving this by virtualizing TrueNAS or importing the zpool and managing via the proxmox host?

1 Like

There’s no one easy, honest pat answer to that.

For some people, the answer is to run TrueNAS on its own box with its own storage, and run the VMs on the Proxmox box with its own storage.

For some other people, the answer will be to virtualize TrueNAS beneath Proxmox. There are some potential performance dragons there, though, in the nested ZFS. ESPECIALLY nested beneath the way Proxmox will set up your VMs, if you aren’t knowledgeable enough to make it do better. (Mostly, this means you want larger volblocksize. But it also means not setting up your pool badly for small block random I/O in the first place, and Proxmox does absolutely nothing to educate anyone about that that I’ve seen, so…)
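For example, a zfspool storage entry in /etc/pve/storage.cfg can set a larger blocksize for newly created zvols - something like this (storage and pool names are placeholders, and existing zvols keep whatever volblocksize they were created with):

    zfspool: tank-vm
            pool tank/vmdata
            blocksize 64k
            sparse 1
            content images,rootdir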

For somebody else, the answer might be to ditch Proxmox entirely, use TrueNAS Scale instead of TrueNAS Core, and just run VMs and storage alike on Scale. I get the impression that’s not a very popular way to do it right now, and I get the impression that’s due to Scale having some bugs, but I’m not speaking authoritatively on anything other than “that’s not very popular right now.”

1 Like

Quoting @mercenary_sysadmin:
Essentially, I’m recommending that you do go ahead and upgrade that CPU/mobo/RAM, and that you do so with enough firepower to handle both storage and virtualization. Doing so will speed up your VMs immensely.

Absolutely! My problem is that I do not have the budget for it at the moment. I did a quick “buy” at one of the Danish online stores: a Z790-chipset motherboard with 32 GB of DDR5 (2x 16 GB) and an i5-12400 would cost me somewhere close to USD 500. The HBA with cables already has me down USD 150, and getting one or two used drives will probably sink me another USD 125-150. But yes, in the future that is definitely something I am looking into.

And actually I will put it ahead of the 10 Gbit network upgrade as well :slight_smile:

1 Like

Quoting @charles:
Would you recommend achieving this by virtualizing TrueNAS or importing the zpool and managing via the proxmox host?

As I am serving files directly to other machines beyond just the containers and VMs, I will virtualize the TrueNAS server, using NVMe and/or SATA SSDs for the TrueNAS VM boot disk and passing the HBA PCIe card through to the TrueNAS VM. Storage for VMs and containers would then be provided either as NFS or as ZVOL/iSCSI from TrueNAS, depending on the data.
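On the Proxmox side, the passthrough itself should boil down to something like this, assuming a hypothetical VM ID of 100 and a hypothetical PCI address (IOMMU needs to be enabled first, and the real address comes from lspci):

    lspci | grep -i sas                          # find the HBA's PCI address
    qm set 100 --hostpci0 0000:03:00.0,pcie=1    # pcie=1 requires the q35 machine type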

The trick will be how to divide the RAM in order to maximize performance. I guess most of the RAM would end up going to the TrueNAS VM to provide a good ARC.

I’ll throw out another suggestion- ditch TrueNAS entirely.

I had wanted a hyperconverged setup and looked at using either Scale or Proxmox alone. For my use case, I wanted/needed VMs/LXCs (and all that that entails) far more than I needed the pretty GUI for ZFS. And IMO Scale’s VM GUI is not just mediocre but honestly bad.

I’ve happily run Proxmox with ZFS just fine for years (thanks Jim for sanoid/syncoid!) with TrueNAS Scale as a backup box for it all (and then so on and so forth to the cloud with encrypted backups, yada yada yada.)

It’s a big jump from how Scale holds your hand with pool/storage management, but for me at least it was the right one.

2 Likes

I may get there in the future :slight_smile: For the time being, I am sticking with the separate Scale box. It is a bit too old to run hyperconverged, mainly because of the 32 GB RAM limit. The LSI 9300 is arriving tomorrow - and as I will have the box cracked open anyway, I will swap in two 8 GB sticks for the two 2 GB sticks, doubling the RAM and, with a tweak, almost tripling the memory available for the ARC.

I can get an 8th-gen i3/i5 based HP EliteDesk 800 SFF for about USD 200, but it won’t fit the disks, and I am not sure I can transplant the motherboard as it is not a standard board/PSU. NewEgg/Craigslist is not a viable option (AFAIK) in Denmark.

From my initial look at Proxmox and ZFS, it seemed like an uphill battle setting up and managing ZFS pools, datasets and shares. I could probably set it up on the CLI, but it also comes down to day-to-day admin. I have yet to take a good look at IaC like Terraform.
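From what I can tell, the CLI side of a dataset plus an NFS share would not be much more than something like this (the dataset name and subnet are placeholders, and sharenfs needs the NFS server installed on the host):

    zfs create -o compression=lz4 tank/media
    zfs set sharenfs="rw=@192.168.1.0/24" tank/media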

1 Like

So I received the HBA - it is based on a Broadcom 3008 chip; I am not sure whether that was the original chip of the LSI 9300-8i or its successor, but I am looking forward to putting it to use. Unfortunately I did not check which connectors are on this HBA, so I ordered the wrong cables to connect the drives to it. If anyone needs SFF-8087 to SFF-8482, I have two such cables :slight_smile: The replacement cables should arrive on my doorstep this coming Tuesday.

I am going to swap in 2 x 8 GB RAM modules for the two 2 GB modules; doubling the total RAM should give the ARC a boost. I am almost tempted to try hyperconvergence, running Proxmox on the box with TrueNAS Scale as a VM under it, but I am not sure hyperconvergence is worth it on this old 4th-gen i5 versus the Ryzen 5 PRO 3400G it is running on now. The two currently running containers (Nextcloud + Jellyfin) at their current settings would eat up the extra RAM, but would not be limited by network speed… Still debating that path internally…

So on to the pool design:
My plan is to move the storage pool to a 6-drive RAIDZ2 - with the HDDs I currently have on hand, that would be populated with 8+4+4+4+3+3 TB drives, for a total of about 12 TB. I may get another 8 TB HDD to replace one of the 3 TB drives - that won’t increase the pool size, but it would bring me one step closer to 16 TB of storage, and it would free up a 3 TB drive for my backup server.

Additionally I would like a 2-disk mirror for container/VM storage. I need to take a closer look at which kinds and sizes of SSDs I actually have. Ideally this mirror would be 1+ TB SSDs, but that won’t happen right now. If I have 2 x 480+ GB SSDs, I might go that route directly. If not, I will set up a 2 x 1 TB HDD mirror instead - it should still be faster than the RAIDZ2. Would it be possible to transition from the 2 x HDDs to 2 x SSDs one SSD at a time? And would it affect the efficiency of this mirror once it has fully transitioned to SSDs? Or to put it more clearly: does ZFS set up a 2-way mirror differently when doing so with 2 x SSDs versus 2 x HDDs? And if so, is this a permanent difference in behaviour, or will it change when the last HDD leaves the pool?
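My naive plan for that transition would be a one-at-a-time replace, something like this (pool and device names are placeholders, and each resilver has to finish before the next swap):

    zpool replace vmdata ata-HDD1 ata-SSD1
    zpool status vmdata          # wait for the resilver to finish
    zpool replace vmdata ata-HDD2 ata-SSD2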

Furthermore, I plan to host support vdevs for the pool(s) on the onboard SATA ports, as I have used up all 8 drive ports on the HBA with the above. Would it be better to move the vmdata pool to the onboard SATA ports and put an L2ARC SSD on the HBA instead?
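If I do go the L2ARC route, adding (and later removing) a cache device should be as simple as this, if I have understood it correctly (pool and device names are placeholders; L2ARC holds no unique data, so a single device is fine):

    zpool add tank cache /dev/disk/by-id/ata-CACHE-SSD
    zpool remove tank /dev/disk/by-id/ata-CACHE-SSD    # can be removed again at any time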

And with regard to the two “channels”: would it be better to split the two pools’ drives evenly across the two mini-SAS breakout cables, so that each has 3 storage-pool drives plus 1 vmdata-pool drive, or does that not matter?

Sorry for such a lengthy post on a late Friday evening :sunglasses: