Recommendations for a fun setup - PowerEdge R630

Just found out about the mass protest on the Proxmox subreddit. I am happy to bring my post here if it helps.
original post.

I’ve won an R630 with 8 bays for pennies in an online auction. My plan was to run OPNsense bare-metal, but the server came with tons of cool stuff :slight_smile:

- 3x SAS SSD 400GB (12Gbps)
- 1x SAS SSD 450GB (12Gbps)
- 1x SAS SSD 800GB (12Gbps)

- Dual 10Gbps SFP+
- Dual 1Gbps Ethernet

- PERC H730 Mini
- iDRAC8 Enterprise…

the whole load…
I am not sure how to make the most of it. I have added a couple of 500GB NVMe drives (IronWolf 510), and I have a NAS for PXE boot and for storing the bulk of the VM data over iSCSI, as well as for rsync backups.

Proxmox sounds like a cool option but, after reading through Reddit and other forums, I am finding it impossible to decide on the filesystem and architecture, mainly because I do not have any strong requirements or expectations. I would be happy with a system that does not trash drives and is stable (because I’d rather not spend a fortune on SAS drives). Capacity is not an issue because of the NAS, and performance should be good in any case with this hardware, right?

I had the following in mind:
- 2x 400GB in RAID0
- 1x 400GB hot spare for the RAID0
- 1x 800GB in RAID1 with the RAID0
- no clue what to do with the 2x NVMe drives and the 450GB SAS SSD

With Btrfs I would use the NVMe drives as cache and mirror the volumes, but it seems like ZFS is the Proxmox way. ZFS seems picky about drive sizes, though, and I am afraid of shortening the hardware’s life with so much redundancy…

What would you do? This is a homelab, and the craziest ideas are welcome :slight_smile:

Cheers

UPDATE: NVMe specs

What you probably want to do here is a pool of mirrors, which is conceptually somewhat like RAID10.

2x 400GB: that’s one 400GB mirror
1x 400GB + 1x 450GB: that’s another 400GB mirror (a mirror vdev is sized to its smallest disk)

And perhaps use the 800GB as a SPARE vdev. That would leave you with a single pool with 800GB of usable capacity after redundancy. Alternatively, you could buy another 800GB drive, in which case you could set up a pool with three mirrors:

2x 400GB
1x 400GB + 1x 450GB
2x 800GB

for a total usable capacity of 1600GB.
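Purely as a sketch, the create command for that three-mirror layout would look something like this (pool name and disk IDs are placeholders; use your real /dev/disk/by-id/ paths rather than /dev/sdX):

# one pool, three mirror vdevs; data is striped across the mirrors
zpool create tank \
  mirror /dev/disk/by-id/scsi-400G-1 /dev/disk/by-id/scsi-400G-2 \
  mirror /dev/disk/by-id/scsi-400G-3 /dev/disk/by-id/scsi-450G-1 \
  mirror /dev/disk/by-id/scsi-800G-1 /dev/disk/by-id/scsi-800G-2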

What would you do? This is a homelab, and the craziest ideas are welcome

I don’t actually think this would be “crazy” at all, but personally, I’d set it up as a Sanoid server with standalone KVM virtualization, using virt-manager or Cockpit to manage the VMs. You get considerably more flexibility that way, and you avoid some storage performance issues that Proxmox’s preferred setup (which is very difficult to change significantly while still using Proxmox) tends to lead to.

Depending on the NVMe drives, you might be able to use those as CACHE and LOG vdevs (read accelerator and sync-write accelerator, respectively), but I’d want to know more about what kind of NVMe drives you have. There’s a widespread naive belief that anything NVMe is faster than anything SATA/SAS, but that very, very much depends on both the individual drives on each side of the comparison and on the workload to be served.
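Assuming yours turn out to be suitable, adding them later is non-destructive and looks roughly like this (placeholder device paths again; LOG vdevs are often mirrored, since losing one during a crash can cost you the most recent sync writes):

# sync-write accelerator (SLOG)
zpool add tank log /dev/disk/by-id/nvme-ironwolf510-1
# read accelerator (L2ARC)
zpool add tank cache /dev/disk/by-id/nvme-ironwolf510-2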

A couple of years ago I lucked into an R420 with two Xeons, 32GB of ECC RAM, and 2x 300GB 15K screamers, probably SAS drives. It sounded like a 747 getting ready to take off until I adjusted the fan profile. I installed Linux on the drives and returned them. (It was a retired host from where my son works, and I’ve been nudging them in the direction of Linux.)

I replaced the drives with two 8TB HDDs configured as a ZFS mirror. Eventually I added an SSD to use as a boot drive, since I needed more than 300GB of storage. This is now my remote backup and, except for one messed-up (unattended) ZFS upgrade, it has been solid.

I suspect that an automatic kernel update stalled on the ZFS license warning and sat at that prompt until a power failure forced a reboot, which could not succeed because of the interrupted update. When I was next on site, I hooked up a terminal and keyboard, completed the upgrade, and it came right up. At that point I added the SSD, installed Debian on it, and configured a VM (QEMU/KVM) that I could manage remotely via SSH, with ZFS installed in the (Debian) guest. I forwarded the two HDDs to the guest so it had direct control.
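For anyone replicating this, forwarding whole disks with libvirt can be done along these lines (the domain name and disk IDs here are placeholders, not my actual values):

# pass each HDD through by its stable by-id path
virsh attach-disk backup-guest /dev/disk/by-id/ata-8TB-1 vdb --persistent
virsh attach-disk backup-guest /dev/disk/by-id/ata-8TB-2 vdc --persistent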

I have no experience with Proxmox. I’ve used Debian for years (potato? hamm?) and find it pretty solid.

Here are the specs for the 400GB SAS SSD:

@mercenary_sysadmin I think this is what you proposed?



Partitions have funny numbering (up to 9) because I played around and rebuilt it a few times, but the setup was quite straightforward. I installed Proxmox on the first mirror (ZFS RAID1) and later added the other mirrors from the CLI:

zpool add -f rpool mirror /dev/disk/by-id/scsi-35002538a474324f0 /dev/disk/by-id/scsi-358ce38ee214ed069
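The resulting layout can be verified after each step with:

zpool status rpool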

I added the last 800GB disk on its own, until I find a cheap complement for another mirror.
As per your latest post What is a zpool/vdev?, you mention that losing a single vdev ruins the entire pool; so is it a terrible idea to add a standalone disk?
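My thinking was that once a second 800GB drive shows up, I can promote the standalone disk to a mirror with something like this (disk IDs are placeholders):

# attach a new disk to the single-disk vdev, turning it into a mirror
zpool attach rpool /dev/disk/by-id/scsi-EXISTING-800G /dev/disk/by-id/scsi-NEW-800G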

For snapshots, Sanoid looks amazing. Can you foresee any issues with installing it alongside Proxmox? Sanoid would operate directly at the block level, right?
Any considerations for Sanoid and encryption?
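In case the answer is yes, this is roughly the config I had in mind, adapted from the examples in the Sanoid README (rpool/data is just my guess at the default Proxmox VM dataset):

# hypothetical /etc/sanoid/sanoid.conf
cat <<'EOF' > /etc/sanoid/sanoid.conf
[rpool/data]
        use_template = production
        recursive = yes

[template_production]
        hourly = 36
        daily = 30
        monthly = 3
        yearly = 0
        autosnap = yes
        autoprune = yes
EOF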
Cheers :slight_smile: