I am such a Newb and I apologize in advance for I know not what I do…or say.
Config: I bought the components from a coworker and added the storage.
Proxmox VE 7.2-3
CPU: AMD Ryzen Threadripper 2990WX 32-Core Processor
MB: ASROCK X399
Storage: 1x 4TB spinning rust for Proxmox and guest VMs (/dev/sda)
3x 12TB spinning rust for TrueNAS storage
/dev/sdb & /dev/sdc are passed through to TrueNAS.
/dev/sdd is a recent addition to move VMs onto while I expand /dev/sda; afterwards it will also be passed through to TrueNAS.
Problem: Local (home) storage is maxed out at a little over 100 GB even though it's a 4TB drive. Local-LVM held the remainder of the 4TB drive; I deleted it in an attempt to expand local (home) with the command "lvresize -l +100%FREE /dev/pve/root" (without the quotes, of course). It only gave me about 12 GB back. I did have a few VMs running on it, which I migrated to my large pool before attempting the expansion. There was also one locked snapshot on Local-LVM, which I deleted after editing the .conf and removing the locked-snapshot line. Ran the command again with the same result.
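Edit for context, in case it helps anyone searching later: from what I've read since posting, lvresize only grows the logical volume, not the filesystem inside it, and deleting the local-lvm storage entry in the GUI doesn't remove the underlying thin-pool LV, so the volume group still has almost no free space to hand out. Assuming the default Proxmox layout (a volume group named pve with root and data LVs), the full sequence apparently looks something like this — I had only run the lvresize step:

```shell
# Remove the local-lvm storage definition from Proxmox (GUI or CLI):
pvesm remove local-lvm
# Remove the thin-pool LV that backed it, freeing its space in the VG:
lvremove /dev/pve/data
# Grow the root LV into all remaining free space in the volume group:
lvresize -l +100%FREE /dev/pve/root
# Grow the ext4 filesystem to match the enlarged LV:
resize2fs /dev/pve/root
```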
Task: Transfer all data from the 4TB spinning rust to a new 4TB M.2 and utilize the full 4TB. Then reuse the 4TB spinning rust as a mirror for the M.2.
So have I made a complete mess of all this, and am I better off wiping it all and starting from scratch? Or is there a way to accomplish the task above and avoid starting over? I have screenshots, but I am new here and not sure how to attach them. I appreciate everyone's help and apologize if I should have looked elsewhere in the forum first; I just didn't know where to start, and my vocabulary is lacking.

This is my home lab, built in an effort to learn through experience. Assuming I make it past this problem, my next goals are migrating TrueNAS CORE to SCALE, installing Docker and containers for various things, and mirroring all data to an offsite TrueNAS SCALE system I built and connected with Tailscale.
There most certainly is a way to transfer the data. You want rsync for this. But honestly, if you’re not far along, redo the installation and put Proxmox on ZFS so you don’t have to mess with LVM. Then you can use rsync to copy the data over from the spinning disk.
You don’t want to mirror an SSD and a rust disk, generally speaking. The resulting vdev/pool behaves as though it were made entirely of rust if you do that.
What you want is to regularly back your SSD up to the rust, with the rust set up as a separate pool… And I’d strongly advise that (at least) your SSD pool be redundant as well. If you don’t have two m.2 slots on your board, SATA SSDs work just fine.
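A rough sketch of what that could look like, with made-up pool and dataset names (fastpool on the SSD, rustpool on the spinner). Not something to paste blindly — zpool create wipes the target disk, so double-check the device name:

```shell
# Create a separate single-disk pool on the spinner (DESTROYS /dev/sdX!):
zpool create rustpool /dev/sdX
# Snapshot the SSD pool recursively, then replicate it to the spinner:
zfs snapshot -r fastpool@nightly
zfs send -R fastpool@nightly | zfs recv -F rustpool/fastpool-backup
# Later runs send only the delta between two snapshots (incremental):
# zfs send -R -i fastpool@nightly fastpool@nightly2 | zfs recv rustpool/fastpool-backup
```

Tools like sanoid/syncoid automate the snapshot rotation and incremental sends once you have the manual version working.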
Thank you for the feedback. This feels like one of those situations where words really matter and I obviously don't know how to use them, lol. By "mirror" I meant copy file-for-file from one disk to the other, thinking it could be mirrored back to the NVMe if something went wrong. I wouldn't have thought there would be behavioral differences from mirroring spinning rust with NVMe. I don't know what I don't know, and I like learning new things, so thank you for that. I will follow this guidance and back up the NVMe to the rust, with the rust as a separate pool. Now to learn more about how to do that.
On that note, there are lots of tutorials on how to set up Proxmox from scratch on the interwebs. Knowing what I am working with and what I am trying to accomplish in this post, are there any tutorials that would be recommended over the others?
Here is the tutorial I am considering. @ 16:10 he gets into deleting local-lvm and expands local to use the full drive. Then it looks like he edits the "contents" of local to support ISOs, disk images, containers, etc. Is this a bad idea? If so… why? If there are written docs on why/how to do it differently, please feel free to show me the way.
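From what I can tell, the CLI equivalent of the "contents" edit he makes in the video would be something like this (pvesm is Proxmox's storage manager; the content list is my reading of what he enables, so please correct me if I've misread it):

```shell
# After local-lvm is gone, tell the "local" directory storage to accept
# disk images and container roots in addition to ISOs/templates/backups:
pvesm set local --content iso,vztmpl,backup,images,rootdir
```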
Thanks again for the help.
Well this journey is a difficult one. I put the 4TB M.2 stick in and now my NICs won’t connect or even light up the switch. Going to make a new post for that one with screenshots. Thanks for the help thus far.