My SSD is <50% full. Sounds like my understanding was correct: no real difference between the two, except that with truncate, the controller gets to select NAND cells for wear leveling at least once, on the initial write. Sounds like those blocks won't be deallocated from the file if the file contents shrink.
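(For anyone curious, a quick way to see that truncate by itself writes nothing — the file stays sparse until real data lands in it, so no NAND cells get picked until then. Filename is just an example:)

```sh
# Create a 10G file without writing any data (sparse)
truncate -s 10G placeholder.img

# Apparent size vs. blocks actually allocated on disk
du -h --apparent-size placeholder.img   # reports 10G
du -h placeholder.img                   # reports ~0, nothing written yet
```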
Yes, eventually I was planning on having root on ZFS across the whole disk (minus the ESP). I practiced on my friend's laptop when he had to reinstall; made him my guinea pig. But if an ext4 primary is good enough for you, it's probably good enough for me as well, haha. That's what I have on my server right now: main OS on an ext4 SSD, and a 2x HDD mirror for all my files. I'll add that the distro I use (not Ubuntu) has played very nicely with ZFS, and root on ZFS also worked with hardly a hiccup, so Linux and ZFS struggling to get along isn't a concern for me.
I have had a laptop SSD fail before. Luckily I was able to mount it read-only one time and clone it with fsarchiver. The next time I tried, the drive wouldn't even register as existing when I plugged it into a different host via an enclosure. Purely to save the time of reinstall and setup, that experience convinced me that in the future I'd prefer root on ZFS: just grab the whole OS with a send | receive and get back to work (see the sketch below).

I do keep a dotfiles repo, and if I cleaned it up and wrote an installation script, the time to reinstall onto ext4 would shrink. But then again, if I don't carve out a small partition for the OS, the controller can wear-level more effectively instead of having the wear-leveling area split in half (from what I understand…). So I see pros and cons for both: root on ZFS, and your method of root on a small ext4 partition with a separate data partition dedicated to ZFS.
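For reference, a minimal sketch of what that recovery looks like — pool names here are hypothetical, and this assumes a full replication-style transfer to a pool on the new disk:

```sh
# Take a recursive snapshot of everything under the root pool
zfs snapshot -r rpool@migrate

# Replicate the whole pool (datasets, snapshots, properties) to the new disk.
# -R builds a replication stream; -F lets the receive side roll back/overwrite.
zfs send -R rpool@migrate | zfs receive -F newpool
```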
For now I don't have time to reinstall, but I'll be getting a new laptop in a couple of months and will be forced to make the decision then. Either way, ZFS will get its own large partition at a minimum.