I’ve used truncated files plenty of times, playing with different zfs configurations, commands, etc. to practice and learn. It is a powerful playground.
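For concreteness, here's roughly what I mean by a truncated-file pool (the file path and pool name are just placeholders I made up):

```
# make a sparse backing file and build a throwaway pool on it
truncate -s 4G /var/tmp/zfs-playground.img
sudo zpool create playground /var/tmp/zfs-playground.img

# ...play with datasets, snapshots, send/receive, properties, etc...

# tear it down when finished
sudo zpool destroy playground
rm /var/tmp/zfs-playground.img
```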
Eventually, my laptop will have a proper partition for zfs, but I am not able to configure that at the moment. I’m stuck with a single ext4 partition across the whole drive.
I have a “server” at home with a pool of dedicated drives, and I also have a backup pool for that production pool.
I often take care of family administration tasks (paying bills, purchasing groceries, etc.) on my laptop. I maintain the permanent, root-source-of-truth copy of receipts on the server since it has a backup. But I like to keep them available locally so I can reference them for returns, budget updates, etc.
I am not always able to connect to my server when I need access to the files, so nfs/smb doesn’t solve my problem. I have used rsync, but ensuring a file is identical in both locations feels like a hassle (e.g. --checksum is not the default, so rsync only compares size and mtime).
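For reference, my rsync runs look roughly like this (the paths and hostname are placeholders), and I have to remember the extra flag every time if I actually want contents verified:

```
# -a alone trusts size+mtime; --checksum reads and hashes every file on
# both ends, which is slower but actually verifies the contents match
rsync -a --checksum ~/receipts/ server:/tank/receipts/
```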
git would do the job simply enough, but a repo full of pdf files just for sync seems like an abuse of the tool. (Maybe my proposal here is also an abuse of zfs, which is partly what I’m asking.)
If I had a pool on a truncated file holding a production dataset, I know zfs send/receive with a forced receive would inherently ensure both sides were the same. And as a bonus, e.g. my paycheck stubs could be encrypted on my traveling laptop.
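The workflow I'm imagining is roughly this (dataset, pool, and host names are all placeholders; the laptop pool is the one living in the truncated file):

```
# on the laptop: an encrypted parent in the file-backed pool, so anything
# received under it is encrypted at rest with the laptop's key
sudo zfs create -o encryption=on -o keyformat=passphrase playground/secure

# initial copy from the server's authoritative dataset
ssh server zfs send tank/receipts@now | sudo zfs receive playground/secure/receipts

# later updates: incremental send; -F rolls the laptop copy back to the last
# common snapshot first, so both sides are guaranteed to end up identical
ssh server zfs send -i @prev tank/receipts@now | \
    sudo zfs receive -F playground/secure/receipts
```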
So my questions:
- 1) is it unsafe / unwise to use a truncated file as a production pool?
- 1a) what reasons or mechanisms make it a bad idea?
- 2) if it were a temporary solution for only one small dataset (the use case above; I expect to be able to properly configure my laptop within 6 months), does the answer to 1) change?
- 3) anything else I need to watch out for, or consider?
Thanks!
edit: minor formatting, slight sentence clarification