I’ve got a single giant dataset, EvacData, living on my pool. It contains all the data I transferred off my previous, dying, ext4-based NAS.
Since EvacData is a single dataset containing multiple folders whose contents I’d like to move into their own datasets, I don’t think snapshots/replication will avail me here, so I need to do this the hard(er) way. (Unless I’m missing something.)
I’d like to create new top-level datasets on my pool and transfer the contents of those folders into them, and then once everything is properly organized into datasets, delete EvacData.
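In command terms, I think the skeleton of that plan looks something like this (the pool name `tank` and `NewDataset1` are just placeholders for my real names):

```
# Create a new top-level dataset for each folder I want to split out
zfs create tank/NewDataset1

# ...copy the data across (this is the rsync question further down)...

# Once everything is verified in its new home, remove the old dataset
# (zfs destroy will want -r if EvacData still has snapshots underneath it)
zfs destroy tank/EvacData
```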
I’m not yet very used to manipulating ZFS dataset contents on the CLI, so I’m not sure what the safest thing to do is here. Here’s the result I’d like to achieve:
- Assumption: I have already created the new datasets, and they have the correct owner/group/permissions assigned (roughly as in the sketch after this list).
- Transfer the contents of, e.g., FolderOne in EvacData to NewDataset1.
- Make sure that, after the transfer, the contents of NewDataset1 have ownership and permissions matching what’s set on NewDataset1 itself, rather than what they carried over from EvacData.
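To make the first bullet concrete, this is roughly how I’ve been preparing each new dataset before any data lands in it (`mediauser:mediagroup` is just an example of the owner/group that particular dataset is supposed to carry, and the paths assume the pool is mounted under /mnt):

```
# Set the ownership and mode on the (still empty) dataset mountpoint;
# the copied files will be made to match during the transfer step
chown mediauser:mediagroup /mnt/tank/NewDataset1
chmod 775 /mnt/tank/NewDataset1
```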
I think rsync is the right tool for this, but I’m not sure of the proper combination of switches. Usually I’d use `rsync -a` to preserve the metadata and permissions of the files I’m moving, but given that EvacData and, e.g., NewDataset1 will have different owner/group/permission setups, that doesn’t seem like the right approach.

I know that if I skip `-a` I’ll lose the original timestamps on the content I’m moving, but if that’s the price of doing it right, that’s fine.

So, do I just rsync the data I want to move without the `-a` flag, or is there anything I’m missing?
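In other words, is something like the following the right combination? (This is only my best guess: `mediauser:mediagroup` and the paths are placeholders, `--chown` needs rsync 3.1.0 or newer, and it would be run as root so the ownership change actually takes.)

```
# Copy the contents of FolderOne into the new dataset, keeping the
# original timestamps but forcing the new owner/group and permissions
rsync -rltpog \
    --chown=mediauser:mediagroup \
    --chmod=D775,F664 \
    /mnt/tank/EvacData/FolderOne/ \
    /mnt/tank/NewDataset1/
```

Or is it saner to copy with plain `rsync -rlt` (timestamps only) and then fix everything afterwards with `chown -R` and `chmod -R` on the destination dataset?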