From the documentation:
This argument tells syncoid to set the recordsize on the target before writing any data to it matching the one set on the replication src. This only applies to initial sends.
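(For reference, the flag just rides along on an ordinary invocation, something like the sketch below; the pool and dataset names are made up.)

    syncoid --preserve-recordsize tank/source backuppool/target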
Does that mean that if I wanted to change the recordsize of the files in a dataset, I could create a new dataset with the desired recordsize (let’s say going from 1M to 256K) and then run syncoid tank/olddataset tank/newdataset, and all the data on newdataset would end up with a 256K recordsize while maintaining the snapshots?
No, it doesn’t. Replication won’t rewrite existing records/blocks, regardless of the settings on the target.
In your hypothetical operation above, what you’d actually be doing is creating a dataset on the target with recordsize=256K, then stuffing it full of 1M records. Files you created locally would have 256K blocks afterward, but every time you replicate in, you’re replicating 1M blocks, so you write 1M blocks.
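You can see exactly that with the dataset names from your example, assuming they’re mounted in the usual places; the per-file block size shows up as the dblk column in zdb’s object dump:

    # the property on the target really does read 256K...
    zfs get recordsize tank/newdataset
    # ...but a file that arrived via replication still shows 1M data blocks
    zdb -O tank/newdataset path/to/replicated-file
    # while a file written locally after the property was set gets 256K blocks
    dd if=/dev/urandom of=/tank/newdataset/locally-written-file bs=1M count=4
    zdb -O tank/newdataset locally-written-file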
Essentially, there’s very little point in --preserve-recordsize unless you’re in an oddball situation where you’re both likely to swap which node is production and which node is backup, and your workload involves lots of discrete files with no random access inside (because once a file is created with a certain blocksize, it maintains that blocksize, regardless of what the recordsize parameter on the containing dataset says).
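If the actual goal is to get the existing data onto 256K records, the only way those blocks get rewritten is to write the files out locally instead of replicating them in, for example with a plain copy into the new dataset. That gives you a fresh file tree at the new recordsize, but the snapshot history obviously doesn’t come along. A rough sketch, assuming default mountpoints and the dataset names from your question:

    zfs create -o recordsize=256K tank/newdataset
    cp -a /tank/olddataset/. /tank/newdataset/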
That leaves zvols. Except not really, because volblocksize is immutable, so volblocksize on the target will be the same as volblocksize on the source whether you like it or not.
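You can confirm that on both sides if you like; the zvol names here are placeholders:

    # volblocksize is fixed at zvol creation, so these two report the same value
    zfs get volblocksize tank/somezvol
    zfs get volblocksize backuppool/somezvol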