Should recordsize be divisible by number of vdevs?

I know striping in ZFS is dynamic and distributes data across the pool as evenly as it can.

I’m wondering whether setting the recordsize according to the number of vdevs in the pool would improve performance or help avoid fragmentation.

I remember seeing an example on r/zfs where ZFS supposedly breaks a 128K record into 32K chunks when there are 4 vdevs in the pool. But what happens if the record can’t be divided evenly?

Example: A fresh pool on 3 equal vdevs. For a dataset with large files, would a recordsize of 768K be better than 1M since 768 is divisible by 3?

And how would ZFS distribute the data if the dataset were sent to another pool with 2 or maybe 4 vdevs? 768 is divisible by those numbers too.
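Just to make the arithmetic behind the question concrete — assuming (as in that r/zfs example, which may not reflect what the allocator actually does) that a record were split evenly across vdevs, here's a quick sketch of the per-vdev chunk sizes involved. The helper function is purely illustrative, not anything from ZFS itself:

```python
def chunk_kib(recordsize_kib: int, vdevs: int) -> float:
    """Hypothetical per-vdev chunk size if a record were split
    evenly across all vdevs. A fractional result means the
    record does not divide evenly."""
    return recordsize_kib / vdevs

# 768K divides evenly across 3 vdevs (256K each),
# while 1M (1024K) does not (~341.33K each):
print(chunk_kib(768, 3))   # 256.0
print(chunk_kib(1024, 3))  # 341.33...

# 768K also happens to divide evenly across 2 and 4 vdevs:
print(chunk_kib(768, 2))   # 384.0
print(chunk_kib(768, 4))   # 192.0
```

So 768K is the largest recordsize under 1M that divides evenly by 2, 3, and 4 — which is where the question comes from.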