I have a dataset that stores Linux installation media (.iso images).
From what I understand, if a dataset holds strictly large files, I should set recordsize to a large value, but with a lot of small files it should be set to a smaller value. Am I correct so far?
So my question is: does the system see a Linux installation .iso image as a single large file, or does it see it as the many files inside its ISO 9660 file system?
As I see it, how the system sees that image file determines the best recordsize to set.
Yes, you are right. For datasets holding only big files like ISOs, use recordsize=1M; for small files, or a mix of big and small files, use the default of 128k (not less and not more!). Always use compression (e.g. lz4).
An .iso file is one large file in the filesystem; you only see the files inside it when it is mounted via a loop device.
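A minimal sketch of that setup, assuming a pool named tank and a dataset tank/isos (both names are placeholders, not from the thread):

```shell
# Create a dataset tuned for large sequential files (ISO images):
# 1M records and lz4 compression. "tank/isos" is a hypothetical name.
zfs create -o recordsize=1M -o compression=lz4 tank/isos

# Verify the properties took effect.
zfs get recordsize,compression tank/isos

# An ISO is one large file to ZFS; to browse its contents,
# mount it read-only through a loop device.
mkdir -p /mnt/iso
mount -o loop,ro /tank/isos/debian-12.iso /mnt/iso
```

Note that recordsize only affects newly written data, so set it before copying the images in.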
Not quite. You want a large recordsize unless you have small-block random I/O inside larger files, in which case you want a recordsize not much larger than the expected random I/O block size.
You do not need to tune recordsize for small files, nor would it help to do so. Small files are stored in small blocks regardless of the recordsize property.
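One way to see this for yourself, again assuming a hypothetical tank/isos dataset with recordsize=1M (requires a live ZFS pool, so this is a sketch rather than a portable script):

```shell
# Write one tiny file and one large file into the 1M-recordsize dataset.
dd if=/dev/urandom of=/tank/isos/small.bin bs=4k count=1
dd if=/dev/urandom of=/tank/isos/big.img  bs=1M count=100
sync

# Compare on-disk usage: the 4 KiB file occupies roughly one small block,
# not a full 1 MiB record; the large file is stored as 1 MiB records.
du -h /tank/isos/small.bin
du -h /tank/isos/big.img
```

This is because recordsize is an upper bound on block size: files smaller than one record are stored in a single block sized to fit the file.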