On Linux I had this mirrored pool consisting of 4 disks and decided to destroy it and create a normal (non-mirrored) pool instead.
After doing zpool destroy backup and zpool create maintank -f /dev/disk/by-id/{ata-foo1,ata-foo2,ata-foo3,ata-foo4} everything was OK. I created datasets, but after a reboot everything was back to “backup”. I then tried zpool export backup and zpool import maintank, which looked like it worked because I could see maintank and its datasets, but after a second reboot I couldn’t do it anymore.
Hello darukutsu, run “zpool destroy maintank” on your new pool again.
After that, clean up your drives:
“for d in sda sdb sdd sde; do dd if=/dev/zero of=/dev/$d bs=1M count=10 ; done”
Then you can create a new zpool however you like and test it with a reboot.
But your first decision to go with a mirror pool was still the best one: it’s much faster than a raidz pool, doesn’t give up that much usable space, and is more flexible for future expansion.
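If you do go back to mirrors, a rough sketch of a 2×2 layout (two striped 2-way mirrors; the ata-foo* IDs are just the placeholders from your create command):
zpool create maintank mirror /dev/disk/by-id/ata-foo1 /dev/disk/by-id/ata-foo2 mirror /dev/disk/by-id/ata-foo3 /dev/disk/by-id/ata-foo4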
It would be helpful to know the distro in use, because I think the policies for managing the cache file may differ, if that’s the cause of this issue.
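If the cache file is the suspect, something like this is worth checking (the service names assume a systemd-based distro with the stock OpenZFS units):
zpool get cachefile maintank
systemctl status zfs-import-cache.service zfs-import-scan.service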
For clearing the drives, I’d suggest wipefs, because ZFS writes labels at both the start and the end of the device, so zeroing out just the first 10MB is probably not sufficient.
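A rough sketch, assuming the pool is already destroyed and sda/sdb/sdd/sde really are the right disks (double-check before wiping anything):
for d in sda sdb sdd sde; do
  wipefs -a /dev/${d}1 2>/dev/null   # old ZFS data partition, if it still exists
  wipefs -a /dev/${d}9 2>/dev/null   # old reserved partition, if it still exists
  wipefs -a /dev/$d                  # then the partition table and any signatures on the disk itself
done
Running zpool labelclear -f on the old data partitions is another option while they still exist.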
Partition 9 is what ZFS creates to provide a space cushion when it is handed the entire device for zpool create (vs. creating from partitions.) It’s odd to see it included in pools.
It would also be helpful to see the exact commands you executed (with drive IDs obfuscated as needed.) You should be able to copy them from your notes. If you don’t have notes, you should!
Disk IDs in the form of /dev/sdX are a bit of a red flag as they can change on every boot. I wonder if you can import the drives using something like zpool import -d /dev/disk/by-id/<some-wildcard> or even listing the specific disks that way. Worth a try.
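For example, something along these lines (pool name taken from your earlier post):
zpool export maintank
zpool import -d /dev/disk/by-id maintank
zpool status maintank   # the vdevs should now show up under their by-id names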
Thanks for the tip, clearing the first 10MB truly didn’t help. I don’t remember having to do anything similar back when I was on FreeBSD. (I don’t know why I thought wipefs was going to take hours to finish…) So everything works now, even after a reboot.
I’m on Arch Linux, kernel 6.9.7-hardened1-1-hardened; this is what I used for creation:
You’ve now created a dangerous “raid0-like”, just-striped zpool without any redundancy, which is not preferable at all unless it’s only used temporarily for scratch data. PS: On Linux the sdX9 partition is always there even if unused; removing it is under discussion for a future release. I wonder why it doesn’t show up on the other pools you’ve seen.
When you feed OpenZFS raw disks, it always partitions them first and uses partition 1 for the data and partition 9 to reserve a small amount of space at the end of the drive. IIRC, the idea is that the small partition at the end “standardizes” drives of a given size, which aren’t really all the same size. In other words, a Seagate “2TB” drive and a Western Digital “2TB” drive typically aren’t actually quite the same size, and without something like the small part9 at the end to standardize them, you wouldn’t be able to replace a larger “2TB” drive with a smaller “2TB” drive in an existing pool.
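You can see the layout on any whole-disk vdev with lsblk, for instance (the device name here is just an example):
lsblk -o NAME,SIZE,TYPE /dev/sda
This will typically show sda1 taking up nearly the whole disk (the ZFS data partition) and a small sda9 (roughly 8 MiB) reserved at the end.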
Anyway, here you can see the same behavior in one of my pools that has some raw drives in it:
BTW, exporting and re-importing a pool created with raw drives can sometimes result in the specific partition showing in zpool status rather than the raw drive name, but that’s not a problem, just a quirk. Ultimately, the partition is all ZFS is using, whether you fed it “a whole disk” or specified which partition to use yourself.