Trouble migrating data to new pool

Hello!

I’m still a beginner with ZFS and file system knowledge in general, but I use ZFS on my media server. I have a pool of 4 drives using raidz1. I would like to copy the data from this pool to a new one, using 4 larger drives in the same configuration. I planned to later install 4 more drives and expand the raidz1 vdev, which as I understand was recently made possible (please let me know if I’m wrong).
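
From what I’ve read, the expansion step would be one zpool attach per new disk, something like this (OpenZFS 2.3 or newer; the disk id below is just a placeholder):

# grow the existing raidz1 vdev by one disk; repeat for each new drive
sudo zpool attach zfsa raidz1-0 /dev/disk/by-id/ata-EXAMPLE_NEW_DISK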

When installing the 4 new drives, I broke a connector on one of the original four, so I replaced that drive. After resilvering, two files were permanently corrupted, so I deleted them. The pool status now looks like this:

  pool: zfsa
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
  scan: scrub in progress since Fri Mar  7 10:56:01 2025
        7.64T / 11.0T scanned at 559M/s, 6.39T / 11.0T issued at 467M/s
        0B repaired, 57.89% done, 02:53:42 to go
config:

        NAME                                            STATE     READ WRITE CKSUM
        zfsa                                            ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            wwn-0x5000c500d9ec9aef                      ONLINE       0     0 5.43K
            ata-WDC_WD6001F4PZ-49ZWCM0_WD-WX21D1526VCR  ONLINE       0     0 5.43K
            ata-WDC_WD6001F4PZ-49ZWCM0_WD-WX21D15265VS  ONLINE       0     0 5.43K
            ata-WDC_WD6001F4PZ-49ZWCM0_WD-WX21D1526JTD  ONLINE       0     0 5.43K

errors: Permanent errors have been detected in the following files:

        zfsa:<0x8220>
        zfsa:<0x8056>

The permanent errors have not disappeared after many scrubs and clears. The CKSUM counts go down after a scrub and clear, then climb back up. The pool has been working fine for playing media in the weeks since this started.

To transfer the data, I made a snapshot and used zfs send/recv, but the transfer keeps failing. For the most recent attempt I ran

sudo zfs send -v -R zfsa@zpool_transfer 2> zfs.log | sudo zfs recv -v -F zfsb 2> zfs.log &

which failed with: warning: cannot send 'zfsa@zpool_transfer': insufficient replicas.
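
One side note on that command: both 2> redirections point at the same zfs.log, so the two processes clobber each other’s output. A safer variant (same pipeline, separate log files) would be something like:

sudo zfs send -v -R zfsa@zpool_transfer 2> send.log | \
    sudo zfs recv -v -F zfsb 2> recv.log &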

I really don’t care about fixing the health of the old pool any more than is needed to move the data to the new pool. I was considering just using rsync to avoid the ZFS errors, but I wasn’t sure if there was a reason not to. Any help would be appreciated.
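
For the rsync route, this is roughly what I had in mind (assuming the pools are mounted at /zfsa and /zfsb):

# plain file-level copy: archive mode, preserve hard links, show overall progress
sudo rsync -avH --info=progress2 /zfsa/ /zfsb/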

Set this, then try again:

echo 1 > /sys/module/zfs/parameters/zfs_send_corrupt_data

This temporarily allows ZFS replication to replace unavailable blocks with garbage blocks (I’m not sure if they’re zero blocks, #DEADBEEF, or what). On the target end there will be no CKSUM errors, but nothing is actually “repaired” in the sense of restoring the originally lost data.

After rebooting, ZFS will once again stop allowing replication of corrupt data.
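
For example, the full round trip might look like this (using sudo tee, since the redirection in a plain echo 1 > ... only works from a root shell):

# enable sending of damaged blocks; this is a runtime toggle, not persistent
echo 1 | sudo tee /sys/module/zfs/parameters/zfs_send_corrupt_data

# re-run the zfs send | zfs recv pipeline, then optionally restore
# the default instead of waiting for a reboot
echo 0 | sudo tee /sys/module/zfs/parameters/zfs_send_corrupt_data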


Awesome, thanks! I’ve started the transfer with that parameter set.
