Backup and restore root pool (rpool)

There are similar topics here, but I think they only cover data, not the system installation.

My SSD recently broke down and it's not recognized by anything anymore (not as an external or internal drive). I have backups, of course; one is on an external USB drive. When I installed the system, Ubuntu created a bpool and an rpool. I backed up the rpool. From what I know, it contains the system installation and all the data.

I have now created a zpool on an HDD with a cache device. Now I want to copy the old backup onto the newly created system. How do I do this?

I have read this:
syncoid -r oldpool newpool/oldpool

But will my system then start normally if it ends up in “newpool/oldpool”, i.e. in a child dataset instead of the root dataset? Do I just have to make sure it's mounted at the root?

I tried just importing the external HDD with the backup of rpool. I can import it, but when I try to boot from it, I get many mount errors and the system does not start.

I don’t have much experience with Ubuntu, but I have performed similar operations on Debian using the bpool/rpool configuration, and the trick (for me) was to restore the backup with the same pool and dataset layout. IIRC, it was then necessary to follow the rescue instructions (https://openzfs.github.io/openzfs-docs/Getting%20Started/Ubuntu/Ubuntu%2022.04%20Root%20on%20ZFS.html#rescuing-using-a-live-cd) to recreate the initramfs and install GRUB, similarly to the initial installation.
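For reference, the core of that rescue procedure looks roughly like this from a live CD. This is a sketch, not a verified recipe: the dataset name ubuntu_XXXXXX and the disk /dev/sdX are placeholders for whatever your install actually uses.

```shell
# Sketch only: rpool/bpool and ubuntu_XXXXXX are the Ubuntu defaults,
# /dev/sdX is a placeholder disk -- substitute your real names.
zpool import -N -R /mnt rpool
zpool import -N -R /mnt bpool
zfs mount rpool/ROOT/ubuntu_XXXXXX   # mount the root dataset first
zfs mount -a                         # then everything else

# Bind the virtual filesystems and enter the installed system
mount --rbind /dev  /mnt/dev
mount --rbind /proc /mnt/proc
mount --rbind /sys  /mnt/sys
chroot /mnt /bin/bash --login

# Inside the chroot: rebuild the initramfs and reinstall the bootloader
update-initramfs -c -k all
grub-install /dev/sdX
update-grub
```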

This is assuming that you have a copy of bpool also. The system won’t boot without that (unless you’ve converted to ZFSBootMenu, but that requires a copy of the boot pool to set up.)

It may be possible to configure the host to run with an alternate dataset layout and pool name, but that would take knowledge beyond mine to configure and might result in ongoing issues because updates may make assumptions about pool layout.

A simpler solution might be to start with a fresh install on the new drive and restore your personal files from backup.

(Good job, having backups to restore from. Just be careful not to swap source and destination pool names when restoring. DAMHIK.)

Since you did the Ubuntu approach with bpool/rpool, in order to restore, you first need to do a minimal Ubuntu installation which creates a new bpool and rpool. Then, you can restore the contents of your old rpool onto the new one, and you should be good to go.

It’s usually problematic to restore directly to the root dataset of a pool, so generally you just want to replicate the dataset containing your actual OS instead. I forget what those datasets look like on zsys (the Ubuntu approach) installs, but e.g. on a ZFSBootMenu system like this one:

me@elden:~$ zfs list -r rpool/ROOT
NAME                           USED  AVAIL     REFER  MOUNTPOINT
rpool/ROOT                    18.8G   854G       96K  none
rpool/ROOT/ubuntu.2023.01.15  18.8G   854G     8.22G  /

I’d just be replicating rpool/ROOT/ubuntu.2023.01.15 onto the new system. The shape of things will be a bit different for zsys (I think I remember it using a randomly generated identifier in a parent dataset name), but it boils down to essentially the same process.

Here is exactly my problem. How do I perform this?

Just for info: I was using ZFSBootMenu in the past. I first installed the classical way with rpool/bpool; when bpool failed, I switched to ZFSBootMenu on a USB drive.

I’d use syncoid, personally. In the example above, I’d partition a new set of drives, drop the ZBM executable where it needs to go (the first partition of the boot drive if you’re not doing redundant boot, or an mdraid1 across the first partitions of several drives if you are), then create my pool, then replicate my backup onto the fresh pool.
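A rough sketch of those steps for a single-disk UEFI setup. The device /dev/sdX and the release filename vmlinuz.EFI are placeholders, and the partition sizes are just examples:

```shell
# Sketch only: adjust device names, sizes, and pool options to taste.
sgdisk -n1:1M:+512M -t1:EF00 /dev/sdX   # EFI system partition for ZBM
sgdisk -n2:0:0      -t2:BF00 /dev/sdX   # rest of the disk for the pool

# Place the prebuilt ZBM executable on the EFI partition
mkfs.vfat -F32 /dev/sdX1
mkdir -p /mnt/efi
mount /dev/sdX1 /mnt/efi
mkdir -p /mnt/efi/EFI/ZBM
cp vmlinuz.EFI /mnt/efi/EFI/ZBM/
efibootmgr -c -d /dev/sdX -p 1 -L "ZFSBootMenu" -l '\EFI\ZBM\vmlinuz.EFI'

# Create the pool, then replicate the backup onto it with syncoid
zpool create -o ashift=12 -O compression=lz4 -O mountpoint=none rpool /dev/sdX2
```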

If you know how to set up ZBM, you already know about the rest of it, so I guess you’re just asking about the replication side of it. For me, beginning with a backup of the system you see above, that would look like this:

root@liveshell:/# syncoid -r root@backupsystem:rpool/backup/elden/ROOT rpool/ROOT

This would get me both the ROOT parent dataset (which has the ZBM bootable ZFS property) and my actual root dataset (ubuntu.2023.01.15). After replication finishes, you just reboot, the ZBM executable picks up the bootable dataset, and Bob’s your uncle.

Once I tried to follow exactly this procedure, but ZFSBootMenu didn’t recognize the pool to boot from. I had to manually set the properties on rpool/ROOT/debian to get everything running.
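For reference, these are the properties I’d check in that situation, using the rpool/ROOT/debian name from above. The exact values are assumptions based on common ZFSBootMenu setups, not a verified recipe:

```shell
# ZBM discovers boot environments by their mountability at /,
# and uses the pool's bootfs property as the default selection.
zpool set bootfs=rpool/ROOT/debian rpool
zfs set mountpoint=/ rpool/ROOT/debian
zfs set canmount=noauto rpool/ROOT/debian    # typical when keeping multiple BEs

# Kernel command line that ZBM passes to the selected boot environment
zfs set org.zfsbootmenu:commandline="rw quiet" rpool/ROOT/debian
```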

At this point, I wonder if there are specific parameters to pass to syncoid during replication or restoration to ensure that all the properties of the original dataset are copied.
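For what it’s worth, syncoid does have options in this area; I’d look at these two (check your version’s man page, since --preserve-properties is a newer addition). Pool names here are just illustrative:

```shell
# Pass -p to the underlying zfs send so properties ride along in the stream
syncoid -r --sendoptions="p" oldpool/ROOT newpool/ROOT

# Newer syncoid releases: explicitly preserve locally-set properties on the target
syncoid -r --preserve-properties oldpool/ROOT newpool/ROOT
```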

Thanks!

But there is more than just rpool/ROOT, as shown here:

~$ zfs list | grep  wdele4tb
wdele4tb                                                                               1.16T  2.35T       96K  /wdeletb4
wdele4tb/ROOT                                                                          26.8G  2.35T       96K  none
wdele4tb/ROOT/ubuntu_tzf9vc                                                            26.8G  2.35T     4.22G  /wdeletb4
wdele4tb/ROOT/ubuntu_tzf9vc/srv                                                         264K  2.35T       96K  /wdeletb4/srv
wdele4tb/ROOT/ubuntu_tzf9vc/usr                                                         840K  2.35T       96K  /wdeletb4/usr
...
wdele4tb/USERDATA                                                                      61.9G  2.35T       96K  /wdeletb4
wdele4tb/USERDATA/root_6sjt8i                                                          1020K  2.35T      268K  /wdeletb4/root

That’s a bit my problem: I am unsure what to sync.

You want to sync ROOT, USERDATA, and everything beneath the two.
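Given the layout you pasted, that would be something like the following, assuming your destination pool is named rpool (adjust to the real name):

```shell
# -r recurses through all child datasets (srv, usr, root_6sjt8i, etc.)
syncoid -r wdele4tb/ROOT     rpool/ROOT
syncoid -r wdele4tb/USERDATA rpool/USERDATA
```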