I noticed most distros have updated instructions on the OpenZFS site, except that Ubuntu’s most recent guide is for 22.04. I’m not an Ubuntu zealot, just more familiar with it. I was wondering if that has anything to do with the installer. I haven’t tried it, but perhaps the 22.04 steps would work for the most part.
I’m giving Debian some consideration here since I need to upgrade anyway.
I’ve got one Ubuntu Noble (24.04) workstation and two Ubuntu Noble servers running under ZBM. The instructions work essentially as-is, just change the references to the version.
After a weekend of planning how I will redo a server using a ZBM install, I came away with two questions, one of them by accident.
Somehow, my existing non-ZBM ZFS install ended up with a borked boot pool (bpool) after a simple apt upgrade, and I now boot to a grub prompt. I tried rolling back to a prior state, but the result is the same. I’m not terribly concerned since I have good backups and my KVM stuff is all on a separate pool. I’m interested in the challenge of figuring it out, but I don’t have a ton of time to spend on it, since I want to do a whole new install on a more recent release (Noble) using ZBM. Some of the things I tried were updating grub and the initramfs, but the latter gives me an os-prober error, something about an unsupported feature; I forgot to write it down. I thought a rollback would’ve fixed this, but maybe some firmware update borked my boot.
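For what it’s worth, the usual way to poke at a system like this is from a live USB rather than the grub prompt. A minimal sketch, assuming the standard Ubuntu root-on-ZFS pool names (bpool/rpool) and using /dev/sda as a placeholder for the real boot disk:

```shell
# From an Ubuntu live environment: import both pools under /mnt.
sudo zpool import -f -R /mnt rpool
sudo zpool import -f -R /mnt bpool
# Ubuntu's root dataset is canmount=noauto, so it may need an explicit
# "zfs mount rpool/ROOT/<dataset>" before the bind mounts below.
sudo mount --rbind /dev  /mnt/dev
sudo mount --rbind /proc /mnt/proc
sudo mount --rbind /sys  /mnt/sys
sudo chroot /mnt /bin/bash
# Inside the chroot: reinstall grub and rebuild the boot files.
grub-install /dev/sda
update-grub
update-initramfs -u -k all
```

Note that if bpool has had features enabled that grub’s ZFS reader doesn’t support, grub-install/update-grub will keep failing no matter how many times you rerun them; in that case the pool itself is the problem, not the boot files.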
I did find some instructions on the ZBM site about converting a grub boot to ZBM, but if I’m reading them correctly, you have to have a presently working boot in order for that to succeed… I think.
I think my only question has to do with creating a new ZBM install. The instruction set only covers installing to a single disk, and I would like to stick with a mirror vdev (2 disks). Since ZBM will be handling booting going forward, and it all appears to be contained inside the same pool as root, would this be as trivial as just attaching the second disk after the installation? That’s like one command. Alternatively, I would need to modify the installation instructions to set up two disks at the same time, similar to what the OpenZFS manual-install instructions show.
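It really is close to one command for the ZFS side. A sketch, where the pool name “zroot” and the by-id paths are placeholders for whatever the install created (the second disk should be partitioned to match the first):

```shell
# Attach a second device to the existing single-disk vdev,
# converting it into a two-way mirror.
zpool attach zroot \
  /dev/disk/by-id/ata-DISK1-part3 \
  /dev/disk/by-id/ata-DISK2-part3

# Watch the resilver complete.
zpool status zroot
```

The one thing `zpool attach` does not cover is the EFI system partition, which lives outside the pool; that still needs to exist (or be copied) on the second disk for it to be bootable on its own.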
This smells like the result of doing a zpool upgrade on bpool.
Canonical’s ZFS boot mechanism is very primitive and throws a hairball if it encounters several modern ZFS features, which is why there are two separate pools on a Canonical zfs boot system–bpool is necessary because it has to be antiquated enough for their bootloader to handle it, which leaves the other pool as “where you’ll actually put stuff you care about.”
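If that’s what happened, you can check which features are live on bpool, and on OpenZFS 2.1+ you can pin the pool to a compatibility profile so a stray `zpool upgrade` can’t enable anything grub can’t read. A sketch, assuming the pool is named bpool:

```shell
# Show feature flags; anything "active" or "enabled" beyond what
# GRUB's ZFS reader supports will break booting.
zpool get all bpool | grep feature@

# Pin bpool to the grub2-safe feature set (OpenZFS 2.1+ ships a
# "grub2" profile in /usr/share/zfs/compatibility.d/).
zpool set compatibility=grub2 bpool
```

Setting compatibility after the fact won’t disable features that are already active, but it prevents the same accident from recurring on a rebuilt pool.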
I went through roughly the same process a few months ago Alfra. I’m not sure from your post if you’re trying to get ZBM to work or just to install ubuntu with a zfs root.
I have been using the Ubuntu installer’s ZFS option for a few years, but wanted to switch to ZBM. I originally tried following Jim’s article, but I got stymied. The ZBM installation instructions (Ubuntu (UEFI) — ZFSBootMenu 3.0.1 documentation) required a source installation of ZBM for my laptop; I’m not sure if that’s a hardware-specific issue or just an Ubuntu issue. Then I had a lot of trouble with the debootstrap installation and ended up petering out on my install. A month or so later I gave this automated script a try: GitHub - Sithuk/ubuntu-server-zfsbootmenu: Ubuntu zfsbootmenu install script. I have a bunch of changes which I haven’t cleaned up or anything, but it does work, it can install ZBM plus ubuntu-server/ubuntu-desktop instead of the bare debootstrap system the ZBM instructions use, and it’s got some setup for ZFS encryption. Even if you don’t use the script, I think the source would be helpful to see what he’s doing differently from the other how-tos.
@xandey thanks for sharing your experience and the process you found. I think I’m going to give the ZBM site instructions a try. I know I want a two-disk mirror, at least for the primary zpool. I’m still deciding what to do about the boot partition. I’m leaning toward just doing a manual copy of the DISK1 vfat partition to the same partition on DISK2, just enough that I can switch to the non-broken drive in an emergency to bring the system up. It’s a choice between that or an mdadm RAID1 for boot.
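The manual-copy approach is straightforward since the ESP rarely changes. A sketch using sgdisk (from the gdisk package) to mirror the partition table and dd to clone the vfat contents; all device paths are placeholders, and partition 1 as the ESP is an assumption:

```shell
# Copy DISK1's partition table onto DISK2, then give DISK2 fresh
# partition GUIDs so the two disks don't collide.
sgdisk --backup=table /dev/disk/by-id/ata-DISK1
sgdisk --load-backup=table /dev/disk/by-id/ata-DISK2
sgdisk -G /dev/disk/by-id/ata-DISK2

# Clone the vfat ESP itself.
dd if=/dev/disk/by-id/ata-DISK1-part1 \
   of=/dev/disk/by-id/ata-DISK2-part1 bs=1M
```

The trade-off versus mdadm is that this copy goes stale whenever the ESP is updated (new ZBM image, firmware updates), so it needs to be rerun or scripted after such changes.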
I have an (opinionated) method of installing Ubuntu as root-on-ZFS.
For a basic system there are 2 partitions, the UEFI/BOOT one and a ZFS one. You can have multiple root datasets and select which one you want at boot time via ZBM. It can boot from mirror/raidz multi-disk pools too, using mdadm to keep the UEFI/BOOT partitions in sync.
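For anyone curious how the mdadm trick works: the key is metadata format 1.0, which puts the RAID superblock at the end of the partition, so UEFI firmware still sees a plain vfat filesystem on each member even though Linux writes to both through the array. A sketch with placeholder device names:

```shell
# RAID1 across both ESPs; --metadata=1.0 keeps the superblock at the
# end of the partition so firmware can read each member as plain vfat.
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      --metadata=1.0 /dev/sda1 /dev/sdb1

# Format the array (not the members) and mount it as the ESP.
mkfs.vfat -F32 /dev/md0
mount /dev/md0 /boot/efi
```

Anything written to /boot/efi then lands on both disks automatically, so either one can boot the system on its own.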
No grub (uses rEFInd instead), no separate bpool special snowflake. You don’t HAVE to use rEFInd, can just boot directly into zbm if you want (though I don’t have that as an option, it should be pretty easy to do).
This is the method I’ve used for pretty much all my systems, laptop, encrypted laptop, workstation, media server etc.
It’s easy enough to test in Virtualbox or whatever, and there’s a packer setup for creating a .qcow2 image too if you want.