How to enable remote access (ssh) to zfsbootmenu

tl;dr

I’ve set up a small home server (Wyse 5070) with zfsbootmenu on Ubuntu 24.04. I’m now trying to set up remote access so I can boot it without having to plug in a keyboard - instead I want to just ssh to it.

I’ve read the zfsbootmenu docs a couple of times and I feel there are a couple of bits of missing context in there somewhere. Has anyone successfully done this, or can anyone point me at end-to-end, step-by-step docs on how to do it?

Full Details

I’m trying to build using the container and the zbm-builder.sh wrapper script from the zbm-dev/zfsbootmenu GitHub repo. I have:

Then I’ve read the remote access docs, but they seem to switch between talking about a local install of dracut and using the container. Also, the container does not seem to have dracut-crypt-ssh installed, and I haven’t worked out an option I can pass to the helper script that will install that package.

So if anyone can point me at step-by-step instructions, that would be lovely 🙂

If not, I’m wondering about:

  • forking the zfsbootmenu repo
  • editing the Dockerfile to install dracut-crypt-ssh
  • building the docker image
  • setting up the various config files
    • this is a bold statement, but I think I’ve got most of it set up
  • running ./zbm-builder.sh -i my_image_name
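
To make that concrete, here’s roughly what I’m picturing (or, instead of forking, maybe just layering on top of the published image). The base image name, the guess that the build container is Void-based (hence xbps-install), and the package availability are all assumptions on my part, so the real Dockerfile would need checking:

cat > Containerfile.custom <<'EOF'
# assumption: the upstream builder image name/tag and its package manager may differ
FROM ghcr.io/zbm-dev/zbm-builder:latest
RUN xbps-install -Sy dracut-crypt-ssh dropbear
EOF
podman build -t my_image_name -f Containerfile.custom .
./zbm-builder.sh -i my_image_name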

Does that sound like a reasonable approach? Are there some options I’m missing? Should I just post this as a GitHub issue?

All input welcome. And if I get none I’ll keep plugging away and come back and write up what I did. Then at least future me can find it via Google to work out what the hell I did when I actually got it to work. And maybe someone else will find it useful too.


You don’t have to build it yourself - you can use one of the EFI images from the Releases page directly. The container is there to build the image, not to actually run zfsbootmenu…

Dracut is used to build the initrd for your system, to pair with your selected kernel. Ubuntu defaults to using initramfs-tools for initrd management, but dracut can drop right into place instead. Dracut is somewhat simpler, I think.
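
If you want to try that swap, it’s roughly the following (package name is Ubuntu’s dracut package - double-check on a box you can easily recover before dropping initramfs-tools):

apt install dracut
dracut -f --regenerate-all    # rebuild the initrd for every installed kernel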

Anyway, once you have a zfsbootmenu EFI image, whether you build it yourself or download it, your system needs a way to actually boot it. Generally you put it on the EFI system partition of your disk (type EF00) and your UEFI firmware will find and boot it. From that point zfsbootmenu does its thing, scanning for ZFS datasets with kernels etc. that it can present for booting.
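
For example, something like this (a sketch only - the ESP mountpoint, disk, partition number and file name are placeholders for your layout):

mkdir -p /boot/efi/EFI/ZBM
cp vmlinuz.EFI /boot/efi/EFI/ZBM/VMLINUZ.EFI
efibootmgr -c -d /dev/sda -p 1 -L "ZFSBootMenu" -l '\EFI\ZBM\VMLINUZ.EFI'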

For remoting in via ssh, you will need Dropbear installed along with some dracut modules, which means you will likely need to build your own image. My own builder (Halfwalker/ZFS-root on GitHub - set up root-on-zfs using whole disk, with dracut and zfsbootmenu) contains the following for setting it up:

mkdir -p /etc/cmdline.d
if [ "${DROPBEAR}" = "y" ] ; then
  echo "------------------------------------------------------------"
  echo " Installing dropbear for remote unlocking"
  echo "------------------------------------------------------------"

  apt-get install --yes dracut-network dropbear-bin
  rm -rf /tmp/dracut-crypt-ssh && mkdir -p /tmp/dracut-crypt-ssh
  cd /tmp/dracut-crypt-ssh && curl -L https://github.com/dracut-crypt-ssh/dracut-crypt-ssh/tarball/master | tar xz --strip=1

  ##comment out references to /helper/ folder from module-setup.sh
  sed -i '/inst \"\$moddir/s/^\(.*\)$/#&/' /tmp/dracut-crypt-ssh/modules/60crypt-ssh/module-setup.sh
  cp -r /tmp/dracut-crypt-ssh/modules/60crypt-ssh /usr/lib/dracut/modules.d

  echo 'install_items+=" /etc/cmdline.d/dracut-network.conf "' >  /etc/zfsbootmenu/dracut.conf.d/dropbear.conf
  echo 'add_dracutmodules+=" crypt-ssh "'                      >> /etc/zfsbootmenu/dracut.conf.d/dropbear.conf
  # Have dracut use main user authorized_keys for access
  echo "dropbear_acl=/home/${USERNAME}/.ssh/authorized_keys"   >> /etc/zfsbootmenu/dracut.conf.d/dropbear.conf

  # With rd.neednet=1 it will fail to boot if no network available
  # This can be a problem with laptops and docking stations, if the dock
  # is not connected (no ethernet) it can fail to boot. Yay dracut.
  # Network really only needed for Dropbear/ssh access unlocking
  # Since we chose to use Dropbear, in this block set neednet=1
  echo 'ip=dhcp rd.neednet=1' > /etc/cmdline.d/dracut-network.conf
else
  # Not using Dropbear, so set neednet=0
  echo 'install_items+=" /etc/cmdline.d/dracut-network.conf "' > /etc/zfsbootmenu/dracut.conf.d/network.conf
  echo 'ip=dhcp rd.neednet=0' > /etc/cmdline.d/dracut-network.conf
fi
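
Those snippets land in /etc/zfsbootmenu/dracut.conf.d/, so the next image build picks them up - with a local ZFSBootMenu install (which is what this setup assumes) that should just be a re-run of:

generate-zbm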

Hmm, I think it’s time to check my mental model. Having followed the Ubuntu zfsbootmenu-with-ZFS-encryption guide, my understanding is that:

  • zfsbootmenu produces an xyz.EFI file that can be put in /boot/efi/EFI/..., which means UEFI boot will work.
  • encrypted zfs root means everything outside of /boot/efi/ will be encrypted - so you need to enter the boot password into zfsbootmenu before you can read anything else.
  • the initramfs is in files named /boot/initrd.img* - so these cannot be read before the encryption password is entered.
  • dracut is a tool used to generate initramfs images.
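
(Those points can be sanity-checked on a running system with something like the following - device and dataset names are just placeholders:)

lsblk -o NAME,FSTYPE,MOUNTPOINT /dev/sda          # ESP at /boot/efi, the rest zfs_member
zfs get encryption,keyformat,keylocation rpool    # native encryption settings on the pool
ls /boot/efi/EFI/ /boot/initrd.img-*              # unencrypted ESP vs initrds inside the pool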

Given that, if I use dracut on the system itself to create an initramfs that runs dropbear to listen for a password, that doesn’t help me - because the initramfs I’ve generated is in /boot/ and so is encrypted and cannot be read until the encryption password has been entered.

Therefore I came to the conclusion that I need to bake dropbear into the xyz.EFI image, by making a custom zfsbootmenu image.

But you saying “you don’t have to build it yourself” makes me suspect that my mental model is incorrect/incomplete.

So if you (or anyone else) could explain where I’ve gone wrong, that would be lovely 🙂 Just assume I’m a simple sysadmin who’s only ever used ext4 and LUKS and has never delved into the initramfs world before.

(And it’s been a while, I know - life got in the way and I’m just picking up this project again.)

zfsbootmenu has the facility to request and use a password to unlock zfs native encryption. I use it now, and it works great. For LUKS you would add a helper script.
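
(With native encryption there’s nothing extra to configure - if the root dataset looks something like the following, ZBM will prompt for the passphrase at boot. The dataset name is just an example:)

zfs get keyformat,keylocation rpool/ROOT/ubuntu   # typically keyformat=passphrase, keylocation=prompt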

Have a look at Halfwalker/ZFS-root on GitHub (set up root-on-zfs using whole disk, with dracut and zfsbootmenu).

My setup handles both zfs native and LUKS encryption, with or without Dropbear.

Thank you again.

I’ve spent a little while looking at the script and it’s a little hard to get the big picture (no criticism - you’ve obviously put quite a bit of work into what is now a rather long bash script).

A couple of questions about the big picture:

  • is dracut run to create the initramfs image on the host system, or within the zfsbootmenu container?
    • it looks like it’s run on the host, but I wanted to be sure.
  • Can the zfsbootmenu EFI image read the initramfs before you enter the encryption password?
    • I would assume not. But as far as I can tell, the dropbear stuff is included by dracut. And dracut creates the initramfs. So if dropbear is going to run before the encryption password is entered, and dropbear is installed into the initramfs, then the initramfs must be available to zfsbootmenu, and so can’t be encrypted. Is that correct?
    • I’m using zfs native encryption rather than LUKS in case that makes a difference.

Thanks again for your help so far.

Dracut is run on the host to create the main host initrd images. You could use initramfs-tools or whatever for that, but I found that dracut is pretty nice and modular for including things.

zfsbootmenu can’t read anything in the dataset if it’s encrypted. It notices the encryption, then prompts for a passphrase. Once it has that it can unlock and look inside the dataset to see if there is a /boot directory that contains typical kernel/initrd files.

Dropbear can be included with zfsbootmenu, and it runs alongside zfsbootmenu - it is not included in the main system initrd image. That allows SSH access to enter an encryption passphrase.

Check Remote Access to ZFSBootMenu — ZFSBootMenu 2.2.2 documentation

Did you ever get this working?

Looking at the docs, it seems pretty clear that your mental model is correct: dropbear and keys need to be baked into the EFI image, e.g.

With the above configuration complete, running generate-zbm should produce a ZFSBootMenu image that contains the necessary components to enable an SSH server in your bootloader.

Like yourself, I’m not quite sure how to get the zbm-builder script to include dropbear and the keys?

Also, do I understand correctly from this:

By default, dropbear will generate random host keys for your ZFSBootMenu initramfs. This is undesirable because SSH will complain about unknown keys every time you reboot. If you wish, you can configure it to copy your regular host keys into the image. However, there are two problems with this: 1. The ZFSBootMenu image will generally be installed on a filesystem with no access permissions, allowing anybody to read your private host keys; and…

that there is no real way to securely identify the dropbear ssh host to prevent MITM attacks? Either you get a meaningless random key, or you have an unencrypted key that anyone with access to the server can read?

Haven’t [yet] installed ZBM on a ZFS-on-root system, but speaking about SSH in general:

If the private keys for dropbear are stored unencrypted, you cannot guarantee that no one with physical access has tampered with your system (“evil maid attack”) - but so what? You wouldn’t know that if dropbear autogenerated new keys at every boot either…

However, I would store keys beforehand. Not the host’s actual keys, but a freshly generated set of “boot keys” that aren’t used for anything else. That way, I’m not protected against evil maids, but I am protected against my SSH packets being rerouted to a malicious server (i.e., if packets are rerouted but the server responds using keys I don’t recognize, I won’t trust it).
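
Sketching what I mean (I haven’t run this against ZBM - the dracut-crypt-ssh option name below is my reading of its README, so treat it as an assumption):

# on the server: dedicated host keys used only by the boot environment
mkdir -p /etc/dropbear
for t in rsa ecdsa ed25519; do
  dropbearkey -t $t -f /etc/dropbear/boot_host_${t}_key
done
# then point the crypt-ssh module at them in the dracut.conf.d snippet, e.g.
#   dropbear_rsa_key=/etc/dropbear/boot_host_rsa_key   (and similarly for the other key types)

# on the client: record the boot host key once, so a later MITM attempt gets flagged
ssh-keyscan -p 222 your.server.address >> ~/.ssh/known_hosts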

I suppose you would need to use a TPM or similar to be more sure (not completely sure) that your system remains untouched…

Good point - evil maid is a lot harder to pull off than a simple MITM with unknown keys. And secure remote attestation with a TPM seems like a step too far for most requirements.

@foobacca Just in case you haven’t solved this, and for anyone else reading this thread - I managed to get the containerized build with ssh working thanks to a little help from ZBM developer AHesford:

The steps are:

sudo su
apt install -y podman                 # the containerised build runs under podman
mkdir -p /root/zfsbootmenu/dropbear/
cd /root/.ssh/
ssh-keygen -t ed25519 -f remote-zbm   # key pair you will use to ssh into ZBM
cp remote-zbm.pub /root/zfsbootmenu/dropbear/authorized_keys
cd /root/zfsbootmenu/
curl -O https://raw.githubusercontent.com/zbm-dev/zfsbootmenu/refs/heads/master/zbm-builder.sh
curl -O https://raw.githubusercontent.com/zbm-dev/zfsbootmenu/refs/heads/master/contrib/remote-ssh-build.sh
chmod a+x *.sh
./remote-ssh-build.sh -- -u

That creates a file

./build/vmlinuz.EFI

The EFI works as a regular console-based ZBM, as well as starting a dropbear server on port 222. It seems to use DHCP to get its IP address, so you will either need to set a static lease or have some way of finding out the IP it got (e.g. from your router). You will need the remote-zbm key generated above to log in:

ssh -i /root/.ssh/remote-zbm -p 222 root@10.4.6.108

This presents you with a prompt:

zfsbootmenu ~ >

THIS IS NOT ZFSBOOTMENU - this is a shell on a machine named zfsbootmenu. To start zfsbootmenu, just type zfsbootmenu and press enter:

zfsbootmenu ~ > zfsbootmenu

From there it works like a normal ZBM, except that as soon as you boot a dataset the SSH connection is dropped (and hangs) as kexec hands off to the new kernel - so you don’t get any subsequent startup log, or any way of seeing startup errors that could cause the machine not to boot. I’m not sure of the best way around this - I guess one could remotely power-cycle the machine, go back into ZBM, import the zpool, chroot into it and look for log / journal messages?
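
Something like this from the ZBM recovery shell, I guess (untested sketch - pool/dataset names are examples, and ZBM may have its own chroot helper that is tidier than doing it by hand):

zpool import -f -R /mnt rpool
zfs load-key -a                      # prompts for the native-encryption passphrase
zfs mount rpool/ROOT/ubuntu
mount -t proc proc /mnt/proc
mount -t sysfs sys /mnt/sys
mount --rbind /dev /mnt/dev
chroot /mnt journalctl -b -1 -p err  # previous boot's errors (needs a persistent journal)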
