@foobacca Just in case you haven’t solved this, and for anyone else reading this thread - I managed to get the containerized build with SSH working, thanks to a little help from ZBM developer AHesford:
The steps are:

```sh
sudo su
apt install -y podman
mkdir -p /root/zfsbootmenu/dropbear/
cd /root/.ssh/
ssh-keygen -t ed25519 -f remote-zbm
cp remote-zbm.pub /root/zfsbootmenu/dropbear/authorized_keys
cd /root/zfsbootmenu/
curl -O https://raw.githubusercontent.com/zbm-dev/zfsbootmenu/refs/heads/master/zbm-builder.sh
curl -O https://raw.githubusercontent.com/zbm-dev/zfsbootmenu/refs/heads/master/contrib/remote-ssh-build.sh
chmod a+x *.sh
./remote-ssh-build.sh -- -u
```
That creates the file `./build/vmlinuz.EFI`.
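To actually boot it, copy the image onto your EFI system partition and register a boot entry. A minimal sketch, assuming the ESP is mounted at /boot/efi and is partition 1 of /dev/sda (both assumptions - adjust for your layout):

```sh
# Copy the image onto the ESP (the mount point and disk/partition below
# are assumptions - substitute your own)
mkdir -p /boot/efi/EFI/zbm
cp ./build/vmlinuz.EFI /boot/efi/EFI/zbm/vmlinuz.EFI

# Register a UEFI boot entry pointing at the new image
efibootmgr -c -d /dev/sda -p 1 -L "ZFSBootMenu (SSH)" -l '\EFI\zbm\vmlinuz.EFI'
```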
The EFI works as a regular console-based ZBM, as well as starting a dropbear server on port 222. It seems to use DHCP to get its IP address, so you will either need to set a static lease or have some way of discovering the address it was given (e.g. from your router, or by scanning for it as sketched below). You will need the remote-zbm key generated above to log in:
```sh
ssh -i /root/.ssh/remote-zbm -p 222 root@10.4.6.108
```
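If your router can’t tell you the address, one option is to scan the LAN for the dropbear port - a sketch assuming the 10.4.6.0/24 subnet used in the example above:

```sh
# Scan the LAN for a host with the dropbear port open (the 10.4.6.0/24
# subnet is taken from this example - substitute your own network)
nmap -p 222 --open 10.4.6.0/24
```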
Logging in presents you with a prompt:

```
zfsbootmenu ~ >
```
THIS IS NOT ZFSBOOTMENU - this is a shell on a machine named zfsbootmenu. To start ZFSBootMenu, just type `zfsbootmenu` and press enter:

```
zfsbootmenu ~ > zfsbootmenu
```
From there it works like a normal ZBM, except that as soon as you boot a dataset, the SSH connection is dropped (and hangs) as kexec hands off to the new kernel - so you get no subsequent startup log and no way of seeing startup errors that could cause the machine not to boot. I’m not sure of the best way around this - I guess one could remote power cycle the machine, go back into ZBM, import the zpool, chroot into it and look for log / journal messages?
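For what it’s worth, a rough sketch of that recovery path from the ZBM shell - the pool name zroot and the dataset zroot/ROOT/debian are hypothetical, substitute your own:

```sh
# Import the pool without mounting anything, relative to an alternate root
# (zroot and zroot/ROOT/debian are hypothetical names - use your own; add
# -f if the pool complains it was last used by another system)
zpool import -N -R /mnt zroot
# If the dataset is encrypted, load its key first: zfs load-key zroot/ROOT/debian
zfs mount zroot/ROOT/debian

# Bind the virtual filesystems so tools behave inside the chroot
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys /mnt/sys

# Read the journal from the failed boot (requires a persistent journal;
# use --list-boots to pick the right boot ID)
chroot /mnt journalctl --list-boots
chroot /mnt journalctl -b -1 -p err

# Clean up before rebooting
umount /mnt/dev /mnt/proc /mnt/sys
zpool export zroot
```

That at least lets you read whatever the failed boot managed to log, provided the system keeps a persistent journal.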