Experience with remote unlock of ZFS encrypted pools

Like everyone, I want to automate as much as I can, including automated spin-up of encrypted pools. Sometimes I have a power outage that lasts longer than my battery backups can cover. My servers shut down, and then boot back up after power is restored. I have natively encrypted pools that I would also like to come back up automatically. Using TrueNAS, they do come up automatically, but that’s because the keys are stored locally on the machine, so the encryption wouldn’t protect anything should the whole server be stolen.

So here is my question/topic for discussion: what solutions have people come up with for this type of issue?

Here is a reddit thread I came across while researching the topic: https://www.reddit.com/r/zfs/comments/w33bss/looking_for_best_practice_for_unlocking_encrypted/

In the thread, someone mentioned an NFS share from another computer hidden in their home, and the OP put a USB stick on the other side of a USB keystone jack, inside the wall. Both are really neat ideas.
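
For the NFS-share idea, the piece that usually isn’t spelled out is how the key share gets mounted before the keys are needed. Here’s a minimal sketch of what that could look like on a Linux box; the hostname, export, and mount point are all made up for illustration:

# Hypothetical /etc/fstab entry: the key lives on another machine in the house
keyhost.lan:/export/zfs-keys  /mnt/keys  nfs  ro,noauto,x-systemd.automount,x-systemd.idle-timeout=60  0  0

With that in place, a dataset’s keylocation can point at file:///mnt/keys/pool.key, and the key simply isn’t there if the server ever leaves the house.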

I’ve set up the dataset with a passphrase, and it unlocks from a key file. I put the passphrase in a file on another machine that I boot up.

This is how I enabled this option:

zfs change-key -l -o keylocation=file:///mnt/machine/zfs.key -o keyformat=passphrase pool_name

-l loads the key first if the dataset is not already unlocked.

Note: I wanted to keep using a passphrase for various reasons, so I chose the passphrase keyformat, but you can generate a raw key instead.
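
If you go the raw route instead, a rough sketch (paths are just examples; a raw key has to be exactly 32 bytes):

# Generate a 32-byte random key on the other machine
dd if=/dev/urandom of=/mnt/machine/zfs-raw.key bs=32 count=1
chmod 600 /mnt/machine/zfs-raw.key

# Point the dataset at the raw key
zfs change-key -l -o keyformat=raw -o keylocation=file:///mnt/machine/zfs-raw.key pool_name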

Then when you run:

zfs mount pool/dataset

It mounts without requesting a passphrase. For years I manually typed in the passphrase, since this machine is only booted for backups, but this method is more convenient, and the key is protected on another machine on an encrypted drive.
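
If you’d rather not run load-key as a separate step, zfs mount can load the key as it mounts, and the keystatus property confirms the result:

# Load the key from keylocation and mount in one step
zfs mount -l pool/dataset

# Should report "available" once the key is loaded
zfs get keystatus pool/dataset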

My current approach is to use a key file on the machine with the pool.

# cat my-key.file
<SOME_SECRET_VALUE>

# ls -l my-key.file
-rw------- root root my-key.file

And when creating my datasets:

zfs create -o encryption=aes-256-gcm -o keylocation=file:///root/my-key.file -o keyformat=passphrase tank/data
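
For anyone copying this, it’s worth confirming the properties took after creation (same dataset name as the example above):

zfs get encryption,keyformat,keylocation,keystatus tank/data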

When I reboot my Debian system, this systemd unit runs:

root@mymachine:~# systemctl cat zfs-load-key.service
# /etc/systemd/system/zfs-load-key.service
[Unit]
Description=Load encryption keys
DefaultDependencies=no
After=zfs-import.target
Before=zfs-mount.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/sbin/zfs load-key -a
StandardInput=tty-force

[Install]
WantedBy=zfs-mount.service

This loads all of the keys described by my datasets’ keylocation property.
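
For completeness, the unit has to be enabled once so it’s pulled in on boot; these are just the standard systemd steps:

systemctl daemon-reload
systemctl enable zfs-load-key.service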

I don’t love this approach, because the key file is sitting on the system. (Owned by root and chmod 600, but still…)

I’m in a homelab setup and I lose power a handful of times per year. I have a small UPS, but sometimes it’s not enough and everything shuts down. If this happens when I’m away on vacation, I need the machine to boot and not be left waiting for keys.

If I don’t rely on a key file and instead use a passphrase that ZFS prompts for… I run into the issue where, at boot, the machine is waiting for me to input that passphrase! I’m not physically present to do so, and I can’t SSH into the machine because it hasn’t gotten far enough into the boot process; it’s stuck waiting on key passphrases! Darn.

In my case, I’m going to build my own PiKVM someday so that when I’m away from home and the power goes out, I can get into my home network and quickly type in the passphrase as the machine reboots. That way I won’t have to let the passphrase sit in a file on that same machine.

What I am doing for my remote servers is using ZFSBootMenu as my bootloader, with a Dropbear SSH server embedded in it, so I can remote in and enter my passphrase over SSH. No KVM needed, and it’s much more responsive than a KVM too.
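
For anyone curious what that looks like: below is roughly the shape of the dracut drop-in used for ZFSBootMenu’s remote-access setup. This assumes the dracut-crypt-ssh module is installed, and the file path, defaults, and port should be double-checked against the ZFSBootMenu remote-access docs.

# Assumed path for a ZFSBootMenu dracut drop-in (check the ZBM docs for the exact location)
# /etc/zfsbootmenu/dracut.conf.d/dropbear.conf

# Pull the dracut-crypt-ssh module (which provides dropbear) into the boot image
add_dracutmodules+=" crypt-ssh "

# dracut-crypt-ssh defaults to listening on port 222 so it doesn't clash with the
# host's normal sshd, and it reads its authorized_keys from a path set in this same conf.
# The boot image also needs networking, e.g. ip=dhcp on the kernel command line.

# From a remote machine, once the server is sitting in the bootloader:
#   ssh -p 222 root@my-remote-server
# then enter the pool passphrase at the ZFSBootMenu prompt.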
