Increase boot partition size

I have an Ubuntu 22.04 system with root on ZFS. It was installed following the directions here, which specified a 512M boot partition. That has turned out to be far too small; according to here, this partition should be at least 2GB.

root@ubuntuzfs:~# zpool status -L bpool
  pool: bpool
 state: ONLINE
  scan: scrub repaired 0B in 00:00:01 with 0 errors on Mon Dec 11 08:00:02 2023
config:

	NAME           STATE     READ WRITE CKSUM
	bpool          ONLINE       0     0     0
	  mirror-0     ONLINE       0     0     0
	    nvme0n1p2  ONLINE       0     0     0
	    nvme1n1p2  ONLINE       0     0     0

errors: No known data errors
root@ubuntuzfs:~# zpool status -L rpool
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0B in 00:01:04 with 0 errors on Mon Dec 11 08:01:05 2023
config:

	NAME           STATE     READ WRITE CKSUM
	rpool          ONLINE       0     0     0
	  mirror-0     ONLINE       0     0     0
	    nvme0n1p3  ONLINE       0     0     0
	    nvme1n1p3  ONLINE       0     0     0

errors: No known data errors
root@ubuntuzfs:~# fdisk -l | grep nvme
Disk /dev/nvme0n1: 232.89 GiB, 250059350016 bytes, 488397168 sectors
/dev/nvme0n1p1    2048   1050623   1048576   512M EFI System
/dev/nvme0n1p2 1050624   2099199   1048576   512M Solaris /usr & Apple ZFS
/dev/nvme0n1p3 2099200 488397134 486297935 231.9G Solaris /usr & Apple ZFS
Disk /dev/nvme1n1: 232.89 GiB, 250059350016 bytes, 488397168 sectors
/dev/nvme1n1p1    2048   1050623   1048576   512M EFI System
/dev/nvme1n1p2 1050624   2099199   1048576   512M Solaris /usr & Apple ZFS
/dev/nvme1n1p3 2099200 488397134 486297935 231.9G Solaris /usr & Apple ZFS

Is increasing the size of bpool possible?


root@ubuntuzfs:~# zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
bpool   496M   265M   231M        -         -    27%    53%  1.00x    ONLINE  -
rpool   230G  49.3G   181G        -         -    47%    21%  1.00x    ONLINE  -
tank   7.27T  3.57T  3.70T        -         -     7%    49%  1.00x    ONLINE  -
root@ubuntuzfs:~# cat /etc/fstab | grep efi
PARTUUID=7704518c-eb82-4f70-b92e-89ad5cee33ec /boot/efi vfat nofail,x-systemd.device-timeout=1 0 1
root@ubuntuzfs:~# blkid | grep 7704518c-eb82-4f70-b92e-89ad5cee33ec
/dev/nvme0n1p1: LABEL_FATBOOT="EFI" LABEL="EFI" UUID="A796-C5DC" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="7704518c-eb82-4f70-b92e-89ad5cee33ec"

I really don’t want to re-install the operating system. This is my Nextcloud installation, and a lot of work has gone into getting things right. What about replacing one of the mirrors with a correctly partitioned drive, then repeating the process for the other drive?

For increasing mirror pool size: the answer is yes. However, I believe what you really want to ask is whether you can fix this in your situation by resizing the partitions on your disks. The answer is no, not with the data online the whole time; what you’d actually need is to decrease a pool’s size, and ZFS cannot do that (AFAIK).

You can never (practically) move the start of a partition. With conventional filesystems, you need at least 50% free space and have to “juggle” things around (nitpick: possible with even less free space, but that requires even more “juggling”): 1) shrink the filesystem and decrease the partition size, 2) create a new (temporary) partition in the free space now at the end, 3) move the data there, 4) delete the original partition and recreate a new (empty) partition at the desired start point (to allow growing the partitions towards the beginning), 5) move the data back to the new partition, 6) delete the temporary partition and resize. ZFS does not allow decreasing any pool size (as I’ve understood), so this is not possible.
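To make the juggling concrete, here is a rough sketch for a conventional ext4 filesystem on a hypothetical /dev/sdX, with invented sizes; it only prints the commands (dry run), and again, none of this applies to ZFS pools:

```shell
#!/bin/sh
# Hedged sketch only: hypothetical device /dev/sdX, ext4 on partition 2,
# sizes invented. Commands are printed, not executed.
plan_juggle() {
  echo "e2fsck -f /dev/sdX2                      # always check before resizing"
  echo "resize2fs /dev/sdX2 100G                 # 1) shrink the filesystem..."
  echo "parted /dev/sdX resizepart 2 120GiB      #    ...then the partition"
  echo "parted /dev/sdX mkpart tmp 120GiB 240GiB # 2) temporary partition at the end"
  echo "dd if=/dev/sdX2 of=/dev/sdX3 bs=64M      # 3) move the data there"
  echo "parted /dev/sdX rm 2                     # 4) recreate partition 2 at the"
  echo "parted /dev/sdX mkpart data 1MiB 120GiB  #    desired start point"
  echo "dd if=/dev/sdX3 of=/dev/sdX2 bs=64M      # 5) move the data back"
  echo "parted /dev/sdX rm 3                     # 6) drop the temp partition and"
  echo "resize2fs /dev/sdX2                      #    grow the filesystem to fill it"
}
plan_juggle
```

Even for ext4 you’d normally reach for gparted instead of doing this by hand; the point is only how many moves are involved.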

A (full) reinstall should never really be necessary. Just copy the system somewhere else (e.g. with snapshot send/recv, or something like rsync if you are changing filesystem type), create the partitions and pools as you see fit, and then restore from the backup. But this may be what you already had in mind; in that case I may just be splitting hairs here, and apologize.
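For the send/recv route, a minimal sketch, assuming a second ZFS machine reachable as "backuphost" with a receiving pool named "backup" (both placeholders, as is the snapshot name "migrate"); the commands are printed as a dry run so nothing runs by accident:

```shell
#!/bin/sh
# Hedged sketch: replicate both pools to another ZFS machine before
# repartitioning. "backuphost", pool "backup" and snapshot "migrate"
# are placeholders. Drop the echo to actually execute.
SNAP=migrate
plan_backup() {
  for pool in bpool rpool; do
    echo "zfs snapshot -r ${pool}@${SNAP}"
    echo "zfs send -R ${pool}@${SNAP} | ssh backuphost zfs recv -u -d backup/${pool}"
  done
}
plan_backup
```

The -R on send carries the whole dataset tree plus properties, and -u on recv keeps the received datasets from being mounted on the backup host.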

I do have a few questions (these probably don’t help your situation, and are somewhat of a sidestep, but IMHO they are generally valid). I may be missing something, since I am certainly not familiar with all use cases.

The first link specifies a 1GB boot pool size, not 512MiB, so it seems you deviated from the installation instructions in this regard. Also, I don’t see why one actually needs a boot pool at all.

It makes much more sense to just back up /boot instead of using snapshots; it does not (and should not) contain any kind of user or application data, and should be trivial to restore from upstream (or from backups). Making it a RAID array is just overkill. I.e. it just doesn’t benefit from ZFS features?

If /boot is ZFS in any case, and it resides on the same physical devices, why not just make it a dataset on the root pool? That would remove any size constraints (while still allowing separate snapshot management).
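As a sketch of that idea (assuming the root pool is named rpool, as in the output above; whether your bootloader can read rpool’s feature flags is a separate question I’m not addressing here), printed as a dry run:

```shell
#!/bin/sh
# Hedged sketch: /boot as an ordinary dataset on the root pool instead of
# a separate bpool. The dataset name rpool/boot is an arbitrary choice;
# commands are printed, not executed.
plan_boot_dataset() {
  echo "zfs create -o mountpoint=/boot rpool/boot"
  echo "cp -a /boot.old/. /boot/                  # carry over existing kernels"
  echo "zfs snapshot rpool/boot@pre-upgrade       # separate snapshots still work"
}
plan_boot_dataset
```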

As a sidenote, I don’t really see a reason for a separate /boot partition at all, which many distributions, such as Ubuntu, want even with conventional file systems - as long as the bootloader can see / ; alternatively, just make the EFI partition large enough (it’s FAT32, and practically anything can read it). A separate /boot seems mostly like a relic from the olden BIOS days, when systems might not have been able to read the whole disk. It would be really, really difficult to find such a system these days, and even less likely that a need would arise to put these disks into such a system for booting. And when using ZFS for /boot, a separate pool makes even less sense, as such a legacy system would not be able to read ZFS anyway!?

In conclusion: I’d never make a separate /boot partition these days, as I don’t really see any benefit (just use a regular folder on /, or the EFI partition); but YMMV! (I’d like to hear your reasoning for using a separate /boot pool.)

As for a proper /boot partition size, in case you want to use one: 2GiB is overkill, 1GiB should be plenty, and for a file server (or any kind of server) even 512MiB should be more than enough. Servers should not be machines that test a multitude of different bootloaders and kernel versions; they should have a single kernel, a simple loader if any, and perhaps at most one fail-safe kernel. However, there is no one-size-fits-all solution; if your system works, just use it and don’t worry.

You don’t have to, at least not insofar as getting things back the way they were.

First, use ZFS replication (e.g. syncoid) to replicate all datasets from both bpool and rpool to another ZFS system. Then reinstall Ubuntu, but don’t worry about anything beyond getting the partition sizes the way you want them.

Once you’ve reinstalled Ubuntu, boot from the USB thumbdrive you installed it with, and pop a shell. Now destroy all the datasets underneath both bpool and rpool, then replace them by replicating your backups back in. Once that finishes, reboot.
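From the live-USB shell, that step might look roughly like this - a sketch under assumptions, not a recipe: "backuphost", the backup pool layout, the snapshot name "@migrate" and the BOOT/ROOT dataset names are all placeholders that will differ on a real system, so it only prints the commands:

```shell
#!/bin/sh
# Hedged sketch: from the installer's live environment, import the fresh
# pools, drop the freshly installed datasets, and receive the old ones
# back. All names are placeholders; commands are printed, not executed.
plan_restore() {
  echo "zpool import -f -R /mnt bpool"
  echo "zpool import -f -R /mnt rpool"
  echo "zfs list -H -o name -r bpool rpool   # see what the installer created"
  echo "zfs destroy -r bpool/BOOT            # dataset names vary; check first!"
  echo "zfs destroy -r rpool/ROOT"
  echo "ssh backuphost zfs send -R backup/bpool/BOOT@migrate | zfs recv -u bpool/BOOT"
  echo "ssh backuphost zfs send -R backup/rpool/ROOT@migrate | zfs recv -u rpool/ROOT"
  echo "zpool export bpool && zpool export rpool && reboot"
}
plan_restore
```

The -R/altroot import keeps the restored system from mounting over the live environment, and exporting cleanly before reboot avoids a forced-import prompt on first boot.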

Presto: your old system, on different partition sizes.
