Mountpoints and replicating multiple machines to a single zpool

I have a long-running Debian server (myserver) that has had everything but the OS moved into a zpool (the output below has been edited, so the sizes won’t add up exactly):

user1@myserver:~$ zfs list
NAME                     USED  AVAIL     REFER  MOUNTPOINT
tank                    3.02T   505G      104K  /tank
tank/data               1.50T   505G     6.28M  /tank/data
tank/data/home           511G   505G       96K  /tank/data/home
tank/data/home/user1     259G   505G      259G  /home/user1
tank/data/home/user2    30.8M   505G     10.7M  /home/user2
tank/data/vm             321G   505G      200G  /tank/data/vm
tank/ephemeral          51.6G   505G     4.40G  /tank/ephemeral
tank/reserved            700G  1.18T       96K  /tank/reserved

I’m replicating the server to another machine, but I’m also doing periodic backups to a USB HDD (wd202501), which has a single zpool, also called wd202501, with a dataset called bkmyserver. I formatted the drive and backed up the server with no problems.
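
The backup itself was a zfs send piped into zfs receive, roughly along these lines (the snapshot name here is a placeholder, and the exact flags may not match what I actually ran):

user1@myserver:~$ sudo zfs snapshot -r tank/data@backup-initial
user1@myserver:~$ sudo zfs send -R tank/data@backup-initial | sudo zfs receive -u wd202501/bkmyserver/data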

I also have a laptop (mylaptop) onto which I recently installed ZFSBootMenu and Ubuntu:

user1@mylaptop:~$ zfs list
NAME                      USED  AVAIL  REFER  MOUNTPOINT
zroot                     375G  1.39T   192K  none
zroot/ROOT               10.8G  1.39T   192K  none
zroot/ROOT/ubuntu        10.8G  1.39T  10.8G  /
zroot/home                364G  1.39T  7.62M  /home
zroot/home/user1          364G  1.39T   344G  /home/user1
zroot/home/user2          952K  1.39T   952K  /home/user2

I would like to also back up my laptop to wd202501, in a dataset called bkmylaptop.
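
The plan is to repeat the same send/receive procedure, something like this (again with a placeholder snapshot name):

user1@mylaptop:~$ sudo zfs snapshot -r zroot@backup-initial
user1@mylaptop:~$ sudo zfs send -R zroot@backup-initial | sudo zfs receive -u wd202501/bkmylaptop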

I attached the drive to my laptop and ran zpool import:

user1@mylaptop:~$ sudo zpool import -d /dev/disk/by-id/ata-WDC_REDACTED-REDACTEDREDACTEDREDACTED-part1 wd202501 -N

Before running zfs mount -l wd202501/bkmyserver, I ran zfs list and saw this:

user1@mylaptop:~$ zfs list
NAME                                  USED  AVAIL  REFER  MOUNTPOINT
wd202501                             1.30T  2.21T   104K  /mnt/wd202501
wd202501/bkmyserver                  1.20T  2.21T   200K  /mnt/wd202501/bkmyserver
wd202501/bkmyserver/data             1.20T  2.21T  1.36M  /mnt/wd202501/bkmyserver/data
wd202501/bkmyserver/data/home         496G  2.21T    96K  /tank/data/home
wd202501/bkmyserver/data/home/user1   248G  2.21T   248G  /home/user1
wd202501/bkmyserver/data/home/user2  2.77M  2.21T  2.77M  /home/user2
wd202501/bkmyserver/data/vm           111G  2.21T  67.8G  /mnt/wd202501/bkmyserver/data/vm
wd202501/reserved                     100G  2.31T    96K  /mnt/wd202501/reserved
zroot                                 375G  1.39T   192K  none
zroot/ROOT                           10.8G  1.39T   192K  none
zroot/ROOT/ubuntu                    10.8G  1.39T  10.8G  /
zroot/home                            364G  1.39T  7.62M  /home
zroot/home/user1                      364G  1.39T   344G  /home/user1
zroot/home/user2                      952K  1.39T   952K  /home/user2

Both zroot/home/user1 and wd202501/bkmyserver/data/home/user1 have their mountpoint set to /home/user1. It does not appear that wd202501/bkmyserver/data/home/user1 has mounted over zroot/home/user1, but I had not yet run zfs mount.

[The mountpoint for wd202501/bkmyserver/data/home points to /tank/data/home rather than /home, but I’m not sure whether that matters for this question.]
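
I haven’t dug into where these mountpoint values come from, but I assume something like the following would show whether each one is set locally, received, or inherited:

user1@mylaptop:~$ zfs get -o name,value,source mountpoint zroot/home/user1 wd202501/bkmyserver/data/home/user1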

My questions:

If I run zfs mount, will wd202501/bkmyserver/data/home/user1 clobber zroot/home/user1 on mylaptop?

If I replicate mylaptop’s zroot to wd202501/bkmylaptop, will the two mountpoints conflict with each other? Will they conflict if I replicate mylaptop to my replication server?

Should I just not be doing these mountpoints in the first place?

Where do the mountpoint settings reside? Are they part of the dataset or the zpool?

Is it possible to keep the mountpoints on the source zpool but unset them on the backup zpool? Will they get re-set whenever I replicate a snapshot?

They will not get reset the next time you replicate. I’d recommend running zfs inherit mountpoint on the datasets in question on the backup pool; it should stay that way afterwards.
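
Something along these lines should do it; -r applies the change to the whole backup tree, so every child dataset falls back to inheriting its mountpoint from wd202501/bkmyserver, and the zfs get afterwards lets you confirm the new values and their source (dataset name taken from your listing, adjust as needed):

sudo zfs inherit -r mountpoint wd202501/bkmyserver
zfs get -r -o name,value,source mountpoint wd202501/bkmyserver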

If you inherit the mountpoints but the originals come back after the next replication, you’ll need to stop using the preserve-properties flag (-p) with zfs send. Generally speaking, there isn’t much point in using that flag after the first full replication of a dataset anyway.
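
For example, an incremental update of a single dataset without -p would look like this (snapshot names are placeholders):

sudo zfs send -i zroot/home/user1@backup-1 zroot/home/user1@backup-2 | sudo zfs receive -u wd202501/bkmylaptop/home/user1

(If your backups are recursive replication streams made with zfs send -R, bear in mind that -R always includes properties, regardless of -p.)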