Hi all, on my laptop I have two NVMe drives: a 2 TB one with Linux and ZFS, and a 1 TB one with FreeBSD. I also have an external NVMe in an enclosure that I formatted with ZFS to use as a backup.
This is my situation:

marco@gentsar ~ $ zpool list -v
NAME                                              SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
backup                                            928G   302G   626G        -         -     0%    32%  1.00x  ONLINE  -
  sda2                                            931G   302G   626G        -         -     0%  32.5%      -  ONLINE
lpool                                            1.80T   292G  1.51T        -         -     0%    15%  1.00x  ONLINE  -
  nvme-MSI_M480_PRO_2TB_511240829130000340-part3 1.80T   292G  1.51T        -         -     0%  15.9%      -  ONLINE
marco@gentsar ~ $
marco@gentsar ~ $ zfs list
NAME USED AVAIL REFER MOUNTPOINT
backup 302G 597G 384K /backup
backup/bsd 9.62G 597G 384K none
backup/bsd/freebsd-home 384K 597G 384K none
backup/bsd/freebsd-home-marco 503M 597G 503M none
backup/bsd/freebsd-root 9.13G 597G 9.13G none
backup/condivise 180G 597G 180G /backup/condivise
backup/linux 113G 597G 384K none
backup/linux/arch-home 23.9G 597G 23.9G none
backup/linux/arch-root 21.0G 597G 21.0G none
backup/linux/chimera-home 1.67G 597G 1.67G none
backup/linux/chimera-root 13.7G 597G 13.7G none
backup/linux/gentoo-root 31.6G 597G 31.6G none
backup/linux/void-home 9.43G 597G 9.43G none
backup/linux/void-root 11.4G 597G 11.4G none
lpool 293G 1.46T 96K none
lpool/condivise 178G 1.46T 178G legacy
lpool/home 46.2G 1.46T 96K none
lpool/home/arch 22.7G 1.46T 19.9G legacy
lpool/home/chimera 2.23G 1.46T 1.23G legacy
lpool/home/fedora 96K 1.46T 96K none
lpool/home/gentoo 11.9G 1.46T 10.3G legacy
lpool/home/void 9.39G 1.46T 5.37G legacy
lpool/root 68.7G 1.46T 96K none
lpool/root/arch 20.4G 1.46T 16.6G /
lpool/root/chimera 15.2G 1.46T 9.61G /
lpool/root/gentoo 24.5G 1.46T 20.6G /
lpool/root/void 8.54G 1.46T 7.98G /
For snapshots I'm using zrepl, and following its site I created this config:
marco@gentsar ~ $ cat /etc/zrepl/zrepl.yml
# This config serves as an example for a local zrepl installation that
# backups the entire zpool `system` to `backuppool/zrepl/sink`
#
# The requirements covered by this setup are described in the zrepl documentation's
# quick start section which inlines this example.
#
# CUSTOMIZATIONS YOU WILL LIKELY WANT TO APPLY:
# - adjust the name of the production pool `system` in the `filesystems` filter of jobs `snapjob` and `push_to_drive`
# - adjust the name of the backup pool `backuppool` in the `backuppool_sink` job
# - adjust the occurences of `myhostname` to the name of the system you are backing up (cannot be easily changed once you start replicating)
# - make sure the `zrepl_` prefix is not being used by any other zfs tools you might have installed (it likely isn't)
jobs:
  # this job takes care of snapshot creation + pruning
  - name: snapjob
    type: snap
    filesystems: {
      "lpool/root/gentoo": true,
      "lpool/home/gentoo": true,
    }
    # create snapshots with prefix `zrepl_` every 15 minutes
    snapshotting:
      type: periodic
      interval: 15m
      prefix: zrepl_
    pruning:
      keep:
        # fade-out scheme for snapshots starting with `zrepl_`
        # - keep all created in the last hour
        # - then destroy snapshots such that we keep 24 each 1 hour apart
        # - then destroy snapshots such that we keep 14 each 1 day apart
        # - then destroy all older snapshots
        - type: grid
          grid: 1x1h(keep=all) | 24x1h | 14x1d
          regex: "^zrepl_.*"
        # keep all snapshots that don't have the `zrepl_` prefix
        - type: regex
          negate: true
          regex: "^zrepl_.*"

  # This job pushes to the local sink defined in job `backuppool_sink`.
  # We trigger replication manually from the command line / udev rules using
  # `zrepl signal wakeup push_to_drive`
  - type: push
    name: push_to_drive
    connect:
      type: local
      listener_name: backup
      client_identity: gentsar
    filesystems: {
      "lpool/root/gentoo": true,
      "lpool/home/gentoo": true,
    }
    send:
      encrypted: false
    replication:
      protection:
        initial: guarantee_resumability
        # Downgrade protection to guarantee_incremental which uses zfs bookmarks instead of zfs holds.
        # Thus, when we yank out the backup drive during replication
        # - we might not be able to resume the interrupted replication step because the partially received `to` snapshot of a `from`->`to` step may be pruned any time
        # - but in exchange we get back the disk space allocated by `to` when we prune it
        # - and because we still have the bookmarks created by `guarantee_incremental`, we can still do incremental replication of `from`->`to2` in the future
        incremental: guarantee_incremental
    snapshotting:
      type: manual
    pruning:
      # no-op prune rule on sender (keep all snapshots), job `snapshot` takes care of this
      keep_sender:
        - type: regex
          regex: ".*"
      # retain
      keep_receiver:
        # longer retention on the backup drive, we have more space there
        - type: grid
          grid: 1x1h(keep=all) | 24x1h | 360x1d
          regex: "^zrepl_.*"
        # retain all non-zrepl snapshots on the backup drive
        - type: regex
          negate: true
          regex: "^zrepl_.*"

  # This job receives from job `push_to_drive` into `backuppool/zrepl/sink/myhostname`
  - type: sink
    name: backup
    root_fs: "backup/linux"
    serve:
      type: local
      listener_name: backup
This config creates the snapshots correctly, but I don't understand when the snapshots get replicated to the backup pool.
Is anyone using it?
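From the comments in the config I guess that the push_to_drive job only replicates when it is woken up by hand, since its snapshotting is set to manual. This is just a sketch of what I think the workflow is (the wakeup command is the one mentioned in the config comment, I haven't confirmed this is the whole story):

zrepl signal wakeup push_to_drive   # wake the push job so it starts replicating to the sink now
zrepl status                        # interactive view to watch the replication progress

Is that really all that is needed to get the snapshots onto the backup pool, or does something trigger the replication automatically?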