ZFS send (syncoid) - Different size on target

Hi everyone,
I recently installed a Proxmox VE server on an encrypted ZFS mirror, and I use syncoid to replicate the main dataset where I store the volumes of my VMs and containers.

Everything works fine, but the target uses more space than the source. Here is some data:

SOURCE
root@pve:~# zfs get all dpool/DATA/ctvmvols | grep used
dpool/DATA/ctvmvols  used                  115G                      -
dpool/DATA/ctvmvols  usedbysnapshots       680K                      -
dpool/DATA/ctvmvols  usedbydataset         328K                      -
dpool/DATA/ctvmvols  usedbychildren        115G                      -
dpool/DATA/ctvmvols  usedbyrefreservation  0B                        -
dpool/DATA/ctvmvols  logicalused           116G                      -
TARGET
root@pve:~# zfs get all b1pool/DATA/dpool-data-ctvmvols-backups | grep used
b1pool/DATA/dpool-data-ctvmvols-backups  used                  130G                                      -
b1pool/DATA/dpool-data-ctvmvols-backups  usedbysnapshots       3.09M                            -
b1pool/DATA/dpool-data-ctvmvols-backups  usedbydataset         736K                                 -
b1pool/DATA/dpool-data-ctvmvols-backups  usedbychildren        130G                                  -
b1pool/DATA/dpool-data-ctvmvols-backups  usedbyrefreservation  0B                                    -
b1pool/DATA/dpool-data-ctvmvols-backups  logicalused           116G                                      -

The “logicalused” is the same for the source and the target. There is a small difference in “usedbysnapshots” and “usedbydataset”, but “usedbychildren” is 15G bigger on the target (115G vs 130G).

The number of volumes in the two datasets is the same, and so is the number of snapshots (498).

The “compressratio” of the two datasets is the same, but several children have different “compressratio” values (I don’t know why).

It may be important to note that I send the data encrypted and “raw”. Here is the cron command, which runs every hour:

/usr/sbin/syncoid --quiet --no-sync-snap --identifier=dpool-data-ctvmvols-backups --compress=none --sendoptions=Rw --recvoptions=u dpool/DATA/ctvmvols b1pool/DATA/dpool-data-ctvmvols-backups
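For reference, with --sendoptions=Rw and --recvoptions=u, syncoid runs sends roughly equivalent to the following (a sketch; syncoid adds its own incremental and snapshot-selection logic on top):

# -R replicates the dataset tree with its snapshots, -w sends raw (still-encrypted) blocks,
# -u receives without mounting; <snapshot> stands for whichever snapshot is being replicated
zfs send -R -w dpool/DATA/ctvmvols@<snapshot> | zfs receive -u b1pool/DATA/dpool-data-ctvmvols-backups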

Could you help me understand where this 15GB difference comes from?

Thank you
dt89

It may also be important to note that I don’t mount the dataset on the target, and the target doesn’t have the key to decrypt it.
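(A quick way to confirm the target side really has no key loaded, using the dataset names above; just a sketch:)

zfs get -r keystatus,encryptionroot b1pool/DATA/dpool-data-ctvmvols-backups | head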

zpool status dpool ; zpool status b1pool please.

Hi @mercenary_sysadmin, thanks for the reply. Here is the output:

root@pve:~# zpool status dpool ; zpool status b1pool
  pool: dpool
 state: ONLINE
  scan: scrub repaired 0B in 00:00:24 with 0 errors on Sun Jan 12 00:24:26 2025
config:

        NAME                                      STATE     READ WRITE CKSUM
        dpool                                     ONLINE       0     0     0
          mirror-0                                ONLINE       0     0     0
            6f58dfe3-22bb-4140-b23b-fb64855aece3  ONLINE       0     0     0
            a3a8170d-58a4-624c-8570-9a8446b4fba0  ONLINE       0     0     0

errors: No known data errors
  pool: b1pool
 state: ONLINE
  scan: scrub repaired 0B in 00:00:38 with 0 errors on Sun Jan 12 00:24:39 2025
config:

        NAME                                      STATE     READ WRITE CKSUM
        b1pool                                    ONLINE       0     0     0
          mirror-0                                ONLINE       0     0     0
            4d9e6bbc-8385-f340-89f6-ab2639f0729d  ONLINE       0     0     0
            cfdc3ef3-a881-f34e-a23e-824392a14f58  ONLINE       0     0     0

errors: No known data errors

Thank you. Next: zfs list -rt snap dpool | wc -l ; zfs list -rt snap b1pool | wc -l, please.

root@pve:~# zfs list -rt snap dpool | wc -l ; zfs list -rt snap b1pool | wc -l
506
618

but on b1pool I also store some other data, so here is the output for just the dataset I’m talking about:

root@pve:~# zfs list -rt snap dpool/DATA/ctvmvols | wc -l ; zfs list -rt snap b1pool/DATA/dpool-data-ctvmvols-backups | wc -l
506
506

thank you

Interesting. How about ashift on each vdev involved?

Edit: hang on, this is Proxmox, which means ZVOLs. I suspect differences in refreservation settings, which are generally inherited from the parent on the target rather than being set to match the source.

Check that on the specific child datasets where you see higher used on target than source, please!
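Something like this would show them side by side (a sketch using the dataset names from this thread):

zfs get -r -t volume refreservation,volsize,used dpool/DATA/ctvmvols
zfs get -r -t volume refreservation,volsize,used b1pool/DATA/dpool-data-ctvmvols-backups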

Here is the ashift of the two pools:

root@pve:~# zpool get ashift
NAME    PROPERTY  VALUE   SOURCE
b1pool  ashift    12      local
dpool   ashift    12      local

Here I list all “refreservation” values; I exclude “refreservation -” and “refreservation none”. Not much remains, and there is no difference:

root@pve:~# zfs get refreservation | grep -vP 'refreservation\s+-' | grep -v none
NAME                                                                                                                             PROPERTY        VALUE      SOURCE
b1pool/DATA/dpool-data-ctvmvols-backups/vm-100-disk-0                                                                            refreservation  125M       received
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0                                                                            refreservation  3M         received
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-1                                                                            refreservation  16.3G      received

dpool/DATA/ctvmvols/vm-100-disk-0                                                                                                refreservation  125M       local
dpool/DATA/ctvmvols/vm-101-disk-0                                                                                                refreservation  3M         local
dpool/DATA/ctvmvols/vm-101-disk-1                                                                                                refreservation  16.3G      local

Two of the datasets with differing “compressratio” values are:

b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0                                                                            compressratio  1.59x  -

dpool/DATA/ctvmvols/vm-101-disk-0                                                                                                compressratio  2.41x  -

At this point I think it’s useful to list all the properties of these two datasets:

root@pve:~# zfs get all b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0
NAME                                                   PROPERTY              VALUE                                    SOURCE
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  type                  volume                                   -
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  creation              Mon Jan 20 15:38 2025                    -
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  used                  6.31M                                    -
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  available             762G                                     -
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  referenced            816K                                     -
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  compressratio         1.59x                                    -
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  reservation           none                                     default
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  volsize               1M                                       local
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  volblocksize          16K                                      default
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  checksum              on                                       default
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  compression           on                                       default
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  readonly              off                                      default
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  createtxg             610646                                   -
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  copies                1                                        default
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  refreservation        3M                                       received
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  guid                  2325873904905528961                      -
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  primarycache          all                                      default
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  secondarycache        all                                      default
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  usedbysnapshots       2.52M                                    -
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  usedbydataset         816K                                     -
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  usedbychildren        0B                                       -
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  usedbyrefreservation  3M                                       -
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  logbias               latency                                  default
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  objsetid              33112                                    -
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  dedup                 off                                      default
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  mlslabel              none                                     default
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  sync                  standard                                 default
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  refcompressratio      2.55x                                    -
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  written               0                                        -
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  logicalused           1.31M                                    -
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  logicalreferenced     644K                                     -
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  volmode               default                                  default
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  snapshot_limit        none                                     default
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  snapshot_count        none                                     default
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  snapdev               hidden                                   default
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  context               none                                     default
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  fscontext             none                                     default
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  defcontext            none                                     default
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  rootcontext           none                                     default
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  redundant_metadata    all                                      default
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  encryption            aes-256-gcm                              -
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  keylocation           none                                     default
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  pbkdf2iters           0                                        default
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  encryptionroot        b1pool/DATA/dpool-data-ctvmvols-backups  -
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  keystatus             unavailable                              -
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  prefetch              all                                      default
root@pve:~# zfs get all dpool/DATA/ctvmvols/vm-101-disk-0
NAME                               PROPERTY              VALUE                     SOURCE
dpool/DATA/ctvmvols/vm-101-disk-0  type                  volume                    -
dpool/DATA/ctvmvols/vm-101-disk-0  creation              Tue Dec 24  8:54 2024     -
dpool/DATA/ctvmvols/vm-101-disk-0  used                  3.53M                     -
dpool/DATA/ctvmvols/vm-101-disk-0  available             783G                      -
dpool/DATA/ctvmvols/vm-101-disk-0  referenced            232K                      -
dpool/DATA/ctvmvols/vm-101-disk-0  compressratio         2.41x                     -
dpool/DATA/ctvmvols/vm-101-disk-0  reservation           none                      default
dpool/DATA/ctvmvols/vm-101-disk-0  volsize               1M                        local
dpool/DATA/ctvmvols/vm-101-disk-0  volblocksize          16K                       default
dpool/DATA/ctvmvols/vm-101-disk-0  checksum              on                        default
dpool/DATA/ctvmvols/vm-101-disk-0  compression           on                        default
dpool/DATA/ctvmvols/vm-101-disk-0  readonly              off                       default
dpool/DATA/ctvmvols/vm-101-disk-0  createtxg             108140                    -
dpool/DATA/ctvmvols/vm-101-disk-0  copies                1                         default
dpool/DATA/ctvmvols/vm-101-disk-0  refreservation        3M                        local
dpool/DATA/ctvmvols/vm-101-disk-0  guid                  2916268870370452825       -
dpool/DATA/ctvmvols/vm-101-disk-0  primarycache          all                       default
dpool/DATA/ctvmvols/vm-101-disk-0  secondarycache        all                       default
dpool/DATA/ctvmvols/vm-101-disk-0  usedbysnapshots       308K                      -
dpool/DATA/ctvmvols/vm-101-disk-0  usedbydataset         232K                      -
dpool/DATA/ctvmvols/vm-101-disk-0  usedbychildren        0B                        -
dpool/DATA/ctvmvols/vm-101-disk-0  usedbyrefreservation  3M                        -
dpool/DATA/ctvmvols/vm-101-disk-0  logbias               latency                   default
dpool/DATA/ctvmvols/vm-101-disk-0  objsetid              10679                     -
dpool/DATA/ctvmvols/vm-101-disk-0  dedup                 off                       default
dpool/DATA/ctvmvols/vm-101-disk-0  mlslabel              none                      default
dpool/DATA/ctvmvols/vm-101-disk-0  sync                  standard                  default
dpool/DATA/ctvmvols/vm-101-disk-0  refcompressratio      3.17x                     -
dpool/DATA/ctvmvols/vm-101-disk-0  written               0                         -
dpool/DATA/ctvmvols/vm-101-disk-0  logicalused           852K                      -
dpool/DATA/ctvmvols/vm-101-disk-0  logicalreferenced     572K                      -
dpool/DATA/ctvmvols/vm-101-disk-0  volmode               default                   default
dpool/DATA/ctvmvols/vm-101-disk-0  snapshot_limit        none                      default
dpool/DATA/ctvmvols/vm-101-disk-0  snapshot_count        none                      default
dpool/DATA/ctvmvols/vm-101-disk-0  snapdev               hidden                    default
dpool/DATA/ctvmvols/vm-101-disk-0  context               none                      default
dpool/DATA/ctvmvols/vm-101-disk-0  fscontext             none                      default
dpool/DATA/ctvmvols/vm-101-disk-0  defcontext            none                      default
dpool/DATA/ctvmvols/vm-101-disk-0  rootcontext           none                      default
dpool/DATA/ctvmvols/vm-101-disk-0  redundant_metadata    all                       default
dpool/DATA/ctvmvols/vm-101-disk-0  encryption            aes-256-gcm               -
dpool/DATA/ctvmvols/vm-101-disk-0  keylocation           none                      default
dpool/DATA/ctvmvols/vm-101-disk-0  pbkdf2iters           0                         default
dpool/DATA/ctvmvols/vm-101-disk-0  encryptionroot        dpool/DATA                -
dpool/DATA/ctvmvols/vm-101-disk-0  keystatus             available                 -
dpool/DATA/ctvmvols/vm-101-disk-0  prefetch              all                       default

I didn’t notice anything strange. Sorry for the amount of data and thank you.

Edit: by “anything strange” I meant anything strange other than the space used.

I found this post and checked the logical/physical sector size of the involved disks.

The two disks of dpool have:

Sector size (logical/physical): 512 bytes / 2048 bytes

One disk of b1pool has the same values, but the other one has:

Sector size (logical/physical): 512 bytes / 512 bytes

Could that be the reason for the different used space on the target?
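(For reference, the sector sizes above can be checked with something like this; the device name is a placeholder:)

fdisk -l /dev/sdX | grep 'Sector size'
# or, for all disks at once:
lsblk -d -o NAME,LOG-SEC,PHY-SEC,MODEL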

I don’t know what exactly is going on here, but these two datasets do not share a state. If they did, their logicalused and logicalreferenced would be identical, regardless of the “real” values on-disk.

Let’s see if we can figure out what’s going on with that, focusing on this particular volume. List the GUIDs of all snapshots of that volume on both source and target, like so:

root@box:# zfs get guid -t snap -r dpool/DATA/ctvmvols/vm-101-disk-0 ; zfs get guid -t snap -r b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0

If the volumes share state, you’ll have an exact match of GUIDs all the way down, like this:

root@elden:/# syncoid testpool/source testpool/target
INFO: Sending oldest full snapshot testpool/source@testing to new target filesystem testpool/target (~ 12 KB):
44.2KiB 0:00:00 [9.48MiB/s] [=================================] 350%            
INFO: Sending incremental testpool/source@testing ... syncoid_elden_2025-01-27:16:14:22-GMT-05:00 to testpool/target (~ 4 KB):
3.96KiB 0:00:00 [ 311KiB/s] [===============================> ]  99%            
root@elden:/# zfs get guid -r -t snap testpool/source
NAME                                                         PROPERTY  VALUE                 SOURCE
testpool/source@testing                                      guid      7134551861649807037   -
testpool/source@excludeme                                    guid      13576063299605683996  -
testpool/source@zfs-auto-snap_frequent-2024-12-16-2030       guid      1765660501040374094   -
testpool/source@final                                        guid      12127971319727283526  -
testpool/source@syncoid_elden_2025-01-27:16:14:22-GMT-05:00  guid      2328121616681522776   -
root@elden:/# zfs get guid -r -t snap testpool/target
NAME                                                         PROPERTY  VALUE                 SOURCE
testpool/target@testing                                      guid      7134551861649807037   -
testpool/target@excludeme                                    guid      13576063299605683996  -
testpool/target@zfs-auto-snap_frequent-2024-12-16-2030       guid      1765660501040374094   -
testpool/target@final                                        guid      12127971319727283526  -
testpool/target@syncoid_elden_2025-01-27:16:14:22-GMT-05:00  guid      2328121616681522776   -

Please and thank you. =)

Thank you for the reply. Here is the data:

root@pve:/tmp# cat dpool.txt 
vm-101-disk-0@autosnap_2024-12-30_00:00:18_daily                                    guid      7836276286184933442   -
vm-101-disk-0@autosnap_2024-12-31_00:00:18_daily                                    guid      752900052279089804    -
vm-101-disk-0@autosnap_2025-01-01_00:00:18_monthly                                  guid      16542805295998924966  -
vm-101-disk-0@autosnap_2025-01-01_00:00:18_daily                                    guid      10462650242074416125  -
vm-101-disk-0@autosnap_2025-01-02_00:00:18_daily                                    guid      4302405321952397659   -
vm-101-disk-0@autosnap_2025-01-03_00:00:43_daily                                    guid      2210197864039876893   -
vm-101-disk-0@autosnap_2025-01-04_00:00:43_daily                                    guid      8037501967458255620   -
vm-101-disk-0@autosnap_2025-01-05_00:00:43_daily                                    guid      14861850050250201213  -
vm-101-disk-0@autosnap_2025-01-06_00:00:43_daily                                    guid      18168769528542437296  -
vm-101-disk-0@syncoid_dpool-data-ctvmvols-backups_pve_2025-01-06:21:50:03-GMT01:00  guid      11836351968112792617  -
vm-101-disk-0@autosnap_2025-01-07_00:00:43_daily                                    guid      3170295667296628326   -
vm-101-disk-0@autosnap_2025-01-08_00:00:43_daily                                    guid      10058751247334555601  -
vm-101-disk-0@autosnap_2025-01-09_00:00:43_daily                                    guid      17399204786835698086  -
vm-101-disk-0@autosnap_2025-01-10_00:00:43_daily                                    guid      3226388180167056144   -
vm-101-disk-0@autosnap_2025-01-11_00:00:43_daily                                    guid      16256724092744897456  -
vm-101-disk-0@autosnap_2025-01-12_00:00:43_daily                                    guid      13096518070016822096  -
vm-101-disk-0@autosnap_2025-01-13_00:00:43_daily                                    guid      4680657800116752292   -
vm-101-disk-0@autosnap_2025-01-14_00:00:43_daily                                    guid      123849993377524107    -
vm-101-disk-0@autosnap_2025-01-15_00:00:43_daily                                    guid      2201564376192396841   -
vm-101-disk-0@autosnap_2025-01-16_00:00:43_daily                                    guid      16452897114239755206  -
vm-101-disk-0@autosnap_2025-01-17_00:00:11_daily                                    guid      10424079775072925582  -
vm-101-disk-0@autosnap_2025-01-18_00:00:11_daily                                    guid      1427605906046213338   -
vm-101-disk-0@autosnap_2025-01-19_00:00:11_daily                                    guid      11659530926163197304  -
vm-101-disk-0@autosnap_2025-01-20_00:00:11_daily                                    guid      6490213883981984684   -
vm-101-disk-0@autosnap_2025-01-21_00:00:11_daily                                    guid      14047740210515375910  -
vm-101-disk-0@autosnap_2025-01-22_00:00:11_daily                                    guid      18198980659663793017  -
vm-101-disk-0@autosnap_2025-01-23_00:00:11_daily                                    guid      3188548279853308153   -
vm-101-disk-0@autosnap_2025-01-24_00:00:11_daily                                    guid      2847024878259353088   -
vm-101-disk-0@autosnap_2025-01-25_00:00:11_daily                                    guid      13042361190269267722  -
vm-101-disk-0@autosnap_2025-01-26_00:00:11_daily                                    guid      17475641257687260586  -
vm-101-disk-0@autosnap_2025-01-27_00:00:11_daily                                    guid      8582886626220981277   -
vm-101-disk-0@autosnap_2025-01-27_14:00:11_hourly                                   guid      4603321468709297924   -
vm-101-disk-0@autosnap_2025-01-27_15:00:11_hourly                                   guid      7140166157304598328   -
vm-101-disk-0@autosnap_2025-01-27_16:00:11_hourly                                   guid      3061578781692307103   -
vm-101-disk-0@autosnap_2025-01-27_17:00:11_hourly                                   guid      5479156137072667586   -
vm-101-disk-0@autosnap_2025-01-27_18:00:40_hourly                                   guid      8545395444886304571   -
vm-101-disk-0@autosnap_2025-01-27_19:00:40_hourly                                   guid      9654493492120797524   -
vm-101-disk-0@autosnap_2025-01-27_20:00:40_hourly                                   guid      2113340823411774472   -
vm-101-disk-0@autosnap_2025-01-27_21:00:40_hourly                                   guid      10509568397353438127  -
vm-101-disk-0@autosnap_2025-01-27_22:00:40_hourly                                   guid      10617062356590194170  -
vm-101-disk-0@autosnap_2025-01-27_23:00:40_hourly                                   guid      16146985201201668218  -
vm-101-disk-0@autosnap_2025-01-28_00:00:40_daily                                    guid      3606148056359346785   -
vm-101-disk-0@autosnap_2025-01-28_00:00:40_hourly                                   guid      1440384117139566674   -
vm-101-disk-0@autosnap_2025-01-28_01:00:40_hourly                                   guid      246260317005619277    -
vm-101-disk-0@autosnap_2025-01-28_02:00:40_hourly                                   guid      6310252580912745593   -
vm-101-disk-0@autosnap_2025-01-28_03:00:40_hourly                                   guid      14327276611387959749  -
vm-101-disk-0@autosnap_2025-01-28_04:00:40_hourly                                   guid      9411779362569036393   -
vm-101-disk-0@autosnap_2025-01-28_05:00:40_hourly                                   guid      5860334223435108978   -
vm-101-disk-0@autosnap_2025-01-28_06:00:40_hourly                                   guid      6365765976185870871   -
vm-101-disk-0@autosnap_2025-01-28_07:00:40_hourly                                   guid      16087031199911073659  -
vm-101-disk-0@autosnap_2025-01-28_08:00:40_hourly                                   guid      16878586201611599628  -
vm-101-disk-0@autosnap_2025-01-28_09:00:40_hourly                                   guid      8348029588885603014   -
vm-101-disk-0@autosnap_2025-01-28_10:00:40_hourly                                   guid      9969419299573849636   -
vm-101-disk-0@autosnap_2025-01-28_11:00:40_hourly                                   guid      5931010537736361368   -
vm-101-disk-0@autosnap_2025-01-28_12:00:40_hourly                                   guid      389381074982917015    -
vm-101-disk-0@autosnap_2025-01-28_13:00:40_hourly                                   guid      7739852056731484855   -
root@pve:/tmp# cat b1pool.txt 
vm-101-disk-0@autosnap_2024-12-30_00:00:18_daily                                    guid      7836276286184933442   -
vm-101-disk-0@autosnap_2024-12-31_00:00:18_daily                                    guid      752900052279089804    -
vm-101-disk-0@autosnap_2025-01-01_00:00:18_monthly                                  guid      16542805295998924966  -
vm-101-disk-0@autosnap_2025-01-01_00:00:18_daily                                    guid      10462650242074416125  -
vm-101-disk-0@autosnap_2025-01-02_00:00:18_daily                                    guid      4302405321952397659   -
vm-101-disk-0@autosnap_2025-01-03_00:00:43_daily                                    guid      2210197864039876893   -
vm-101-disk-0@autosnap_2025-01-04_00:00:43_daily                                    guid      8037501967458255620   -
vm-101-disk-0@autosnap_2025-01-05_00:00:43_daily                                    guid      14861850050250201213  -
vm-101-disk-0@autosnap_2025-01-06_00:00:43_daily                                    guid      18168769528542437296  -
vm-101-disk-0@syncoid_dpool-data-ctvmvols-backups_pve_2025-01-06:21:50:03-GMT01:00  guid      11836351968112792617  -
vm-101-disk-0@autosnap_2025-01-07_00:00:43_daily                                    guid      3170295667296628326   -
vm-101-disk-0@autosnap_2025-01-08_00:00:43_daily                                    guid      10058751247334555601  -
vm-101-disk-0@autosnap_2025-01-09_00:00:43_daily                                    guid      17399204786835698086  -
vm-101-disk-0@autosnap_2025-01-10_00:00:43_daily                                    guid      3226388180167056144   -
vm-101-disk-0@autosnap_2025-01-11_00:00:43_daily                                    guid      16256724092744897456  -
vm-101-disk-0@autosnap_2025-01-12_00:00:43_daily                                    guid      13096518070016822096  -
vm-101-disk-0@autosnap_2025-01-13_00:00:43_daily                                    guid      4680657800116752292   -
vm-101-disk-0@autosnap_2025-01-14_00:00:43_daily                                    guid      123849993377524107    -
vm-101-disk-0@autosnap_2025-01-15_00:00:43_daily                                    guid      2201564376192396841   -
vm-101-disk-0@autosnap_2025-01-16_00:00:43_daily                                    guid      16452897114239755206  -
vm-101-disk-0@autosnap_2025-01-17_00:00:11_daily                                    guid      10424079775072925582  -
vm-101-disk-0@autosnap_2025-01-18_00:00:11_daily                                    guid      1427605906046213338   -
vm-101-disk-0@autosnap_2025-01-19_00:00:11_daily                                    guid      11659530926163197304  -
vm-101-disk-0@autosnap_2025-01-20_00:00:11_daily                                    guid      6490213883981984684   -
vm-101-disk-0@autosnap_2025-01-21_00:00:11_daily                                    guid      14047740210515375910  -
vm-101-disk-0@autosnap_2025-01-22_00:00:11_daily                                    guid      18198980659663793017  -
vm-101-disk-0@autosnap_2025-01-23_00:00:11_daily                                    guid      3188548279853308153   -
vm-101-disk-0@autosnap_2025-01-24_00:00:11_daily                                    guid      2847024878259353088   -
vm-101-disk-0@autosnap_2025-01-25_00:00:11_daily                                    guid      13042361190269267722  -
vm-101-disk-0@autosnap_2025-01-26_00:00:11_daily                                    guid      17475641257687260586  -
vm-101-disk-0@autosnap_2025-01-27_00:00:11_daily                                    guid      8582886626220981277   -
vm-101-disk-0@autosnap_2025-01-27_14:00:11_hourly                                   guid      4603321468709297924   -
vm-101-disk-0@autosnap_2025-01-27_15:00:11_hourly                                   guid      7140166157304598328   -
vm-101-disk-0@autosnap_2025-01-27_16:00:11_hourly                                   guid      3061578781692307103   -
vm-101-disk-0@autosnap_2025-01-27_17:00:11_hourly                                   guid      5479156137072667586   -
vm-101-disk-0@autosnap_2025-01-27_18:00:40_hourly                                   guid      8545395444886304571   -
vm-101-disk-0@autosnap_2025-01-27_19:00:40_hourly                                   guid      9654493492120797524   -
vm-101-disk-0@autosnap_2025-01-27_20:00:40_hourly                                   guid      2113340823411774472   -
vm-101-disk-0@autosnap_2025-01-27_21:00:40_hourly                                   guid      10509568397353438127  -
vm-101-disk-0@autosnap_2025-01-27_22:00:40_hourly                                   guid      10617062356590194170  -
vm-101-disk-0@autosnap_2025-01-27_23:00:40_hourly                                   guid      16146985201201668218  -
vm-101-disk-0@autosnap_2025-01-28_00:00:40_daily                                    guid      3606148056359346785   -
vm-101-disk-0@autosnap_2025-01-28_00:00:40_hourly                                   guid      1440384117139566674   -
vm-101-disk-0@autosnap_2025-01-28_01:00:40_hourly                                   guid      246260317005619277    -
vm-101-disk-0@autosnap_2025-01-28_02:00:40_hourly                                   guid      6310252580912745593   -
vm-101-disk-0@autosnap_2025-01-28_03:00:40_hourly                                   guid      14327276611387959749  -
vm-101-disk-0@autosnap_2025-01-28_04:00:40_hourly                                   guid      9411779362569036393   -
vm-101-disk-0@autosnap_2025-01-28_05:00:40_hourly                                   guid      5860334223435108978   -
vm-101-disk-0@autosnap_2025-01-28_06:00:40_hourly                                   guid      6365765976185870871   -
vm-101-disk-0@autosnap_2025-01-28_07:00:40_hourly                                   guid      16087031199911073659  -
vm-101-disk-0@autosnap_2025-01-28_08:00:40_hourly                                   guid      16878586201611599628  -
vm-101-disk-0@autosnap_2025-01-28_09:00:40_hourly                                   guid      8348029588885603014   -
vm-101-disk-0@autosnap_2025-01-28_10:00:40_hourly                                   guid      9969419299573849636   -
vm-101-disk-0@autosnap_2025-01-28_11:00:40_hourly                                   guid      5931010537736361368   -
vm-101-disk-0@autosnap_2025-01-28_12:00:40_hourly                                   guid      389381074982917015    -
vm-101-disk-0@autosnap_2025-01-28_13:00:40_hourly                                   guid      7739852056731484855   -

As you can see, I trimmed the dataset prefix from the snapshot names so I could compare the two lists more easily with vimdiff, and I found no differences. But I also compared the two datasets themselves again, and their GUIDs are different. I’m confused.
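(The lists were prepared roughly like the following; the exact commands are a sketch from memory.)

# strip everything up to the volume name so both lists line up
zfs get -H guid -t snap -r dpool/DATA/ctvmvols/vm-101-disk-0 | sed 's|^[^@]*/||' > /tmp/dpool.txt
zfs get -H guid -t snap -r b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0 | sed 's|^[^@]*/||' > /tmp/b1pool.txt
vimdiff /tmp/dpool.txt /tmp/b1pool.txt

Here is the full property output of both datasets once more: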

root@pve:/tmp# zfs get all dpool/DATA/ctvmvols/vm-101-disk-0
NAME                               PROPERTY              VALUE                     SOURCE
dpool/DATA/ctvmvols/vm-101-disk-0  type                  volume                    -
dpool/DATA/ctvmvols/vm-101-disk-0  creation              Tue Dec 24  8:54 2024     -
dpool/DATA/ctvmvols/vm-101-disk-0  used                  3.72M                     -
dpool/DATA/ctvmvols/vm-101-disk-0  available             779G                      -
dpool/DATA/ctvmvols/vm-101-disk-0  referenced            232K                      -
dpool/DATA/ctvmvols/vm-101-disk-0  compressratio         2.21x                     -
dpool/DATA/ctvmvols/vm-101-disk-0  reservation           none                      default
dpool/DATA/ctvmvols/vm-101-disk-0  volsize               1M                        local
dpool/DATA/ctvmvols/vm-101-disk-0  volblocksize          16K                       default
dpool/DATA/ctvmvols/vm-101-disk-0  checksum              on                        default
dpool/DATA/ctvmvols/vm-101-disk-0  compression           on                        default
dpool/DATA/ctvmvols/vm-101-disk-0  readonly              off                       default
dpool/DATA/ctvmvols/vm-101-disk-0  createtxg             108140                    -
dpool/DATA/ctvmvols/vm-101-disk-0  copies                1                         default
dpool/DATA/ctvmvols/vm-101-disk-0  refreservation        3M                        local
dpool/DATA/ctvmvols/vm-101-disk-0  guid                  2916268870370452825       -
dpool/DATA/ctvmvols/vm-101-disk-0  primarycache          all                       default
dpool/DATA/ctvmvols/vm-101-disk-0  secondarycache        all                       default
dpool/DATA/ctvmvols/vm-101-disk-0  usedbysnapshots       504K                      -
dpool/DATA/ctvmvols/vm-101-disk-0  usedbydataset         232K                      -
dpool/DATA/ctvmvols/vm-101-disk-0  usedbychildren        0B                        -
dpool/DATA/ctvmvols/vm-101-disk-0  usedbyrefreservation  3M                        -
dpool/DATA/ctvmvols/vm-101-disk-0  logbias               latency                   default
dpool/DATA/ctvmvols/vm-101-disk-0  objsetid              10679                     -
dpool/DATA/ctvmvols/vm-101-disk-0  dedup                 off                       default
dpool/DATA/ctvmvols/vm-101-disk-0  mlslabel              none                      default
dpool/DATA/ctvmvols/vm-101-disk-0  sync                  standard                  default
dpool/DATA/ctvmvols/vm-101-disk-0  refcompressratio      3.17x                     -
dpool/DATA/ctvmvols/vm-101-disk-0  written               0                         -
dpool/DATA/ctvmvols/vm-101-disk-0  logicalused           1020K                     -
dpool/DATA/ctvmvols/vm-101-disk-0  logicalreferenced     572K                      -
dpool/DATA/ctvmvols/vm-101-disk-0  volmode               default                   default
dpool/DATA/ctvmvols/vm-101-disk-0  snapshot_limit        none                      default
dpool/DATA/ctvmvols/vm-101-disk-0  snapshot_count        none                      default
dpool/DATA/ctvmvols/vm-101-disk-0  snapdev               hidden                    default
dpool/DATA/ctvmvols/vm-101-disk-0  context               none                      default
dpool/DATA/ctvmvols/vm-101-disk-0  fscontext             none                      default
dpool/DATA/ctvmvols/vm-101-disk-0  defcontext            none                      default
dpool/DATA/ctvmvols/vm-101-disk-0  rootcontext           none                      default
dpool/DATA/ctvmvols/vm-101-disk-0  redundant_metadata    all                       default
dpool/DATA/ctvmvols/vm-101-disk-0  encryption            aes-256-gcm               -
dpool/DATA/ctvmvols/vm-101-disk-0  keylocation           none                      default
dpool/DATA/ctvmvols/vm-101-disk-0  keyformat             hex                       -
dpool/DATA/ctvmvols/vm-101-disk-0  pbkdf2iters           0                         default
dpool/DATA/ctvmvols/vm-101-disk-0  encryptionroot        dpool/DATA                -
dpool/DATA/ctvmvols/vm-101-disk-0  keystatus             available                 -
dpool/DATA/ctvmvols/vm-101-disk-0  snapshots_changed     Tue Jan 28 14:00:41 2025  -
dpool/DATA/ctvmvols/vm-101-disk-0  prefetch              all                       default
root@pve:/tmp# zfs get all b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0
NAME                                                   PROPERTY              VALUE                                    SOURCE
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  type                  volume                                   -
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  creation              Mon Jan 20 15:38 2025                    -
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  used                  6.83M                                    -
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  available             755G                                     -
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  referenced            816K                                     -
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  compressratio         1.51x                                    -
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  reservation           none                                     default
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  volsize               1M                                       local
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  volblocksize          16K                                      default
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  checksum              on                                       default
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  compression           on                                       default
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  readonly              off                                      default
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  createtxg             610646                                   -
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  copies                1                                        default
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  refreservation        3M                                       received
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  guid                  2325873904905528961                      -
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  primarycache          all                                      default
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  secondarycache        all                                      default
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  usedbysnapshots       3.03M                                    -
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  usedbydataset         816K                                     -
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  usedbychildren        0B                                       -
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  usedbyrefreservation  3M                                       -
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  logbias               latency                                  default
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  objsetid              33112                                    -
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  dedup                 off                                      default
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  mlslabel              none                                     default
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  sync                  standard                                 default
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  refcompressratio      2.55x                                    -
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  written               0                                        -
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  logicalused           1.61M                                    -
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  logicalreferenced     644K                                     -
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  volmode               default                                  default
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  snapshot_limit        none                                     default
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  snapshot_count        none                                     default
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  snapdev               hidden                                   default
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  context               none                                     default
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  fscontext             none                                     default
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  defcontext            none                                     default
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  rootcontext           none                                     default
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  redundant_metadata    all                                      default
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  encryption            aes-256-gcm                              -
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  keylocation           none                                     default
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  keyformat             hex                                      -
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  pbkdf2iters           0                                        default
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  encryptionroot        b1pool/DATA/dpool-data-ctvmvols-backups  -
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  keystatus             unavailable                              -
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  snapshots_changed     Tue Jan 28 14:15:03 2025                 -
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  prefetch              all                                      default

Maybe it’s because the VMs keep writing data (e.g. logs), so the datasets are never exactly the same? Although that shouldn’t add up to a difference of about 25G (as it is now).

Edit: the whole difference is 25G, but the difference between source and target for vm-101-disk-0 is only about 3M (3.72M vs 6.83M).

To better understand what is going on, I have now destroyed the target dataset and copied the data again. The difference in used space is still there:

zfs destroy -r b1pool/DATA/dpool-data-ctvmvols-backups
zfs send -R -w dpool/DATA/ctvmvols@autosnap_2025-01-20_14:00:11_hourly | zfs receive -u -s -F b1pool/DATA/dpool-data-ctvmvols-backups
/usr/sbin/syncoid --no-sync-snap --identifier=dpool-data-ctvmvols-backups --compress=none --sendoptions=Rw --recvoptions=u dpool/DATA/ctvmvols b1pool/DATA/dpool-data-ctvmvols-backups
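To narrow down which volumes account for the gap, a per-volume comparison along these lines should help (a sketch using the same dataset names):

zfs list -r -t volume -o name,used,logicalused,refcompressratio dpool/DATA/ctvmvols
zfs list -r -t volume -o name,used,logicalused,refcompressratio b1pool/DATA/dpool-data-ctvmvols-backups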

That’s quite a divergence, although with serious amounts of data maybe it’s no big deal. Usually these variations come from ashift differences or block size differences.

Hmmm. I don’t know. Is the logicalused still different?

Summoning @allan ; this is an interesting one.

Yes:

root@pve:~# zfs get all dpool/DATA/ctvmvols/vm-101-disk-0 | grep logical
dpool/DATA/ctvmvols/vm-101-disk-0  logicalused           1020K                     -
dpool/DATA/ctvmvols/vm-101-disk-0  logicalreferenced     572K                      -
root@pve:~# zfs get all b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0 | grep logical
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  logicalused           1.61M                                    -
b1pool/DATA/dpool-data-ctvmvols-backups/vm-101-disk-0  logicalreferenced     644K                                     -

Edit: the logicalused of the whole synced dataset is the same on both sides:

root@pve:~# zfs get all dpool/DATA/ctvmvols | grep used
dpool/DATA/ctvmvols  used                  119G                      -
dpool/DATA/ctvmvols  usedbysnapshots       552K                      -
dpool/DATA/ctvmvols  usedbydataset         336K                      -
dpool/DATA/ctvmvols  usedbychildren        119G                      -
dpool/DATA/ctvmvols  usedbyrefreservation  0B                        -
dpool/DATA/ctvmvols  logicalused           123G                      -
root@pve:~# zfs get all b1pool/DATA/dpool-data-ctvmvols-backups | grep used
b1pool/DATA/dpool-data-ctvmvols-backups  used                  137G                                      -
b1pool/DATA/dpool-data-ctvmvols-backups  usedbysnapshots       2.81M                                     -
b1pool/DATA/dpool-data-ctvmvols-backups  usedbydataset         768K                                      -
b1pool/DATA/dpool-data-ctvmvols-backups  usedbychildren        137G                                      -
b1pool/DATA/dpool-data-ctvmvols-backups  usedbyrefreservation  0B                                        -
b1pool/DATA/dpool-data-ctvmvols-backups  logicalused           123G                                      -

Edit: and for completeness, here is another VM:

root@pve:~# zfs get all dpool/DATA/ctvmvols/vm-2401-disk-0 | grep used
dpool/DATA/ctvmvols/vm-2401-disk-0  used                  87.3G                     -
dpool/DATA/ctvmvols/vm-2401-disk-0  usedbysnapshots       5.42G                     -
dpool/DATA/ctvmvols/vm-2401-disk-0  usedbydataset         81.9G                     -
dpool/DATA/ctvmvols/vm-2401-disk-0  usedbychildren        0B                        -
dpool/DATA/ctvmvols/vm-2401-disk-0  usedbyrefreservation  0B                        -
dpool/DATA/ctvmvols/vm-2401-disk-0  logicalused           97.2G                     -
root@pve:~# zfs get all b1pool/DATA/dpool-data-ctvmvols-backups/vm-2401-disk-0 | grep used
b1pool/DATA/dpool-data-ctvmvols-backups/vm-2401-disk-0  used                  97.8G                                    -
b1pool/DATA/dpool-data-ctvmvols-backups/vm-2401-disk-0  usedbysnapshots       9.02G                                    -
b1pool/DATA/dpool-data-ctvmvols-backups/vm-2401-disk-0  usedbydataset         88.8G                                    -
b1pool/DATA/dpool-data-ctvmvols-backups/vm-2401-disk-0  usedbychildren        0B                                       -
b1pool/DATA/dpool-data-ctvmvols-backups/vm-2401-disk-0  usedbyrefreservation  0B                                       -
b1pool/DATA/dpool-data-ctvmvols-backups/vm-2401-disk-0  logicalused           97.2G                                    -

The empty dataset that’s the parent of those volumes doesn’t matter one way or another, only the volumes themselves.

I asked Allan Jude about this elsewhere, and his thought was that perhaps, if the target volumes had never been mounted, there were items still in the queue on the target side which had already been completed on the source side, and that could cause a change in the reported free space at each location. I’m not entirely sure that makes sense here, given that these are ZVOLs rather than filesystem datasets, but at this point I’m at a loss.

(With that said, I don’t use ZVOLs in production in the first place, which makes it entirely possible that there are zvol-specific factors in play that I’m unfamiliar with.)

@mercenary_sysadmin Thank you very much for your help, and thanks to @allan.
I have a few questions to wrap up this topic. I’m assuming you also use Proxmox and, as you said, you don’t use ZVOLs.

  1. Do you use directories (ZFS datasets used as directory storage) as Proxmox storage instead of ZFS (zvol) storage?
  2. If so, you can’t take snapshots from within Proxmox, but you can still use sanoid and syncoid to create snapshots and send them to another pool. Right?
  3. Do you use raw files? I should be able to convert ZVOLs to raw files (so far I’ve only found this post); see the sketch after this list. Right?
  4. With the above setup, I would then be able to send encrypted data to a target that can’t decrypt it, while the target keeps the same used space as the source. Right?
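(For question 3, this is roughly what I have in mind; the paths are hypothetical and I assume the VM would have to be shut down first:)

# copy the zvol's block contents into a raw image file (hypothetical paths)
qemu-img convert -f raw -O raw /dev/zvol/dpool/DATA/ctvmvols/vm-101-disk-0 /data/images/vm101/vm-101-disk-0.raw
# or simply:
dd if=/dev/zvol/dpool/DATA/ctvmvols/vm-101-disk-0 of=/data/images/vm101/vm-101-disk-0.raw bs=1M status=progress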

I’m asking all these questions because I’m going to change my setup; I’d like to use something that I completely understand :slight_smile:

If my assumptions are wrong, could you please describe how you would do the setup?

Sorry friend, I am not a Proxmox user. I do a lot of virtualization on top of OpenZFS storage using the KVM hypervisor (the basic tools beneath Proxmox), but I use them on vanilla Ubuntu.

Proxmox and I are both rather opinionated, and our opinions too frequently clash for me to be comfortable with it. :cowboy_hat_face:

Last question and I’m done :slight_smile:
Do you also use a GUI in addition to the CLI? virt-manager?

Absolutely! I install bare Ubuntu, OpenZFS, all the KVM/libvirt framework, and virt-manager. Then it’s like this:

zfs create -o recordsize=64K data/images
zfs create data/images/myfirstvm

Now there’s a decision: qcow2 for full support of QEMU features like migration and hibernation, or raw for highest performance? It’s not the same decision every time, so one or the other of the following:

qemu-img create -f qcow2 /data/images/myfirstvm/myfirstvm.qcow2 100G

or

truncate -s 100G /data/images/myfirstvm/myfirstvm.raw

From here, I’m ready to fire up virt-manager to create the VM’s hardware definitions, and pull a graphical console to do the OS installation on it.
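(If you’d rather define the VM from the CLI instead, virt-install can do roughly the same job; a sketch, and every value below is an assumption:)

virt-install \
  --name myfirstvm \
  --memory 8192 --vcpus 4 \
  --disk path=/data/images/myfirstvm/myfirstvm.qcow2,format=qcow2,bus=virtio \
  --cdrom /data/iso/ubuntu-22.04-live-server-amd64.iso \
  --os-variant ubuntu22.04 \
  --network bridge=br0,model=virtio \
  --graphics spice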

I do typically install a full desktop interface on my VM hosts, and virt-manager along with it, but that’s basically for emergencies only; my normal routine is to connect virt-manager running on my workstation to the server in question, using virt-manager’s built-in support for SSH tunnels.
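(Concretely, that remote connection looks something like this; hostname and user are placeholders:)

# connect a local virt-manager or virsh to a remote libvirt daemon over SSH
virt-manager -c 'qemu+ssh://root@vmhost.example.com/system'
virsh -c 'qemu+ssh://root@vmhost.example.com/system' list --all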

Inside a production environment, SSH is generally all that’s needed. When controlling a host from OUTSIDE its local environment, I typically use WireGuard to get onto the host’s local network, then SSH, rather than exposing the host’s SSH daemon directly to the Internet.
