iSCSI and NVMe over TCP Target Volblocksizes for VM Storage: Sanity of Proxmox Default 128K?

A bit of a sequel to this: Proxmox's VolBlockSize Default is 16K for QEMU VM Disks. Why?

There’s been a lot of development on the Proxmox storage plugin side to more robustly support iSCSI and NVMe over TCP block storage targets for VMs.

The current iterations of those plugins use the Proxmox-recommended volblocksize of 128K for the zvols backing iSCSI and NVMe over TCP targets.
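
(In case it helps anyone sanity-checking their own setup, here’s a rough sketch of inspecting and overriding that per zvol; the pool and dataset names below are made up, and volblocksize is fixed at creation time, so overriding it only matters for newly created disks.)

```
# Check what volblocksize a zvol actually got (names are placeholders):
zfs get volblocksize tank/vm-100-disk-0

# Create a sparse zvol with an explicit volblocksize instead of the plugin default.
# volblocksize can't be changed after creation; to change an existing disk,
# create a new zvol and copy the data over.
zfs create -s -V 32G -o volblocksize=32K tank/vm-101-disk-0
```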

In the previous thread, at Proxmox's VolBlockSize Default is 16K for QEMU VM Disks. Why? - #6 by mercenary_sysadmin, @mercenary_sysadmin suggested 32K or 64K as a good starting point for volblocksize on VM storage.

Is there something about using iSCSI or NVMe over TCP that makes 128K a better choice, or is Proxmox just choosing … unique … defaults again (the default volblocksize for new local storage is still 16K)?
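
(For comparison on the local side, that 16K default can already be overridden per storage via the blocksize property in /etc/pve/storage.cfg; the storage name and pool below are just examples, and the change only applies to disks created afterwards.)

```
# /etc/pve/storage.cfg -- example zfspool entry
zfspool: local-zfs
        pool rpool/data
        content images,rootdir
        sparse 1
        blocksize 32k
```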

pretty sure they just took the bit between their teeth again, tbh.

Okay, cool. That’s what I thought, but I wanted to ask.

I’m glad to know that iSCSI and NVMe over TCP aren’t more complicated than I thought they already were. :stuck_out_tongue:

(Did you get a new avatar? Great photo. :slight_smile: )