NTFS's default cluster size of 4KiB applies to volumes smaller than 16TB, which may or may not be the case here.
But more importantly, 4KiB clusters eat a LOT of IOPS. You rarely (read: damn near never) want 4KiB clusters / blocks / volblocksizes for a VM, even on storage with 4KiB native sectors: very few workloads actually fragment data that heavily, not even databases.
Also–and again, much like ext4–data under NTFS is stored primarily in extents, not just clusters. An extent is a range of contiguous clusters which is read or written in a single IOP. These tend to average closer to 64KiB.
Even Microsoft SQL Server typically defaults to 64KiB extents: Pages and Extents Architecture Guide - SQL Server | Microsoft Learn
This means you generally want your block size, or volblocksize, to roughly match the typical extent size, not the cluster size. So, 64KiB.
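As a concrete sketch of that recommendation: you can confirm the guest's actual cluster size from inside Windows with `fsutil`, then create the backing zvol on the host with a volblocksize that matches the typical extent size instead. The pool and dataset names here (`tank/vm/win-guest`) and the 100G size are made up for illustration; adjust to your own layout.

```shell
# Inside the Windows guest: report the NTFS cluster size
# ("Bytes Per Cluster") for the C: volume.
#   fsutil fsinfo ntfsinfo C:

# On the ZFS host: create a sparse (-s) 100G zvol with a 64KiB
# volblocksize to match typical NTFS extent sizes.
# NOTE: volblocksize is fixed at creation time; it cannot be
# changed on an existing zvol.
zfs create -s -V 100G -o volblocksize=64k tank/vm/win-guest
```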
This does mean that you'll get a bit of read and write amplification on the occasional very small extent–or on EXTREMELY fragmented NTFS filesystems–which will decrease performance in those cases. It also tells you that you still shouldn't run virtualized filesystems extremely full, even if the host storage has plenty of room–because if you do, the guest will be forced to allocate data that would normally live in large extents as fragmented individual clusters instead!
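To put a rough number on that amplification, here's a back-of-the-envelope sketch–pure arithmetic, not a measurement: with a 64KiB volblocksize, the host touches at least one full block per guest request, so a lone 4KiB cluster costs 16x the data actually asked for, while a full 64KiB extent costs exactly what it reads.

```shell
# Back-of-the-envelope amplification with a 64KiB volblocksize:
# bytes the host must move per bytes the guest actually requested.
volblock=65536
for request in 4096 16384 65536; do
  # every request touches at least one full volblock
  blocks=$(( (request + volblock - 1) / volblock ))
  amp=$(( blocks * volblock / request ))
  echo "${request}B request -> ${amp}x amplification"
done
# prints:
#   4096B request -> 16x amplification
#   16384B request -> 4x amplification
#   65536B request -> 1x amplification
```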
You don’t always get the absolute best performance out of an exact match between guest-level extent (or other IOP) size and host-level blocksize. But that’s usually a very good starting point, and I typically wouldn’t even recommend bothering to try anything smaller than half the typical extent or IOP size, or larger than double.
Half the typical IOP size will prioritize latency at the expense of IOPS and throughput. Double the typical IOP size will prioritize throughput and IOPS efficiency at the expense of small-operation latency. Pick your poison–and, hopefully, employ a royal taster before committing to a great big gulp in production.
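If you do want to test within that half-to-double window, the candidate list is short. A trivial sketch of the range around a typical 64KiB extent (remember: each candidate means creating a fresh zvol, since volblocksize is fixed at creation):

```shell
# Bracket a typical 64KiB NTFS extent: half (latency-leaning),
# exact match, and double (throughput-leaning).
extent_kib=64
for candidate in $(( extent_kib / 2 )) $extent_kib $(( extent_kib * 2 )); do
  echo "try volblocksize=${candidate}k"
done
# prints:
#   try volblocksize=32k
#   try volblocksize=64k
#   try volblocksize=128k
```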