Using ZFS over SSHFS for second backup instead of zfs send?

Here’s the situation:

I have a server running ZFS with regular snapshots. Backups are stored on an external NAS-like device from the provider that only accepts SSH, NFS and a few other protocols. Even though the underlying filesystem there is also ZFS, customers don’t get access to it.

I’m already running regular backups to that NAS using borgbackup, but it would be nice if I could restore ZFS snapshots directly if needed.

So I could create some scripts that would zfs send the stream into separate files, and use some shell magic to create incremental streams and such. The problem with this approach is that I can’t really “prune” backups the way I do with borg and sanoid. I could work around that by periodically creating a fresh full zfs send stream and removing the old incrementals; that’s all doable, but it also sounds like a lot of complexity.
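For reference, the stream-to-file approach I mean would look roughly like this (a sketch only; pool, dataset, host, and path names are placeholders, not my actual setup):

```shell
#!/bin/sh
# Sketch: store zfs send streams as plain files on the NAS over SSH.
# "tank/data", "backup-nas", and the paths below are hypothetical.
set -eu

SNAP_PREV="tank/data@2024-06-01"
SNAP_NEW="tank/data@2024-06-02"

# One full stream as the base...
zfs send "$SNAP_PREV" | ssh backup-nas 'cat > /backups/zfs/full.zfs'

# ...then incrementals (-i) stacked on top of it.
zfs send -i "$SNAP_PREV" "$SNAP_NEW" \
  | ssh backup-nas 'cat > /backups/zfs/incr-2024-06-02.zfs'

# Restoring means replaying the full stream plus every incremental in order:
#   ssh backup-nas 'cat /backups/zfs/full.zfs' | zfs receive tank/restore
#   ssh backup-nas 'cat /backups/zfs/incr-2024-06-02.zfs' | zfs receive tank/restore
```

This is exactly where the pruning problem shows up: you can’t delete an incremental file from the middle of the chain without breaking everything after it.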

Since all this is intended more as a convenience (the real backup is via borg, after all), I was thinking of using SSHFS instead: mount a directory, create a sparse file with enough capacity, attach a loopback device to it, create a ZFS pool on top, and then use tools like syncoid.
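Concretely, the setup I have in mind would be something like this (again just a sketch; the mount points, sizes, and pool name are made up, and `failmode=continue` is my guess at the property to soften hangs on I/O errors):

```shell
#!/bin/sh
# Sketch: a ZFS pool backed by a sparse file on an SSHFS mount.
# Hostnames, paths, sizes, and the pool name are all placeholders.
set -eu

mkdir -p /mnt/nas
sshfs backup-nas:/backups /mnt/nas

# Sparse backing file: space on the NAS is only allocated as blocks are written.
truncate -s 500G /mnt/nas/zpool-backup.img

# Expose the file as a block device via loopback.
LOOPDEV=$(losetup --find --show /mnt/nas/zpool-backup.img)

# failmode=continue instead of the default "wait", so I/O errors on the
# SSHFS layer return EIO rather than blocking forever.
zpool create -o failmode=continue backup-pool "$LOOPDEV"

# From here, replication would be e.g.:
#   syncoid tank/data backup-pool/data
```

(ZFS can also take the file path directly as a vdev without the loop device; I’m not sure which behaves better over SSHFS.)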

The advantage, IMO, is that ZFS is more likely to notice bitrot, and of course I’d gain the ability to prune snapshots. The disadvantage is that there are a lot of abstraction layers between the disk and ZFS, and I’m not sure whether ZFS will work even semi-reliably under these conditions. I’m especially worried that if there’s an issue with the SSHFS-backed pool, ZFS might hang and take the main pool down with it.

However, I think the risk is worth the reward here: if the pool gets damaged beyond repair, it’s not a big deal, since it’s the secondary backup, not the primary one.

Has anyone ever tried something like this?