Samba Performance for MacOS Clients (TrueNAS Scale)

A less-technical friend of mine was looking for a NAS, so I pointed him at the TrueNAS Mini and offered to help him set it up. Setup went swell (we went with TrueNAS Scale because he needed access to his music both over SMB and through Plex, and I don’t know how to do that in Core), but he uses macOS, and we found that because Apple “Thinks Different” about how standard protocols should work, SMB share operations are abysmally slow: about 10MB/s maximum over a wired connection. My Linux laptop could do nearly 100MB/s over wifi.

Some searching led me to find that this is apparently a known issue and that Samba has a “fruity mode” that should fix it, but so far I have been unsuccessful in enabling that mode or in making any other change on his client or the server that increases speeds.

We tried using NFS, but that process is not user-friendly at all on macOS. He dropped 2 grand to have a good NAS experience, and so far this isn’t it. Does anyone have experience making SMB tolerable for Mac clients on TrueNAS Scale?


As you’ve probably seen, this is a pretty well-documented problem. We use Macs at my office, and this is my least favorite part of the experience.

Here are the recommendations from the Samba wiki, which I think are helpful.

There are also client-side things that can help, like disabling writing .DS_Store files to the shares. Check this out.
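The usual client-side tweak is telling Finder not to write .DS_Store files to network volumes at all. A sketch, run on each Mac client (the setting takes effect after logging out and back in):

```shell
# Stop Finder from creating .DS_Store files on network volumes
defaults write com.apple.desktopservices DSDontWriteNetworkStores -bool true
```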

If you can bear it, make changes one by one and test after each; that way you’ll know what is actually making a difference.
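For those before/after measurements, a crude but serviceable test is timing a large sequential write and read with dd from the Mac. A sketch, where /Volumes/nas is a placeholder for your mounted share (macOS’s BSD dd takes lowercase size suffixes like bs=1m; GNU dd on Linux wants bs=1M):

```shell
# Write 1 GiB of zeros to the share, then read it back; dd reports the rate
dd if=/dev/zero of=/Volumes/nas/speedtest.bin bs=1m count=1024
dd if=/Volumes/nas/speedtest.bin of=/dev/null bs=1m

# Clean up the test file
rm /Volumes/nas/speedtest.bin
```

Zeros aren’t representative of compressible-vs-incompressible workloads, but for spotting a 10MB/s-vs-100MB/s gap they’re plenty.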


Thank you so much. I’ll check out these recommendations and see if I can make some headway. I found reference to the fruity mode, but couldn’t find anything this specific, so I appreciate it.

Not that this is an acceptable excuse, but I spent many years administering air-gapped datacenters and have found that my google-fu has suffered greatly.

I maintain Samba servers for a broad mix of computer clients and, while I agree that Samba on the macOS side isn’t as easy as it can be on other OSes, it is possible to get highly performant access to that data.

In my smb.conf file, as part of the [Global] section, I include some tweaks alleged to make macOS performance better:

fruit:aapl = yes
fruit:model = MacSamba
vfs objects = catia fruit streams_xattr
fruit:nfs_aces = no
fruit:zero_file_id = yes
fruit:metadata = stream
fruit:encoding = native

And, in the definitions for each share I also include:

veto files = /._*/.DS_Store/
delete veto files = yes

(These are more for my sanity than anything else…)

I can confirm that none of these changes negatively impacted performance on Windows and Linux clients. And, I can likewise confirm that these changes did improve my SMB access from 40MB/s to closer to 65MB/s.

Recently, though, I made a change to my primary Mac workstation that has me rethinking my testing. All of my testing had been done over a wired connection to my M1 Pro MacBook Pro via a CalDigit ‘USB-C Pro’ dock (which is actually a Thunderbolt 3 dock, connected to the Mac over Thunderbolt). This is the machine where I saw the 40MB/s-to-65MB/s bump.

I have, for unrelated reasons, upgraded my home network to 2.5Gb and bought a cheap USB-C 2.5Gb adapter, which I’ve since connected to the dock and have stopped using the network jack built into the dock.

I expected this not to impact Samba share performance and made no attempt to even test it, but in my normal activities I was transferring some files from the Samba server (Debian 12 with a ZFS ‘RAID10’ of striped mirrors) and saw reliable performance around 110MB/s, essentially saturating the gigabit connection from the server.

I may find the time to do some further testing - I’m curious to know if this network adapter would perform this well when connected at gigabit speed too. I’m also wondering if my speed limits were more a function of the dock, and if I’d see similar performance with other USB connected NICs.

Just my two cents…


Interesting. I can saturate a 2x10GbE SMB multichannel connection between my Macs (2018 Mini and 2022 M1 Studio) and an NVMe pool on my NAS (Ugreen 6800 Pro running 24.10) when copying large files over SMB. That’s using the on-board 10GbE ports on the Macs plus a TB4 10GbE adapter. No SMB tuning done at all.

Maybe macOS is overly chatty and that goes away with fast connections?


Thanks for the links and suggested configs. I’m going to have to take a closer look at these.

Lately, I’ve been trying to track down why Samba server-side copy isn’t enabled by default in TrueNAS for Mac clients (a fruit flag is required in the config, which is missing from TrueNAS’s smb4.conf and can’t easily be set in the GUI or in a way that survives a reboot, short of a post-boot script that makes some API calls to enable it manually).

It turns out that due to the way Apple implements SMB, using Server Side Copy from a Mac client potentially risks unstable SMB transfers happening for … reasons that are off-topic, but that’s a whole other rabbit-hole. Here’s a Samba dev explaining why you probably want to leave SSC for Macs disabled: [Samba] Documentation/Feature Clarification Request: Server Side Copy and VFS_FRUIT

There are two styles of SSC:

  • the “normal” protocol style called copy-chunk, where the copy is
    requested in IO ranges by the client and performed server-side
  • the Apple way enabled by fruit:copyfile where the client requests the
    whole file to be copied in one request to be performed by the server

The problem with the latter is that for large files the copy takes some
time, and meanwhile the client is blocked waiting for IO to complete. If
the copy takes longer than the SMB request timeout (iirc default
30s), the request times out and the client will disconnect the connection.

My recommendation is to stay away from fruit:copyfile for these reasons.
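For what it’s worth, fruit:copyfile already defaults to no in stock Samba, so there’s nothing to turn off unless something upstream enabled it. But if you’re auditing a generated config and want it pinned explicitly, the line goes in the [Global] section alongside the other fruit options:

```
fruit:copyfile = no
```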

But the thing I wanted to mention was that during the whole process of trying to understand why TrueNAS was configured the way it is, I learned from one of the devs’ comments on their forums that they don’t have automated SMB protocol testing for Mac or Windows clients to the same degree that they do for Linux clients: SMB (Samba) Server-Side Copy Support: Enabled for Mac OS? - #38 by Captain_Morgan - TrueNAS General - TrueNAS Community Forums

It’s a great example of where Community testing is needed.

We can automate testing of Linux clients using standard protocols.

When it comes to the matrix of testing each client OS and each feature… it’s not automated. We rely on the Community to test and report.

We do have basic testing of Windows and Macs… but not feature-level testing and OS version testing. That matrix is enormous and changes for every update they do.

So, from that, I think it’s reasonable to assume that TrueNAS’s defaults aren’t optimized for Mac use, and that we’re on our own, assuming all the risk, if we try to change them. Not that we shouldn’t be able to do that, but I think it’s reasonable to expect that iX might not be able to help if something goes wrong.

Maybe macOS is overly chatty and that goes away with fast connections?

I think this is exactly it. Aside from potential optimization issues, modern macOS versions largely assume fast random storage IO; that is, they essentially assume storage IO is happening with SSDs rather than hard drives (this is evident in the way APFS stores metadata, for example; it scatters the metadata across the disk rather than more contiguously [source]). macOS is likely poorly optimized for HDD-based IO, even over a network (NAS).

APFS (the default file system on modern macOS) is optimized for performance and minimal wear on SSDs. It’s been known since at least 2017 that the cost of this is poor performance on spinning-rust HDDs.

I’m sure that has implications for reading and writing to HDD-based Samba shares, but Apple’s own SMB client implementation (SMBX) is also known to be less well-optimized and less featureful than Samba’s (Linux) client or the native Windows one.


This isn’t really going to make an impact if APFS is being run in a VM on top of ZFS storage (or on ZFS storage via iSCSI). ZFS will nod its head sagely at the “location” APFS asks to put sectors/clusters, then go right the hell about writing them wherever ZFS would normally write them anyway, since ZFS is essentially presenting the APFS filesystem with a fake, virtual block address map that ZFS itself translates down to the actual physical sectors of the drives in question.

With that said, if you’re using zvols, you’re going to lose a LOT of performance right there. Zvol performance sucks, a lot. You’re generally much better off using a sparse file (e.g. truncate -s 100G hundredgig.raw) than a zvol.
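The sparse-file trick can be sketched like this (hypothetical filename; the file allocates real blocks only as the guest writes to it):

```shell
# Create a 100G sparse file to use as a VM disk image instead of a zvol
truncate -s 100G hundredgig.raw

# Apparent size is 100G...
ls -lh hundredgig.raw

# ...but actual allocation starts near zero and grows with guest writes
du -h hundredgig.raw
```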

Make sure to set recordsize (or, if you insist on using zvols, god help you, volblocksize) appropriately for your workload. Set it too low and you amplify the IOPS demands of a relatively easy workload that could otherwise have gotten more throughput; set it too high and you wind up with read/write amplification that will cripple more difficult, high-IOPS workloads.
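A sketch with hypothetical pool/dataset names (recordsize is per-dataset and can be changed at any time, though it only affects newly written blocks; volblocksize is fixed at zvol creation):

```shell
# Large sequential IO (media files, big VM backing files): bigger records
zfs set recordsize=1M tank/media

# Small random IO (databases, busy VM images): smaller records
zfs set recordsize=16K tank/vm

# If you must use a zvol, volblocksize can only be set at creation time
zfs create -V 100G -o volblocksize=16K tank/vm-disk
```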