Immich + ZFS Issues

Has anybody used Immich with ZFS? I don’t know if ZFS is the problem but I’m having issues since switching to it.

On my Debian 12 machine using YunoHost, I was running Immich through Docker for about 2 years. The photos were stored on an mdadm RAID10 with an ext4 filesystem. I never had problems.

In the past week or so, I switched to ZFS and I am having issues with Immich not being able to find the storage. If I poke the ZFS storage, suddenly it finds the files.

  • Couldn’t find thumbnails
  • I changed a docker environment variable from /mnt/hermes to /mnt/hermes/ and it worked
  • After about 7 days, the same issue came back, so it wasn’t actually fixed

Now, what seems to be the problem?

  • Are ZFS drives going to sleep?
  • Do I need to adjust settings in Debian or Docker?
  • Or is this an Immich problem?

Details on Server
Four internal 3.5-inch drives, not in USB enclosures.

  pool: hermes
 state: ONLINE
  scan: resilvered 692K in 00:00:01 with 0 errors on Sat Aug 17 22:00:15 2024
config:

	NAME                                   STATE     READ WRITE CKSUM
	hermes                                 ONLINE       0     0     0
	  mirror-0                             ONLINE       0     0     0
	    ata-TOSHIBA_HDWD130_82VGAX9AS      ONLINE       0     0     0
	    ata-TOSHIBA_DT01ABA300V_80VZ0X5AS  ONLINE       0     0     0
	  mirror-1                             ONLINE       0     0     0
	    ata-TOSHIBA_DT01ABA300V_80VZ0VHAS  ONLINE       0     0     0
	    ata-ST3000DM007-1WY10G_ZFN38ALM    ONLINE       0     0     0

errors: No known data errors

docker is pointing to /mnt/hermes/immich-photos

$ zfs list
NAME                    USED  AVAIL  REFER  MOUNTPOINT
hermes                 1.62T  3.69T   128K  /mnt/hermes
hermes/immich-photos    349G  3.69T   349G  /mnt/hermes/immich-photos
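For when it breaks again, I can at least confirm whether the datasets are actually mounted at that moment (if Docker ever comes up before the pool imports, the containers would be looking at the empty mountpoint directory instead of the pool). A minimal sketch:

```shell
# Confirm each dataset is really mounted; "mounted  no" while files
# are still visible would mean something is reading the bare
# mountpoint directory on the root disk instead of the pool.
if command -v zfs >/dev/null 2>&1; then
  out=$(zfs get -r -o name,property,value mounted hermes 2>&1)
else
  out="zfs is not installed on this machine"
fi
echo "$out"
```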

I’d appreciate suggestions or wild speculation. Thank you!

I have been thinking about adding Immich to my self-hosted services too. How do you like it? I saw they’ve recently added 5-star rating EXIF info and are working on a “folder view” similar to Synology Photos; these two things make it very appealing to me.

To confirm- you are not using NFS, correct? I.e., the machine running the immich docker stack is also the machine running the ZFS pool?

What do the immich logs say?

If you bash into the immich container- can you ls to see the files you expect to see?

Are your permissions set correctly?

I have been thinking about adding Immich to my selfhosted services too. How do you like it?

It’s really great. I don’t miss Google Photos at all. There are some papercuts along the way, but it’s better than PhotoPrism in my opinion; I was running that for a few years before moving to Immich. There are occasional breaking changes, but the release notes are detailed and there is a good community around it (though they use Discord, yuck!).

To confirm- you are not using NFS, correct?

Not using NFS.

I.e., the machine running the immich docker stack is also the machine running the ZFS pool?

Immich is running on same machine as the zfs pool.

What do the immich logs say?

Things are normal now. When it can’t find the files it complains about that but after ‘poking immich’ (running a machine learning job, or changing docker-compose.yml a bit and running again), it finds the folders again.

If you bash into the immich container- can you ls to see the files you expect to see?

The files are there.
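(For anyone wanting to reproduce that check, something along these lines works; `immich_server` is the default compose container name and `/usr/src/app/upload` the in-container upload location, so adjust both if your setup differs.)

```shell
# List the upload tree as the container sees it; errors are captured
# so the snippet degrades gracefully if docker or the container is absent.
if command -v docker >/dev/null 2>&1; then
  listing=$(docker exec immich_server ls -la /usr/src/app/upload 2>&1 || true)
else
  listing="docker is not available here"
fi
echo "$listing"
```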

Are your permissions set correctly?

I believe so because it works and then doesn’t. It’s an intermittent problem.


I have some more thoughts. When I was using mdadm, I was doing nightly backups with restic, which means the drives were being used every night.

Since switching to ZFS, I haven’t re-enabled the nightly restic backups. This is why I think it’s something about drives ‘going to sleep’ or …
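If I do re-enable it, a nightly restic job would look something like this crontab entry, which would also read the pool every night and rule out any spin-down theory (the repo path and password file below are placeholders, not my real setup):

```shell
# Example /etc/cron.d/restic-immich entry: nightly backup at 02:30.
# The repository path and password file are placeholders for illustration.
# 30 2 * * * root restic -r /mnt/backup/restic-repo --password-file /root/.restic-pass backup /mnt/hermes/immich-photos
```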

Another strange thing, though, is that this ZFS pool has other datasets on it that Jellyfin uses. I was even using Jellyfin yesterday afternoon while I was having problems with Immich. So the drives should have been awake, unless ZFS sleeps individual datasets? or …

I’m very new to ZFS (less than a month), so I’m trying to understand if there is something I don’t know about. Maybe a tuning setting in debian? docker? or zfs?


Immich is in constant development and may break.

I am using Immich within Docker with read-only access to a ZFS pool for photos, and all its thumbnails and configs are stored on ZFS too. I am not going all-in on any one app, so my pictures are synced with Syncthing and picked up by Immich. My Docker server uses a single hard drive; I am using ZFS to spot drive problems, not for redundancy, as I have another backup server with all the bells and whistles.

As for file access problems, I believe I am having no issues with ZFS. However, some files sometimes show broken thumbnails and some videos do not stream, but as a whole it works well. I am not worried about file integrity in Immich since I use it only for viewing; my pictures are kept separately. You never know whether Immich will be abandoned or a better application will come along.

What does this involve?


Poking ZFS involved recreating the container.

  1. In one case I changed .env file UPLOAD_LOCATION from /mnt/hermes to /mnt/hermes/
  2. In another case, I added the healthcheck to docker-compose.yml and did docker compose up -d
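In other words, both pokes were just container recreates. The direct equivalent, run from the directory with docker-compose.yml (assuming the stock compose service name `immich-server`), would be:

```shell
# Recreating the service has the same effect as editing .env or
# docker-compose.yml and re-running "docker compose up -d".
cmd="docker compose up -d --force-recreate immich-server"
echo "$cmd"   # shown rather than run, since it restarts the service
```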

Well, it’s good to know others have no problems with ZFS and Immich. I guess I’ll just poke it occasionally. I think if I set up the nightly restic backup again, it might help.

This may be an issue, but I cannot confirm because I do not use the upload feature. Immich has access to my pictures folder which is updated externally.

ZFS doesn’t make the disks sleep, just like ext4 doesn’t; it’s just a filesystem. Adding the forward slash to the end of the path has nothing to do with ZFS, only with the path to the directory. If you put the path back and remove the forward slash, does it stop working?

The server could be putting the disks to sleep, but I don’t believe Debian would do this. If the disks were USB they would probably fall asleep, and when you run zfs list you’d have to wait a few seconds for them to wake up before the command lists the datasets. Does that happen? If not, the disks aren’t sleeping.

Also why were you resilvering? Just wondering.

Also, what did you use to transfer the photos? immich-cli? Just wondering.

Anyway, when you say you’re poking ZFS you’re not really poking ZFS. Do keep trying to figure out more and then post again, because there isn’t really any concrete proof or explanation in your post. One person asked if your permissions are set correctly, and ‘I believe so’ isn’t an answer: post the output of ls -al so we can see the permissions, because assuming isn’t the same as checking. Most likely Immich wouldn’t be able to write to the dataset at all if it didn’t have permissions, but post them anyway.
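A concrete way to settle the permissions question is to compare the UID the container runs as against the ownership on disk (again, `immich_server` is the default compose container name and the host path is from your zfs list output; both are assumptions):

```shell
# Compare the UID the Immich process runs as with ownership on disk.
if command -v docker >/dev/null 2>&1; then
  uid=$(docker exec immich_server id -u 2>&1 || true)
else
  uid="docker not available here"
fi
perms=$(ls -aln /mnt/hermes/immich-photos 2>&1 || true)
printf 'container uid: %s\n%s\n' "$uid" "$perms"
```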

When you say ‘I poke ZFS and it finds the files’, do you mean you can’t see photos in Immich at all, but after the ‘poke’ you can? I take it you can see all of the files on the filesystem the whole time? Someone asked this but the answer wasn’t clear. And I don’t mean via docker exec; I mean directly on the ZFS filesystem.

How do you have the storage set up? Are you using the Storage Template, so all of your images keep their filenames and you can list them?

Yeah that’s a lot of questions but none of this is clear.

When you migrated, how did you do it? Is your setup the same as before, or different?

I don’t think this is a ZFS or Debian issue. If the disks are falling asleep, look into BIOS settings or the like; I’m not an expert, as I don’t have these problems unless disks in the ZFS pool are USB.

sudo hdparm -C /dev/sda worked on my Ubuntu laptop and reported drive state is: active/idle. Try that when there’s no activity on the disk, though chances are there’s always activity.

I use camcontrol to check this.

It looks like it’s available on Debian. I didn’t find one for Ubuntu.

https://manpages.debian.org/unstable/freebsd-utils/camcontrol.8.en.html

I don’t think you’re having this problem though.

By the way, how are your RAM and your ARC? You can check the ARC with arcstat(8) — zfsutils-linux — Debian testing — Debian Manpages, but the ARC is capped at 50% of RAM by default, so it probably doesn’t matter.
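On Linux you can also read the ARC numbers straight from the kernel without arcstat; the path below is the standard OpenZFS one, so treat it as an assumption if your build differs:

```shell
# Current ARC size vs. its configured ceiling (c_max), in bytes.
stats=/proc/spl/kstat/zfs/arcstats
if [ -r "$stats" ]; then
  arc=$(awk '$1 == "size" || $1 == "c_max" { print $1, $3 }' "$stats")
else
  arc="no OpenZFS arcstats on this machine"
fi
echo "$arc"
```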

Ok, that’s good to know.

No, zfs list shows up quickly.

I had just added another mirror vdev, so I went from 2 disks to 4 disks in the process of switching over to ZFS.

I used rsync -avh to move the data from my degraded RAID10 to the new ZFS pool.

The second time this happened (after fixing it with the trailing slash last time), this is the log I got. Immich couldn’t find the files or thumbnails on the drive for some unknown reason.

[Nest] 16  - 08/26/2024, 9:20:25 AM   ERROR [Api:LoggerRepository~216bldu7] Unable to send file: Error
Error: ENOENT: no such file or directory, access 'upload/thumbs/1b8c5c23-d175-4544-8d04-226670fa9a60/39/af/39af5efb-4d55-4eb9-bce3-92de566e91c9.webp'
    at async access (node:internal/fs/promises:606:10)
    at async sendFile (/usr/src/app/dist/utils/file.js:55:9)
    at async AssetMediaController.viewAsset (/usr/src/app/dist/controllers/asset-media.controller.js:57:9)
[Nest] 16  - 08/26/2024, 9:20:25 AM    WARN [Api:ExpressAdapter~216bldu7] Content-Type doesn't match Reply body, you might need a custom ExceptionFilter for non-JSON responses
[Nest] 16  - 08/26/2024, 9:20:25 AM   ERROR [Api:ExceptionsHandler~216bldu7] ENOENT: no such file or directory, access 'upload/thumbs/1b8c5c23-d175-4544-8d04-226670fa9a60/39/af/39af5efb-4d55-4eb9-bce3-92de566e91c9.webp'
Error: ENOENT: no such file or directory, access 'upload/thumbs/1b8c5c23-d175-4544-8d04-226670fa9a60/39/af/39af5efb-4d55-4eb9-bce3-92de566e91c9.webp'

[Screenshot of the Immich error, 2024-08-18 10:39]

I think the permissions are right, because it wouldn’t work at all if they were wrong.

total 44
drwxr-xr-x  7 root root  7 Oct 19  2023 .
drwxr-xr-x 10 root root 10 Aug 28 21:11 ..
drwxr-xr-x  4 root root  4 Oct 18  2023 encoded-video
drwxr-xr-x  4 root root  4 Oct 18  2023 library
drwxr-xr-x  3 root root  3 Oct 19  2023 profile
drwxr-xr-x  4 root root  4 Oct 18  2023 thumbs
drwxr-xr-x  4 root root  4 Oct 18  2023 upload

When Immich can’t see the files, I can still see them by browsing through the directories, and it works again after restarting the container. This feels like a strange edge case somewhere between Docker, Immich, and ZFS.

Like you say, I can’t do much at the moment because it is working right now. I wonder what I should capture/screenshot to investigate this again when it does (or hopefully doesn’t) happen again?
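Something like this is what I plan to capture next time it happens (pool name, paths, and the `immich_server` container name are from my setup in this thread):

```shell
# Dump the state worth comparing next time the thumbnails vanish.
# Every command captures its own errors, so the report is written
# even on machines without zfs or docker.
report=/tmp/immich-zfs-debug.txt
{
  date
  echo '--- dataset mount state ---'
  zfs get -r mounted hermes 2>&1 || true
  echo '--- kernel view of the mountpoint ---'
  findmnt /mnt/hermes/immich-photos 2>&1 || true
  echo '--- host view of the thumbs ---'
  ls -la /mnt/hermes/immich-photos/thumbs 2>&1 | head -n 5
  echo '--- container view of the thumbs ---'
  docker exec immich_server ls -la /usr/src/app/upload/thumbs 2>&1 | head -n 5 || true
} >"$report" 2>&1
echo "wrote $report"
```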

Just adding a data point. Have been using Immich on RAIDz for over half a year without issues. OS is Ubuntu 22.04 LTS, Immich is running in docker compose. Recently moved it to its own dataset for better performance (recordsize tuning) and replication. Didn’t have any problems before or after.


Yeah, if it works now, it works. For all we know the database wasn’t updated after you rsynced the data, and Immich didn’t know it had files there until the DB was updated. When you upload files using immich-cli the database gets updated as soon as each image is uploaded; that’s not the case with rsync. But yeah, I really have no idea about those errors.
