Media Server/Jellyfin

Hello

I’m very much a beginner when it comes to servers, Linux, and ZFS, so please bear with me.

I have a server I built back in 2020 and it’s worked well (enough), but I wanted to get a more robust understanding of my setup, see where improvements can be made, and figure out which things might become issues I need to be aware of.

Current Setup:
i7-9700k
RTX 2060
64GB 2666 (non-ECC) (I thought I had 128GB, but looking at the specs of my mobo, it says it caps out at 64GB; I’m at work currently, so I’m going from memory)
LSI 9300-16i
4TB (x8) WD Enterprise (media)
512GB nvme (OS)
1000W PSU

OS info:
Ubuntu (I don’t remember if I’m on 22.04 or 24.04)
ZFS raidz1
Jellyfin

I don’t do much else on the server, and I also don’t use containers (I haven’t learned much about them and haven’t felt they were needed, yet). I’ve rebuilt the server more than once, either due to running a command that broke something or running an update that somehow corrupted the kernel, and I couldn’t figure out how to get everything running smoothly again. I say that to say: re-imaging hasn’t been a slow process (outside of redoing my Jellyfin config setup).

Questions I have:

  1. I don’t run my server 24/7, only when I want to watch something. Does it hurt my drives/server that I don’t have it on all day, or that I turn it on and off more than once a week?
  2. I know I’d benefit from ECC RAM, but for my use case, would I benefit greatly from upgrading my CPU, my RAM speed, or actually moving to ECC?
  3. Are there any commands worth looking into that improve the quality of the server or ZFS? I have set up my compression a certain way (again, I’m at work, so I don’t fully remember how to verbalize that at the moment).
  4. Is it worth creating a scratch disk for my server? (I have extra drives, including SSDs, plus spare SATA ports and ports on the PCIe card.)
  5. Should I think about upgrading my GPU? At the moment I live alone, so I never stream to more than one device at a time. For what I have, is the only benefit more simultaneous streams? It seems to do 4K DV with DTS-HD Master Audio just fine on certain films.
  6. Are there more inexpensive GPUs worth looking at if the 2060 ever shits the bed or I need to repurpose it for anything? My main concern is streaming 4K DV and being able to handle whatever audio channels each movie/TV show has (I only use 5.1 systems at most). I have gotten conflicting information on what minimum GPU is needed to do all of that with no issues.
  7. Is my 1000W PSU overkill for everything I have? Is anything else in my build overkill for my use case?

For me, it’s hard to offer any guidance without knowing what this system isn’t doing for you.

Getting a better understanding of ZFS/containers may help you make your setup more durable to failure. Even something like rsnapshot to protect your jellyfin setup might be a help there if you’re unable or unwilling to set that up in such a way that ZFS snapshots can help you.
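To make that concrete, here’s a minimal sketch of the ZFS-snapshot approach for the Jellyfin config. All names are placeholders: it assumes a hypothetical dataset `tank/jellyfin-config` mounted wherever Jellyfin keeps its config.

```shell
# Placeholder names throughout; assumes Jellyfin's config lives on its
# own dataset, e.g. tank/jellyfin-config mounted at /var/lib/jellyfin.
zfs snapshot tank/jellyfin-config@pre-upgrade

# See what snapshots exist, and roll back after a bad update
# (rolling back to an older snapshot destroys any newer ones):
zfs list -t snapshot tank/jellyfin-config
zfs rollback tank/jellyfin-config@pre-upgrade
```

The point of giving the config its own dataset is that snapshots become cheap, instant restore points, instead of redoing the whole Jellyfin setup after a re-image.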

In terms of 24x7 versus powering it down: if you’re using it for 10% or less of the hours in a week, you’re saving a lot of money by powering it down. Unless you’re powering it up and down several times each day, I’d not worry about it. It’s a 6-year-old system, so what you’re doing has not wrecked the system :wink:

For your use, I’d say ECC is a nice-to-have, not a requirement. For my setup, I have WAY less performance than you do, and stream to more simultaneous endpoints; 3+ is not unusual. But in all cases I have no need to transcode… I just use direct play. You might consider your playback devices before your server upgrades?

Commands? syncoid :slight_smile: Assuming you’re doing regular scrubs, if it’s working, it’s working. A one-user system needs little optimization. Backups, though, are always a fun area to spend some time. ZFS is awesome, and snapshots are great, but they are not backups.
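For reference, a scrub and a syncoid replication run look something like this; `tank` and the destination names are placeholders, not anything from the original setup:

```shell
# Kick off a scrub and check on it; "tank" is a placeholder pool name.
zpool scrub tank
zpool status tank    # shows scrub progress and any checksum errors found

# Ubuntu's zfsutils-linux package typically schedules a monthly scrub
# for you already; see /etc/cron.d/zfsutils-linux.

# syncoid (from the sanoid project) replicates snapshots to another
# pool or host, which is what turns snapshots into actual backups:
syncoid tank/media otherpool/media-backup
</imports-placeholder>
```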

That being said, an 8-disk array with only one disk of redundancy does make me nervous! If you’re looking to refresh your disks, I might consider moving to fewer, larger disks and thinking a bit more about redundancy. Depending on how full your array is, I think you have some options for improvement there.
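As a sketch of what “more redundancy” looks like on a refresh (pool and device names are placeholders; the by-id paths also fix the drive-labeling problem):

```shell
# Placeholder names throughout; raidz2 survives two simultaneous drive
# failures, which matters more as individual drives get larger and
# resilver times get longer.
zpool create tank raidz2 \
  /dev/disk/by-id/ata-MODEL_SERIAL1 \
  /dev/disk/by-id/ata-MODEL_SERIAL2 \
  /dev/disk/by-id/ata-MODEL_SERIAL3 \
  /dev/disk/by-id/ata-MODEL_SERIAL4 \
  /dev/disk/by-id/ata-MODEL_SERIAL5 \
  /dev/disk/by-id/ata-MODEL_SERIAL6
```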

Why would you want a scratch disk? I have a separate mirror pool for data that comes and goes a lot and that I care little about, but if you’re on spinning rust, I’d not be too worried. What would you want to use that disk for? I’ve used a ‘scratch’ pool for things like cache data for containers, or high-speed ingest temp locations… but without a clear use case, don’t make your life more complex.

Can’t speak to the GPU side, since I’m not using one.

PSU sizing is based on what the PC is actually using, with accommodations for peak power. A 2060 is going to pull some juice, but you could do the math, or get a Kill A Watt or something, to give you a better idea of power usage. I don’t like to be at the edge of a PSU’s capacity, but there is a band in which they are most efficient.
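Doing that math is quick. The sketch below uses rough spec-sheet guesses, not measured numbers, so treat every figure as an assumption to check against your own parts:

```shell
# Back-of-envelope peak power; all numbers are rough spec-sheet guesses.
cpu=95              # i7-9700K TDP (boost can exceed this)
gpu=160             # RTX 2060 board power, roughly
drives=$((8 * 10))  # ~10 W per 3.5" spinning drive under load
hba=15              # LSI 9300-16i HBA
misc=50             # board, RAM, NVMe, fans
total=$((cpu + gpu + drives + hba + misc))
echo "rough peak: ${total} W"
```

Even doubling that estimate for boost spikes, spin-up surge, and PSU efficiency still lands under 1000W, which is why the existing unit reads as comfortable headroom rather than pure waste.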

My advice would be to identify areas where the system is letting you down, and solve those. Don’t worry so much about chasing ‘tech fads’ or ‘common advice’. ZFS/Linux/etc. has so much scope for learning — find an avenue you enjoy and go wild!

Thank you for the advice.

Let me answer some of your questions…

  1. The devices are as follows: Nvidia Shield Pro; Fire Stick (1080p media or lower); main desktop computer (1440p media or lower).
  2. I don’t know if I’d want a scratch disk. I’ve looked up the concept before and wasn’t 100% sure whether I’d need one for my use case; since I have extra drives, that’s why I’m asking. I do understand the idea of not spending more than I need if everything is already being addressed.
  3. After I posted, I realized my PSU might actually be overkill. I need to recheck a PSU calculator, but I think I only really needed maybe 700 or 750W for my setup. Since I plan to replace my 4TB drives with an array of 6 or 8 10TB drives (I plan to get a server rack case, and I also wanted headroom in case I decide to start throwing all of my music onto the server), I think 1000W will be right.
  4. I wanted to be preventative because I already had a situation a few years ago where one of my drives failed, and I had no idea which one since none of them were labeled correctly; I also realized I had never checked the health of the drives before setting ZFS up. So I went from 2TB (x16) to 4TB (x8) and checked the health of all of them beforehand (soooooooo long of a process). I don’t feel the need to go crazy, but I don’t know what I don’t know, so I figured I’d save myself the headache by asking those who most likely know way more than I do on this subject.
  5. Last question: if the RAM I have is overkill, what would be the sweet spot, given that the only other plan I have for this build (if the opportunity comes along) is a self-hosted security cam setup for home use?
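On the drive-labeling headache in point 4: building or importing the pool with stable by-id names puts each drive’s model and serial right in `zpool status`, and a SMART pass covers the health check. A sketch, assuming smartmontools is installed (`apt install smartmontools`):

```shell
# Map kernel names (sda, sdb, ...) to model/serial-based names; these
# by-id names are stable across reboots and recabling.
ls -l /dev/disk/by-id/ | grep -v part

# Quick health pass over every SATA drive before (and after) pool setup:
for d in /dev/sd?; do
    echo "== $d =="
    smartctl -H "$d"   # overall SMART health verdict for the drive
done
```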

5.) I’d say making the move to ECC for your setup is overkill, but having more RAM is usually a good idea for ZFS.

That being said, if you’re running the system for a couple hours at a time, ZFS likely doesn’t have the time to fill up all that memory to give you the max performance it’s capable of. You should be able to monitor RAM use on your system and see how well ZFS is doing in that regard. Ubuntu and Jellyfin aren’t going to use a ton of that 64GB, so you’ll see ZFS start to soak it up as the system runs. I also have 64GB in my main home server; when the system boots, VMs and containers start and soak up what they need, and then, over the next few hours, ZFS just keeps going. I have a line graph showing a fairly constant rate going up and up until it hits the limits set.
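A quick way to watch that on Linux: the ARC (ZFS’s RAM cache) exposes its current size and ceiling in /proc. A sketch, assuming the ZFS module is loaded (`arc_summary` ships with most ZFS packages):

```shell
# Current ARC size vs. its configured maximum, in GiB:
awk '/^size|^c_max/ { printf "%s: %.1f GiB\n", $1, $3 / 2^30 }' \
    /proc/spl/kstat/zfs/arcstats

# Or the friendlier rollup, if installed:
arc_summary | head -n 25
```

Watching `size` creep toward `c_max` over a session tells you how much of that 64GB ZFS actually gets to use before you power down.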

(Just did a quick check - My system takes 18GB of RAM to get to ‘everything is running’. Then, ZFS soaks up another 42GB over about 6hrs to ‘fill’ the RAM cache.)

I expect my server to be using at least 60GB of RAM all the time. For you, if the purpose of that storage is mainly video files you play back one at a time, your RAM may never fill to its limit.

This is an area where keeping the system on can improve performance, but if performance isn’t an issue, I’d still rather save the money :slight_smile:

My first ZFS arrays were built on machines with 2GB of non-ECC RAM and 8TB of storage. They worked great for me in my homelab, hosting my data, media, backups from other sources, etc.

When I put in a ZFS array to use as iSCSI storage for a couple hundred VMs at work, I built the system very differently.

For someone still building up their experience with ZFS and Linux, it can be hard to separate the ‘gotta-haves’ for all ZFS storage from the ‘gotta-haves’ for enterprise ZFS storage. But you’ve found a great spot to learn and ask questions!

Hopefully, you’ll get some traction from others on here too with their own experiences.

Hello

So far what you’ve provided is greatly appreciated. Thank you.