Media Server/Jellyfin

Hello

I’m very much a beginner when it comes to servers, Linux, and ZFS, so please bear with me.

I have a server I built back in 2020 and it’s worked well enough, but I wanted to get a more robust understanding of my setup, see where improvements can be made, and figure out which things will be issues I need to be aware of.

Current Setup:
i7-9700k
RTX 2060
64GB 2666 (non-ECC) (I thought I had 128GB, but looking at the specs of my mobo while here at work, it says it caps out at 64GB)
LSI 9300-16i
4TB (x8) WD Enterprise (media)
512GB nvme (OS)
1000W PSU

OS info:
Ubuntu (I don’t remember if I’m on 22.04 or 24.04)
ZFS RAIDZ1
Jellyfin

I don’t do much else on the server, and I don’t use containers (I haven’t learned much about them and haven’t felt they were needed, yet). I’ve had to rebuild the server more than once, either due to running a command that broke something or running an update that somehow corrupted the kernel, after which I couldn’t figure out how to get everything running smoothly again. I say that to say, re-imaging hasn’t been a slow process (outside of re-doing my Jellyfin config setup).

Questions I have:

  1. I don’t run my server 24/7, only when I want to watch something. Is it hurting my drives/server that I don’t have it on all day, or the fact that I turn it on and off more than once a week?
  2. I know ZFS can benefit from ECC RAM, but for my use case would I benefit greatly from upgrading my CPU, my RAM speed, or even moving to ECC?
  3. Are there any commands worth looking into that improve the quality of the server or ZFS? I have set up my compression a certain way (again, at work, so I don’t fully remember how to verbalize that at the moment).
  4. Is it worth creating a scratch disk for my server? (I have extra drives, including SSDs, plus spare SATA ports and ports on the PCIe card.)
  5. Should I think about upgrading my GPU? At the moment I live alone, so I never stream to more than one device at a time. For what I have, is the only benefit more simultaneous streams? It seems to handle 4K DV with DTS-HD Master Audio just fine on certain films.
  6. Are there more inexpensive GPUs worth looking at if the 2060 ever shits the bed, or if I need to repurpose it for anything? My main concern is streaming 4K DV and being able to handle whatever audio channels each movie/TV show has (I only use 5.1 systems at the most). I have gotten conflicting information on what the minimum GPU is to do all of that with no issues.
  7. Is my 1000W PSU overkill for everything I have? Is anything else on my build overkill for my use case?

For me, it’s hard to offer any guidance without knowing what this system isn’t doing for you.

Getting a better understanding of ZFS/containers may help you make your setup more resilient to failure. Even something like rsnapshot to protect your Jellyfin config might help there, if you’re unable or unwilling to set things up in such a way that ZFS snapshots can help you.

In terms of 24x7 versus powering it down… if you’re using it for 10% or less of the hours in a week, you’re saving a lot of money by powering it down. Unless you’re powering it up and down several times each day, I’d not worry about it. It’s a roughly 6-year-old system, so what you’re doing has not wrecked it :wink:

For your use, I’d say ECC is a nice to have, not a requirement. For my setup, I have WAY less performance than you do, and stream to more simultaneous endpoints. 3+ is not unusual. But, in all cases, I have no need to transcode… I just use direct play. You might consider your playback device before your server upgrades?

Commands? syncoid :slight_smile: Assuming you’re doing regular scrubs, if it’s working, it’s working. A one user system needs little optimization. Backups though, are always a fun area to spend some time. ZFS is awesome, and snapshots are great, but they are not backups.
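For anyone who wants something concrete to start from, the hygiene basics are only a few commands. This is a sketch that assumes a pool named tank, so substitute your own pool name:

```shell
sudo zpool scrub tank                  # read every block, verify it against its checksum
sudo zpool status -v tank              # check scrub progress and any errors found
sudo zfs snapshot -r tank@$(date +%F)  # recursive, dated, point-in-time snapshot
sudo zfs list -t snapshot              # list existing snapshots
```

Scrubs usually go on a monthly cron job or systemd timer. Snapshots are cheap to keep around, but they live on the same pool, so they complement rather than replace backups, which is where syncoid replicating them to another machine comes in.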

That being said, an 8-disk array with only one disk of redundancy does make me nervous! If you’re looking to refresh your disks, I might consider moving to fewer, larger disks, and thinking a bit more about redundancy. Depending on how full your array is, I think you have some options for improvement there.

Why would you want a scratch disk? I have a separate mirror pool for data that comes and goes a lot, and that I care little about, but if you’re on spinning rust… I’d not be too worried. What would you want to use that disk for? I’ve used a ‘scratch’ pool for things like cache data for containers, or high-speed ingest temp locations… but without a clear use case, don’t make your life more complex.

Can’t speak to the GPU side, since I’m not using one.

PSU sizing is based on what the PC is actually using, with accommodations for peak power. A 2060 is going to pull some juice, but you could do the math, or get a Kill A Watt or similar meter, to get a better idea of real power usage. I don’t like to be at the edge of a PSU’s capacity, but there is a band in which they are most efficient.

My advice would be to identify areas where the system is letting you down, and solve those. Don’t worry so much about chasing ‘tech fads’ or ‘common advice’. ZFS/Linux/etc. has so much scope for learning. Find an avenue you enjoy and go wild!

Thank you for the advice.

Let me answer some of your questions…

  1. The devices are as follows: Nvidia Shield Pro; Fire Stick (1080p media or lower); main desktop computer (1440p media or lower).
  2. I don’t know if I’d want a scratch disk. I’ve looked up the concept before and wasn’t 100% sure whether I would need one for my use case; since I have extra drives, that’s why I’m asking. I do understand the idea of not spending more than I need if everything is already being addressed.
  3. After I posted, I realized my PSU might actually be overkill. I need to recheck a PSU calculator, but I think I only really needed maybe 700 or 750W for my setup. Since I plan to replace my 4TB drives with an array of six or eight 10TB drives (I plan to get a server rack case, and I also wanted headroom in case I decide to start putting all of my music on the server), I think 1000W will be right.
  4. I wanted to be preventative because I already had a situation a few years ago where one of my drives failed, I had no idea which one since none of them were labeled correctly, and I realized I had never checked the health of the drives before setting ZFS up. So I went from 2TB (x16) to 4TB (x8) and checked the health of all of them beforehand (soooooooo long of a process). I don’t feel the need to go crazy, but I don’t know what I don’t know, so I figured I’d save myself the headache by asking those who most likely know way more than I do on this subject.
  5. Last question: if the RAM I have is overkill, what would be the sweet spot? The only other plan I have for this build (if the opportunity comes along) is a self-hosted security-cam setup for home use.

5.) I’d say making the move to ECC for your setup is overkill, but having more RAM is usually a good idea for ZFS.

That being said, if you’re running the system for a couple of hours at a time, ZFS likely doesn’t have the time to fill up all that memory and give you the max performance it’s capable of. You should be able to monitor RAM use on your system and see how well ZFS is doing in that regard. Ubuntu and Jellyfin aren’t going to use a ton of that 64GB, so you’ll see ZFS start to soak it up as the system runs. I also have 64GB in my main home server; when the system boots, VMs and containers start and soak up what they need, and then, over the next few hours, ZFS just keeps going… I have a line graph showing a fairly constant climb up and up until it hits the limits set.

(Just did a quick check - My system takes 18GB of RAM to get to ‘everything is running’. Then, ZFS soaks up another 42GB over about 6hrs to ‘fill’ the RAM cache.)
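If you want to watch that soak-up yourself, OpenZFS on Linux exposes the ARC’s current size and cap in /proc/spl/kstat/zfs/arcstats. A tiny helper (the function name here is mine) pulls the two numbers out:

```shell
# Print the ARC's current size ("size") and its cap ("c_max") in GiB,
# given an arcstats-format file: lines of "<name> <type> <value>".
arc_gib() {
    awk '$1 == "size" || $1 == "c_max" {
        printf "%s %.1f GiB\n", $1, $3 / (1024 * 1024 * 1024)
    }' "$1"
}

# On a live system with the ZFS modules loaded:
#   arc_gib /proc/spl/kstat/zfs/arcstats
```

There’s also `arc_summary` for a much more detailed report, if it’s installed with your ZFS packages.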

I expect my server to be using at least 60GB of RAM all the time. For you, if the purpose of that storage is mainly video files you play back one at a time, your RAM may never hit its limit.

This is an area where keeping the system on can improve performance .. but if performance isn’t an issue I’d still rather save the money :slight_smile:

My first ZFS arrays were built on machines with 2GB of non-ECC RAM, and 8TB of storage. Worked great for me in my homelab, hosting my data, media, backups from other sources, etc.

When I put in a ZFS array to use as iSCSI storage for a couple hundred VMs at work, I built the system very differently.

For someone still building up their experience with ZFS and Linux, it can be hard to separate the ‘gotta haves’ for all ZFS storage from the ‘gotta haves’ for enterprise ZFS storage. But you’ve found a great spot to learn and ask questions!

Hopefully you’ll get some traction from others on here too, with their own experiences.

Hello

So far what you’ve provided is greatly appreciated. Thank you.

Yes, though it’s not like you need to replace it. Your drives are going to eat about 2-5W apiece while idle, maybe 8W each under load, and as much as 15-20W each while spinning up during boot.

The CPU and GPU will be roughly idle while the drives spin up at boot, so you’re really only worried about the drives under heavy load–give them a little fudge factor, call it 10W each, and you’re only looking at 80W for the drives.

Your 9700K is generally going to pull around 160W under extreme load. Possibly 200W, if overclocked.

The RTX2060 is generally going to draw a maximum of about 175W. But call that 200W also.

That gives you 200W + 200W + 80W–around 480W estimated maximum power consumption, with the system in full on hairdryer mode for both CPU and GPU and with significant load on the drives.
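As a sanity check, that estimate is just the rounded-up figures added together (these wattages are the guesses from this post, not measured values):

```shell
CPU_W=200    # i7-9700K under extreme load, rounded up
GPU_W=200    # RTX 2060, rounded up from ~175 W
DRIVES_W=80  # 8 drives x 10 W fudge factor
TOTAL_W=$((CPU_W + GPU_W + DRIVES_W))
echo "estimated peak draw: ${TOTAL_W} W"
```

A Kill A Watt style meter at the wall would replace all three guesses with one real number.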

You would probably be okay with even a 500W PSU, but I’d typically reach for about a 700W PSU here. That buys you some additional overhead for when the components in the PSU age and it’s no longer capable of supplying quite as much current as it did when brand new.

Do you mean a temporary destination for BitTorrent downloads? If so, no, you don’t need that–just make sure recordsize=1M on the dataset your torrents download to.
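Setting that is a one-liner; tank/torrents here is a hypothetical dataset name, so substitute your own:

```shell
sudo zfs set recordsize=1M tank/torrents
sudo zfs get recordsize tank/torrents   # verify the setting
# Note: recordsize only affects blocks written after the change,
# so set it before the downloads land (or rewrite existing files).
```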

If that’s not what you mean, please clarify.

Yes, it’s harder on the machine than running 24/7.

But if you’re happy with what you’re doing now, you are saving a bit of power. Idle draw at the wall on that box is probably going to be somewhere around 60-80W.

Round it up, call it 100W. That’s 0.1 kW, or 2.4 kWh per day. Where I live, that’s roughly thirty cents per day. If you only have the server powered on eight hours per day instead of 24, you’re saving about twenty cents a day, or roughly $73 a year.
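Worked through explicitly, with an assumed rate of 12.5 cents/kWh (substitute your own utility rate):

```shell
# Electricity cost sketch: 100 W average draw at the wall,
# at an assumed 12.5 cents/kWh.
WATTS=100
RATE=12.5  # cents per kWh

CENTS_24H=$(awk -v w="$WATTS" -v r="$RATE" 'BEGIN { printf "%.0f", w / 1000 * 24 * r }')
CENTS_8H=$(awk -v w="$WATTS" -v r="$RATE" 'BEGIN { printf "%.0f", w / 1000 * 8 * r }')
echo "24h/day: ${CENTS_24H} cents/day; 8h/day: ${CENTS_8H} cents/day"
```

At that assumed rate, the difference between always-on and eight hours a day works out to about 20 cents a day.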

That isn’t nothing… But you can almost certainly find better ways to save more power, like refreshing the weatherstripping around your doors and windows, hanging blackout curtains (or reflective film) in the summer on windows that get a ton of solar heat, or updating heavy appliances like refrigerators, dryers, and air conditioners.

Still, if you’re happy, you’re happy. Is frequent shutdown and startup harder on your machine than 24/7 ops? Definitely. But it’s not enough to say you MUST change your ways. Maybe you value the power savings and/or quiet when the machine isn’t running more than you value a potential extra year or two on the server life–really it’s down to what you prefer.

ECC is always a good idea, in any machine. But you don’t need it any more in this machine than you do on a desktop; the lack of ECC is precisely as suboptimal on a desktop as it is on a server.

In a better world, we’d all be using ECC for everything, because it’s better and eliminates or mitigates a few classes of failure. Unfortunately, we live in a world where Intel abused an effective monopoly to use ECC as a differentiator between SKUs, so they could juice businesses for more money than consumers would be willing to pay, but without losing the consumer business.

So, you have to decide: are you happy with the level of reliability of a desktop PC, or is preventing a crash or two a year worth diving into the excitingly limited and expensive world of server-grade motherboards, processors, and RAM?

I usually go server grade. But thanks to the aforementioned market manipulation, it’s enough of a pain in the ass that sometimes even I still pick consumer hardware, as much as I would prefer it if ECC RAM were the only option.

Sigh.

If all you’re doing is Jellyfin, 64GiB is plenty, and so is your 9th-gen i7.

Thanks for your advice. I agree with and understand everything else you mentioned. As for turning off the server: it’s only because I don’t want to spend so much on electricity, so it’s more of a fear than anything. I’d rather wait until I live somewhere with solar panels, or until I make enough money that I won’t really care.

As for the mention of a scratch disk: a few years back, when I was planning out this server, I read that you might need a scratch disk for caching, either to improve Linux performance or to improve overall read/write speeds. If I’m wrong about that, that’s fine; I just wanted a better understanding of what it is and whether it could be useful in my case.

Nah, I think you’re remembering advice regarding BitTorrent downloads: to avoid the extreme fragmentation that protocol can produce on disk, you copy the downloaded torrents off the “scratch disk” to their permanent location after they finish downloading.

You can avoid that issue with traditional filesystems by using a client that supports preallocation, and enabling that feature–this ensures that no matter how randomly the pieces are downloaded, they’ll be written in the correct order to be read or streamed from sequentially.

Preallocation doesn’t work with ZFS, because ZFS is copy on write. But if you set recordsize=1M on your torrent download dataset, fragmentation cannot get worse than the equivalent of 1MiB random I/O–which is already nearly indistinguishable from “sequential” workloads in terms of storage load and performance, so that’s that.

In your case, you might actually want to consider recordsize=4M. Not much point in that on mirrors or a single drive, since those topologies already see 1M random I/O per disk at recordsize=1M.

But with an eight wide Z1, recordsize=1M is only storing around 160KiB per disk for each block–which means that the per disk random access pattern is 160K, not 1MiB.

That’s still a huge improvement over the essentially 20K random I/O you’re doing with BitTorrent on the default recordsize of 128K, mind you. But bumping it all the way up to recordsize=4M gets you better than 512K random per drive–still not quite the near-perfect workload that 1M random per drive gets you, but pretty damn close.
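The arithmetic behind those per-disk figures is just the recordsize divided across the seven data disks of an 8-wide Z1 (this ignores parity and padding sectors, so it lands in the same ballpark as the figures above rather than matching them exactly):

```shell
# Approximate per-data-disk chunk for a ZFS block striped across
# an 8-wide RAIDZ1: 7 disks carry data, 1 disk's worth carries parity.
DATA_DISKS=7
for RS_KIB in 128 1024 4096; do
    awk -v rs="$RS_KIB" -v n="$DATA_DISKS" \
        'BEGIN { printf "recordsize=%dK -> ~%.0f KiB per data disk\n", rs, rs / n }'
done
```

The same division explains why 4M blocks would help an 8-wide Z1 but add little on mirrors, where each disk already sees the full record.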

The last note is that an eight wide Z1 is really not a very good idea. What you probably should have gone with here is an eight wide Z2. The difference in safety and performance (it’ll perform slightly better than your Z1 because 8-wide Z2 doesn’t require block padding, and eight wide Z1 does) probably is worth tearing everything down and rebuilding.