Opinions differ on this, particularly when it comes to systems with a lot of RAM. I typically run without swap, but I’m definitely in the minority. With that said, let me explain the pros and cons:
First of all, swap allows the system to overcommit RAM. This can be very helpful, because applications tend to ask for significantly more RAM than they consume. Let’s say we need to run three apps on a system, cleverly named foo, bar, and baz.
Our memory-hungry applications
Foo initializes a ton of arrays every time a user runs a query, and frequently the majority of those arrays are never actually written to before they’re destroyed.
Bar expects that it will have a lot of RAM-hungry work to do, because it’s a specialized app–but it’s a specialized app that was originally developed for much more massive workloads than the majority of community consumers task it with. As a result, it allocates 2GiB of RAM every time it’s so much as opened, whether it ever uses that RAM or not.
Baz is a clever one–it’s another app that expects to want quite a lot of RAM, and so it actually queries your system when first run, and immediately asks for half the system RAM. This is nice for baz, but it makes the pressure considerably worse on bar and foo!
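To make those overcommit mechanics concrete, here’s a minimal C sketch–purely illustrative, not any of these apps’ real code–showing that malloc reserves virtual address space immediately, while physical pages aren’t consumed until they’re written. VmSize jumps at once; VmRSS only climbs when the memory is actually touched:

```c
/* Minimal sketch: virtual allocation vs. resident memory on Linux.
 * Assumes a 64-bit system. Allocates 2GiB without touching it, prints
 * VmSize/VmRSS from /proc/self/status, then touches every page and
 * prints them again. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

static void print_mem_stats(const char *label)
{
    char line[256];
    FILE *f = fopen("/proc/self/status", "r");
    if (!f) { perror("/proc/self/status"); return; }
    printf("--- %s ---\n", label);
    while (fgets(line, sizeof line, f))
        if (!strncmp(line, "VmSize:", 7) || !strncmp(line, "VmRSS:", 6))
            fputs(line, stdout);
    fclose(f);
}

int main(void)
{
    size_t page = (size_t)sysconf(_SC_PAGE_SIZE);
    size_t len = 2UL * 1024 * 1024 * 1024;   /* 2GiB, like bar's habit */

    print_mem_stats("before malloc");

    char *buf = malloc(len);
    if (!buf) { fputs("unable to initialize memory\n", stderr); return 1; }
    print_mem_stats("after malloc, before any writes");

    for (size_t i = 0; i < len; i += page)   /* touch one byte per page:  */
        buf[i] = 1;                          /* NOW real RAM gets consumed */
    print_mem_stats("after touching every page");

    free(buf);
    return 0;
}
```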
What happens on a wimpy system without swap?
Let’s say you’ve got a system with 8GiB of RAM. Baz ate 4GiB of that instantly, bar ate another 2GiB just as quickly, and that only leaves 2GiB for the rest of the system. Foo will very likely trigger the out-of-memory killer pretty quickly as it balloons in size due to the way it initializes arrays it never even uses.
Or perhaps the OOM killer takes out baz or bar instead of foo–in that case, foo will suddenly expand greedily into the newly-available RAM, and when you attempt to reopen baz or bar, they will outright fail with an “unable to initialize memory” error.
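A well-behaved app has to check for exactly that failure. Here’s a hypothetical C sketch of baz’s strategy–not the real thing, since baz isn’t real either: query total RAM via sysconf(), grab half of it up front, and report “unable to initialize memory” if the kernel refuses, which it can, depending on its overcommit accounting (vm.overcommit_memory) and how much foo has already claimed.

```c
/* Hypothetical sketch of baz's startup strategy: query total RAM with
 * sysconf(), then ask for half of it up front. Whether the malloc
 * succeeds depends on the kernel's overcommit accounting. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    long pages = sysconf(_SC_PHYS_PAGES);
    long page_size = sysconf(_SC_PAGE_SIZE);
    size_t half_of_ram = (size_t)pages * (size_t)page_size / 2;

    void *pool = malloc(half_of_ram);
    if (!pool) {
        fprintf(stderr, "unable to initialize memory (%zu bytes)\n",
                half_of_ram);
        return 1;
    }
    printf("reserved %zu bytes up front\n", half_of_ram);
    /* ...do memory-hungry work, or more likely, never use most of it... */
    free(pool);
    return 0;
}
```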
What happens on a beefier system without swap?
This system has 16GiB of RAM. It cheerfully allocates 8GiB to baz, another 2GiB to bar, and that leaves a solid 6GiB to distribute between the system, any other apps, and the ever-hungry foo. No harm no foul, right?
Well… that kinda depends. That system can certainly continue running reasonably effectively, but it’s probably going to have to do without much in the way of filesystem cache–we already know foo is going to balloon out into that 6GiB pretty aggressively, and if we’re running a desktop, a browser alone might eat the other 4GiB.
Of course, we could be careful about how many browser tabs we open… but even so, we’re not likely to have much RAM for that filesystem cache we mentioned, and without filesystem cache, all read operations have to hit the bare metal. That means your reads are slower, it means your writes are also slower–because they share the same IOPS pool with your reads!–and it means you’re probably not entirely happy with your system’s performance.
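If you’re curious how much of your RAM is actually serving as filesystem cache at any given moment, the kernel reports it in /proc/meminfo; a quick C sketch to print the relevant fields:

```c
/* Sketch: print how much RAM the kernel is currently using for
 * filesystem cache, straight from /proc/meminfo. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[256];
    FILE *f = fopen("/proc/meminfo", "r");
    if (!f) { perror("/proc/meminfo"); return 1; }
    while (fgets(line, sizeof line, f)) {
        if (!strncmp(line, "MemFree:", 8) ||
            !strncmp(line, "Buffers:", 8) ||
            !strncmp(line, "Cached:", 7))
            fputs(line, stdout);
    }
    fclose(f);
    return 0;
}
```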
OK, now what happens on a wimpy system with swap?
Although our 8GiB host system immediately says “yes” to baz when it requests 4GiB of RAM and to bar when it requests another 2GiB, it’s kinda fibbing a bit. In fact, those commits are tallied against unused swap on disk.
When baz or bar then attempt to write to their allocated “RAM”, typically, the host system immediately re-maps the application’s memory to actual RAM, not swap. This way, the initial commit–which may never be used at all, remember–doesn’t require pushing valuable filesystem cache out of RAM, and may also allow a user to open more applications than the system can technically run, due to those apps’ habit of overcommitting memory… and all that’s without ever writing a single sector to the actual swap!
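You can actually watch that lazy mapping happen with mincore(2), which reports which pages of a mapping are resident in physical RAM. In this sketch, zero pages are resident immediately after the mmap(), and after writing to half the mapping, exactly that half is:

```c
/* Sketch: demonstrate that an anonymous mapping isn't backed by real RAM
 * until it's written to, using mincore(2) to count resident pages. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

static size_t resident_pages(char *addr, size_t len, size_t page)
{
    size_t n_pages = len / page, resident = 0;
    unsigned char *vec = malloc(n_pages);
    if (!vec || mincore(addr, len, vec) != 0) { free(vec); return 0; }
    for (size_t i = 0; i < n_pages; i++)
        resident += vec[i] & 1;    /* low bit set = page is in RAM */
    free(vec);
    return resident;
}

int main(void)
{
    size_t page = (size_t)sysconf(_SC_PAGE_SIZE);
    size_t len = 1024 * page;      /* a 4MiB mapping on 4KiB pages */
    char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    printf("resident after mmap:  %zu of %zu pages\n",
           resident_pages(buf, len, page), len / page);

    memset(buf, 1, len / 2);       /* write to the first half only */
    printf("resident after write: %zu of %zu pages\n",
           resident_pages(buf, len, page), len / page);

    munmap(buf, len);
    return 0;
}
```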
That arrangement starts out swimmingly, but very rapidly the wimpy host will begin actually needing to commit writes to swap, or to send pages of RAM down to swap. Since swap is multiple orders of magnitude higher latency than RAM, and since RAM may literally be accessed every single CPU cycle in many cases, this has a dire impact on system performance.
Essentially, once the system begins heavily using swap, it’s likely to enter a ‘swap spiral’ which may take tens of minutes or even hours to wind down, even after all of the application overload is killed off. This usually results in a frustrated user (or highly experienced admin!) electing to just pull the damn power rather than wait all that nonsense out, and cross their fingers and hope that the filesystem manages the crash nicely.
What happens on a beefier system with swap?
Essentially, the same thing–pushing overcommits down to swap means not pushing filesystem cache out of RAM. Of course, if the system is beefy enough, it may never have gotten close to filling the RAM with filesystem cache in the first place!
But this is where we really need to talk about how things can go wrong. With swap enabled, the operating system will occasionally decide to page “inactive” pages of RAM down to swap, essentially “just in case” it has a better use for the RAM that data was occupying.
On Linux distributions, you can tweak that algorithm using the vm.swappiness kernel tunable, which can be set from 0 to… actually, I’m not sure what the upper limit of vm.swappiness is, because it’s already pretty bonkers at 100.
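(For what it’s worth, recent kernels–5.8 and later, if I recall correctly–reportedly accept values up to 200, to favor swapping on systems with very fast swap devices.) The tunable itself is just a number in procfs, readable–and writable, as root–like any other file. A trivial C sketch:

```c
/* Sketch: read vm.swappiness the same way the sysctl tool does, as a
 * plain number in procfs. Writing it (as root) works the same way. */
#include <stdio.h>

int main(void)
{
    int swappiness;
    FILE *f = fopen("/proc/sys/vm/swappiness", "r");
    if (!f || fscanf(f, "%d", &swappiness) != 1) {
        perror("/proc/sys/vm/swappiness");
        return 1;
    }
    fclose(f);
    printf("vm.swappiness = %d\n", swappiness);
    return 0;
}
```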
Anyway, on a beefy system with a ton of RAM, the swap algorithm can actually screw you up pretty badly when it randomly decides to page a bunch of RAM out to swap “just in case you need it later” right when you really could have used a bunch of IOPS that are currently being used for the algorithm’s nonsense.
Similarly, you might provision a system with absolute wads of RAM so that you never have to wait on disk unnecessarily… only to discover that the operating system decided to page one of your active workloads out to disk while you were alt-tabbed away, leaving you with >10 seconds of lurching when you alt-tab back into it!
On paper, the obvious solution is to provision some swap to catch the overcommits, but then set vm.swappiness=0. Unfortunately, I’ve witnessed the host OS paging stuff down to swap even with >200GiB entirely unused RAM and with vm.swappiness=0 set… so I currently consider that parameter useful to know about if you need swap, but not so useful that it always makes swap worthwhile.
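If you have a workload that absolutely must never be paged out, swap or no swap, the heavier hammer is mlockall(2), which pins a process’s pages in RAM outright. A hedged sketch (it needs CAP_IPC_LOCK, or a generous RLIMIT_MEMLOCK):

```c
/* Sketch: pin every current and future page of this process in RAM with
 * mlockall(2), so the kernel can't page it out to swap behind your back.
 * Requires CAP_IPC_LOCK or a sufficient RLIMIT_MEMLOCK. */
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
        perror("mlockall");   /* typically EPERM without privileges */
        return 1;
    }
    puts("memory locked: this process will not be paged out");
    /* ...latency-sensitive work here... */
    munlockall();
    return 0;
}
```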
What’s the general consensus?
The majority of IT professionals will gasp in outrage at the idea of running without swap, and tell you that you should never, ever do that, no matter how beefy the system, and that having swap can never hurt performance. The majority of them cannot explain swappiness to you, or speak intelligently of overcommits… they’re just positive That’s The Way It Is, because that’s how it was drummed into them.
The better grade of IT professionals can and will explain overcommits, and talk about the potential issues involved if the swap algorithm gets overly aggressive. But the majority of them will still recommend swap, pretty strongly.
I fall into that latter camp–I advise most people to use swap, and particularly to let their operating system manage it, if they don’t actually understand it and have no interest in getting fiddly with memory management themselves. However, I will note that for those who very actively manage their systems and provision them with plenty of RAM already, there are real benefits to avoiding swap entirely.
But if you don’t have a solid grasp of how much RAM you use, where you use it, and how that changes–and you don’t want to think about why programs use RAM? Leave it to the system defaults.
Oh yeah, one final warning:
I strongly, strongly recommend against large swap partitions/files, even for newbies and casuals! The recommendation tends to be “a little larger than your RAM, so that you can analyze a full crash dump if you need to.”
Thing is, on a system with 64GiB of RAM, that means writing sixty-four gibibytes of data out to disk, before the poor dying thing is even allowed to halt. And it means that the admin troubleshooting a pesky crashing rig is waiting repeatedly for that entire 64GiB to write out… which they probably don’t want to do, and let’s be honest here, how many times have you analyzed a full on-disk copy of your system’s RAM after a crash?
But wait, it gets worse: that’s the problem when you actually have a crash, which hopefully you don’t have very often. What does happen quite often is your system deciding to page things in and out between RAM and swap… and if you’ve helpfully given it gigantic gobs of swap, it can get you into even more trouble if and when you start needing to actually access all that!
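If you want to know whether those gobs of swap are actually being touched on your own system, SwapTotal and SwapFree in /proc/meminfo tell the story; one last C sketch:

```c
/* Sketch: report how much of the provisioned swap is actually in use,
 * from the SwapTotal: and SwapFree: fields in /proc/meminfo. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[256];
    long total = -1, free_kib = -1;
    FILE *f = fopen("/proc/meminfo", "r");
    if (!f) { perror("/proc/meminfo"); return 1; }
    while (fgets(line, sizeof line, f)) {
        sscanf(line, "SwapTotal: %ld", &total);
        sscanf(line, "SwapFree: %ld", &free_kib);
    }
    fclose(f);
    if (total < 0 || free_kib < 0) {
        fputs("couldn't parse swap stats\n", stderr);
        return 1;
    }
    printf("swap in use: %ld KiB of %ld KiB\n", total - free_kib, total);
    return 0;
}
```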
You don’t need more than 2GiB of swap, unless you already know you need that swap, know precisely why you need that swap, and can articulate that clearly… in which case, why are you reading all this anyway? 
(Sorry, I phrase things that way because to someone sufficiently expert with a sufficiently niche case, there is always a reason to do any terrible-practice thing which can be individually justified. But that doesn’t change what is or is not general good practice for normal users and use cases.)