Debian 9.4, Linux 4.9
I sometimes compile something that barely fits in RAM, or a rogue process suddenly starts eating memory beyond what’s available. When the process exceeds the available RAM, Linux starts thrashing the disk even though I have zero swap enabled (disabling swap was an attempt to avoid exactly this). I guess it starts discarding and reloading things like the mmapped parts of the binaries that are currently running?
At this point my X session quickly becomes unresponsive, and all I can do is wait dozens of minutes until the entire X session gets killed and I can log back in.
I tried searching for solutions, but nothing seems to work. The OOM killer doesn’t catch the offending process, and with
vm.overcommit_memory=2 I can’t even log in with GDM and GNOME.
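(For reference: with vm.overcommit_memory=2 the kernel caps total committed memory at swap + RAM × vm.overcommit_ratio/100, and the default ratio is 50 — so with zero swap only about half of RAM is committable, which is presumably why the GNOME session fails to start. A sketch of a sysctl fragment that keeps strict overcommit but raises the ceiling; the values are examples, not recommendations:)

```
# /etc/sysctl.d/80-overcommit.conf  (example values)
# Strict overcommit: allocations fail instead of the OOM killer firing later.
vm.overcommit_memory = 2
# With no swap, allow commits up to ~95% of RAM instead of the default 50%.
vm.overcommit_ratio = 95
```

The resulting ceiling shows up as the CommitLimit line in /proc/meminfo.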
Is there a way to tell Linux not to page anything out at all? That way I would at least have a chance that the rogue process gets killed by a failed
malloc, and even if it isn’t, I wouldn’t have to wait while staring at an unresponsive machine.
Or any other hints how to manage this scenario?
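One workaround I can sketch for the failed-malloc idea is capping the address space of the build with the shell’s ulimit builtin, so a runaway allocation fails with ENOMEM instead of thrashing the whole machine (the limit value and the build command below are just examples):

```shell
# Run the build under an address-space cap (value in KiB; example: 4 GiB).
# ulimit applies to the current shell and its children, so use a subshell
# to keep the limit from sticking to the interactive session:
(
  ulimit -v 4194304      # cap virtual memory at 4 GiB
  make -j4               # hypothetical memory-hungry build
)
```

Caveat: -v limits virtual address space, not resident memory, so it can also break programs that reserve large sparse mappings without ever touching them.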
If you are compiling sources that need almost all the available RAM, if not more, the only truly performant solution is probably to add physical RAM.
Having said that, you may try adding a very large amount of swap (say 2x or 3x the RAM) and setting
/proc/sys/vm/swappiness to a low value, like 1, so that swap is used only when effectively necessary. (Note that on kernels 3.5 and later a value of 0 no longer just means "swap reluctantly": it makes the kernel avoid swapping anonymous pages almost entirely, which can trigger the OOM killer instead.) This should minimize thrashing.
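The steps above can be sketched as follows (the size and path are examples; a swap partition works just as well as a file, and fallocate-created swap files are not supported on every filesystem):

```shell
# Create and enable a 16 GiB swap file (example: 2x the RAM of an 8 GiB box).
fallocate -l 16G /swapfile     # on filesystems without fallocate support, use dd
chmod 600 /swapfile            # swap files must not be world-readable
mkswap /swapfile               # write the swap signature
swapon /swapfile               # enable it immediately

# Prefer reclaiming page cache over swapping anonymous memory:
sysctl -w vm.swappiness=1

# To make this persist across reboots, add the swap file to /etc/fstab
# and the swappiness setting to a file under /etc/sysctl.d/.
```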