What is the Out-of-Memory (OOM) Killer in Linux?

What is the OOM Killer?

The OOM killer, a feature enabled by default, is a self-protection mechanism employed by the Linux kernel when it is under severe memory pressure.

If the kernel cannot find free memory when it needs to allocate it, it places in-use user data pages on the swap-out queue so they can be written out to swap. If the virtual memory (VM) subsystem cannot allocate memory and cannot swap out in-use memory, the out-of-memory killer may begin killing userspace processes: it sacrifices one or more processes in order to free up memory for the system when all else fails.

If you see a line like the one below in /var/log/messages

Apr 1 00:01:02 srv01 kernel: Out of Memory: Killed process 2592 (oracle).

this means that the OOM killer has killed the Oracle dedicated server process with PID 2592.
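To check whether the OOM killer has fired recently, you can search the kernel log. The exact wording of the message varies between kernel versions, so treat the patterns below as examples only:

# Search the syslog for OOM killer activity
grep -i "out of memory" /var/log/messages

# Or check the kernel ring buffer directly
dmesg | grep -iE "out of memory|killed process"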

In principle, the OOM killer behaves as follows:

  • Lose the minimum amount of work done
  • Recover as much memory as it can
  • Do not kill a process that is not, by itself, using a lot of memory
  • Kill the minimum number of processes (ideally just one)
  • Try to kill the process the user would expect it to kill

What Causes an OOM Killer Event?

When troubleshooting why the out-of-memory (OOM) killer started, one must look at a few factors on the system. Generally, the OOM killer is triggered for a handful of reasons (the commands shown after the list can help narrow down which one applies):

  1. Spike in memory usage caused by a load event (additional processes are needed for the increased load).
  2. Spike in memory usage caused by additional services being added or migrated to the system (another application added or a new service started on the system).
  3. Spike in memory usage due to failed hardware, such as a DIMM memory module.
  4. Spike in memory usage due to undersized hardware resources for the running application(s).
  5. A memory leak in a running application.
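A few standard commands can help identify which of these causes applies; overall memory, per-process resident set size, and swap activity point toward load spikes, leaking applications, or simple undersizing:

# Overall memory and swap usage
free -m

# Top memory consumers, sorted by resident set size
ps aux --sort=-rss | head -15

# Ongoing swap-in/swap-out activity (5 samples, 1 second apart)
vmstat 1 5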


Other Potential Reasons

It is also possible for the system to find itself in a sort of deadlock. Writing data out to disk may itself require allocating memory for various I/O data structures; if the system cannot find even that memory, the very functions used to create free memory are hamstrung and the system will likely run out of memory.

It is possible to do some minor tuning to start paging earlier, but if the system cannot write dirty pages out fast enough to free memory, one can only conclude that the workload is too large for the installed memory and there is little to be done. Raising the value in /proc/sys/vm/min_free_kbytes causes the system to start reclaiming memory earlier than it otherwise would, which makes it harder to get into these kinds of deadlocks. If you hit such deadlocks, this is a good value to tune.
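As a rough sketch (the value 131072 is only an example and should be sized for the system in question), the reserve can be inspected and raised like this:

# Show the current reserve, in kilobytes
cat /proc/sys/vm/min_free_kbytes

# Raise it for the running kernel (example value: 128 MB)
sysctl -w vm.min_free_kbytes=131072

# Make the change persistent across reboots
echo "vm.min_free_kbytes = 131072" >> /etc/sysctl.conf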

Alternatively, the kernel might have misread its statistics and made a bad decision, invoking the OOM killer while there was still usable RAM available. That would be a kernel bug that needs to be fixed.

How to Avoid an OOM Situation?

Fundamentally, an OOM event is not an operating system problem; it is a tactic the kernel uses to keep the system running. If physical memory or swap space is too low on the box, add more memory or increase swap. Another solution may be to move some of the applications off the problematic system.

Alleviate the memory constraint by making additional swap space available. This can be done by adding a swap partition or a swap file to the system. A swap partition is preferable because it generally performs better than a swap file.
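For example, a swap file can be added like this (the path /swapfile and the 2 GB size are placeholders; adjust them for your system):

# Create a 2 GB file, restrict its permissions, and format it as swap
dd if=/dev/zero of=/swapfile bs=1M count=2048
chmod 600 /swapfile
mkswap /swapfile

# Activate it now, and add it to /etc/fstab so it survives a reboot
swapon /swapfile
echo "/swapfile swap swap defaults 0 0" >> /etc/fstab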

Additionally, one can increase the frequency at which SAR data is recorded on the system. By default, data is gathered every 10 minutes; this can be increased to every minute if desired. More granular performance statistics help with troubleshooting and trend analysis.
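The collection interval is usually controlled by the sysstat cron entry. The exact path to the sa1 collector differs between distributions (and some newer systems use a systemd timer instead), so the lines below are only illustrative:

# /etc/cron.d/sysstat -- default: collect every 10 minutes
# */10 * * * * root /usr/lib64/sa/sa1 1 1
# Change the interval to every minute:
# */1 * * * * root /usr/lib64/sa/sa1 1 1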

How does the OOM-Killer select a task to kill?

When a system runs out of memory and the OOM killer is needed to free memory so that new allocations can succeed, any process in the system’s task list is a candidate to be killed. This does not necessarily mean that the process with the largest virtual memory area will be the first hit, although it certainly has a good chance of being chosen.

When the OOM killer routine is called, it iterates over the task list and calculates a score, called ‘badness’, for each process listed there. The badness calculation uses a fairly simple formula that tries to select the best task to kill, following this logic:

  • try to lose the minimum amount of work done;
  • recover a large amount of memory;
  • avoid killing any process that is not itself allocating a large amount of memory;
  • kill the minimum number of tasks at a time (a single process is always the goal);
  • kill the process end users would expect the kernel to kill.

When the OOM killer calculates a given process’s badness, the first thing taken into account is the process’s total virtual memory size (vm), in pages; this number serves as the base for all further badness score calculations. While one might think that processes with a large vm are the first to be selected by the OOM killer, the calculation also takes other data into account to adjust the score, giving a process a chance of surviving even if it is the biggest memory allocator on the box.
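The score that results from this calculation can be inspected at any time through the proc filesystem; for example, for the process with PID 2592 from the log message above:

# Current badness score the kernel has computed for PID 2592
cat /proc/2592/oom_score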

The following factors also adjust a process’s ‘badness’ score:

  • Forked children: for each child that has its own vm, half of that child’s vm is added to the parent’s score;
  • CPU time: a task’s score is divided by a factor derived from the integer part of the square root of its CPU run time, so long-running or frequently running tasks score lower even if they allocate large chunks of memory;
  • Niced processes: a niced process is considered less important, so its score is doubled;
  • Superuser processes: as they are usually considered more important, their score is divided by 4;
  • External adjustment: the system administrator can externally adjust this calculation to prevent the OOM killer from killing a process that would otherwise be selected during a memory shortage (see the example after this list).
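That external adjustment is done through the proc filesystem. Older kernels expose /proc/<pid>/oom_adj (range -17 to +15, where -17 disables OOM killing for the process); kernels from 2.6.36 onward use /proc/<pid>/oom_score_adj (range -1000 to +1000). For example, to protect the process with PID 2592:

# Older interface: -17 exempts the process from the OOM killer
echo -17 > /proc/2592/oom_adj

# Newer interface: -1000 has the same effect on recent kernels
echo -1000 > /proc/2592/oom_score_adj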

