Stop AI Crashes: The Linux OOM-Killer Protocol

Stop waking up to 'Killed' terminal messages. Master the Linux kernel protocol to tune bare metal memory limits, fix PyTorch RAM spikes, and secure enterprise AI training.

What triggers the OOM-Killer during AI training?

The Linux Out-Of-Memory (OOM) killer automatically terminates processes when system RAM is exhausted. During heavy AI model training, if PyTorch tensor allocations exceed the physical memory of your dedicated server, the kernel issues a SIGKILL (signal 9) to prevent total system panic.
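
Memory pressure is visible before the kill happens. A minimal check using only procfs (so it works even on stripped-down images):

```shell
# Watch host memory during training; MemAvailable trending toward zero
# is the precursor to an OOM kill
grep -E 'MemTotal|MemAvailable|SwapFree' /proc/meminfo

# Continuous view, refreshed every 2 seconds:
# watch -n 2 "grep -E 'MemTotal|MemAvailable' /proc/meminfo"
```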

You configure a massive Large Language Model (LLM) fine-tuning job. The GPUs are roaring, and you leave the server running overnight. The next morning, you check your terminal only to find a single, devastating word: Killed. Your 12 hours of compute time are gone. This is not a hardware failure; this is the Linux kernel stepping in as an executioner.

The Root Cause: PyTorch & DataLoaders

Most sysadmin guides assume OOM events are caused by Apache or MySQL memory leaks. However, in an AI infrastructure context, the root causes are fundamentally different:

  • The DataLoader Duplication: Setting a high num_workers in PyTorch DataLoaders spins up multiple subprocesses. Workers start as copy-on-write forks, but Python's reference counting touches the dataset's memory pages, so each worker gradually materializes its own copy in system RAM. Host memory use grows roughly linearly with num_workers.
  • Tensor Offloading: Frameworks like DeepSpeed utilize CPU RAM to offload optimizer states when GPU VRAM is full. This sudden surge in host memory demand easily triggers the OOM-Killer.
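
A related check worth running first: DataLoader workers hand batches back to the main process through shared memory, and an undersized /dev/shm kills workers with SIGBUS ("Bus error") failures that are easy to mistake for OOM kills. A quick sketch (the 16G remount size is an illustrative value, not a recommendation):

```shell
# DataLoader workers pass tensors through /dev/shm; if it is too small,
# workers die with "Bus error" long before system RAM is exhausted
df -h /dev/shm

# Enlarge it if needed (size is an example; match your batch volume):
# sudo mount -o remount,size=16G /dev/shm
```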

To verify that the OOM-Killer (and not a Python bug) was responsible for the crash, interrogate the kernel logs directly:

# Search the kernel ring buffer for OOM execution logs
dmesg -T | grep -i 'killed process'

# Output Example:
# [Tue Mar 24 14:32:11 2026] Out of memory: Killed process 14592 (python3) total-vm:1980996kB...
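
The kernel also tells you who is next in line: every process carries an oom_score in /proc, and the highest scorer is the next victim. A small sketch using only procfs (the output format here is ours, not a standard tool):

```shell
# Rank processes by oom_score: the top entry is the OOM-Killer's next victim
for p in /proc/[0-9]*; do
  score=$(cat "$p/oom_score" 2>/dev/null) || continue
  comm=$(cat "$p/comm" 2>/dev/null)
  printf '%s\t%s\t%s\n' "$score" "${p#/proc/}" "$comm"
done | sort -rn | head -5
```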

Step 1: Tuning vm.overcommit_memory & ratio

By default, the Linux kernel uses a "Heuristic Overcommit" strategy. It lies to applications, promising them memory blocks that don't physically exist. We must change this policy to "Strict" mode (Value 2).

The Ratio Trap (Critical)

If you set overcommit_memory=2, you MUST also raise the overcommit_ratio. The kernel caps commitable memory at CommitLimit = SwapTotal + (Physical RAM × overcommit_ratio / 100), and the default ratio is 50%. On a swapless training server, that blocks your AI model from committing more than half of physical RAM, causing instant allocation failures. Raise the ratio to 100%.
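
You can read the ceiling the kernel enforces straight from /proc/meminfo before committing to the change:

```shell
# The kernel computes:
#   CommitLimit = SwapTotal + MemTotal * overcommit_ratio / 100
# Committed_AS is what is already promised; under strict mode, allocations
# fail once they would push Committed_AS past CommitLimit
grep -E 'CommitLimit|Committed_AS|MemTotal|SwapTotal' /proc/meminfo
```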

| Overcommit Value | Kernel Behavior | AI Infrastructure Recommendation |
|---|---|---|
| 0 (Default) | Heuristic overcommit: promises memory that may not physically exist. | Not recommended. Leads to sudden OOM panics during epoch changes. |
| 1 | Always overcommit: never denies a memory request. | Extremely dangerous. Can take down the entire OS. |
| 2 | Strict overcommit: only allocates what actually exists. | Highly recommended. Frameworks raise a catchable MemoryError instead of receiving SIGKILL. |

*Strict mode ensures AI frameworks handle allocation limits gracefully.


# Set Strict Overcommit AND allow 100% RAM allocation
sudo sysctl -w vm.overcommit_memory=2
sudo sysctl -w vm.overcommit_ratio=100

# Make the changes persistent across reboots
echo "vm.overcommit_memory=2" | sudo tee -a /etc/sysctl.conf
echo "vm.overcommit_ratio=100" | sudo tee -a /etc/sysctl.conf
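
After appending to /etc/sysctl.conf, reload and read the live values back to confirm they stuck (the expected values assume the commands above have been applied):

```shell
# Apply /etc/sysctl.conf without a reboot
sudo sysctl -p

# Read the live values back; expect 2 and 100 after the change
cat /proc/sys/vm/overcommit_memory
cat /proc/sys/vm/overcommit_ratio
```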

Step 2: The Docker OOM Bypass

If you are running your workloads inside containers (e.g., NVIDIA NGC containers, Ollama), you can instruct the Docker daemon to shield your specific container from the kernel's executioner using cgroups.

# Run your AI container with OOM-kill disabled
# (Docker advises pairing the flag with a memory limit; -m 64g is an example.
#  Note: --oom-kill-disable is ignored on cgroup v2 hosts.)
docker run --gpus all -m 64g --oom-kill-disable -d my-ai-model

# Advanced: Manually adjust the OOM score of a running PID to -1000 (Immunity)
echo -1000 | sudo tee /proc/<PID>/oom_score_adj
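
You can watch the scoring mechanics safely without root: raising your own oom_score_adj is unprivileged (it makes the process more killable), and only lowering it toward -1000 requires sudo. A harmless demo on the current shell:

```shell
# Raising your own oom_score_adj needs no root
echo 500 > /proc/$$/oom_score_adj
cat /proc/$$/oom_score_adj   # 500
cat /proc/$$/oom_score       # badness score shifts upward accordingly

# Lowering toward -1000 (immunity) requires root:
# echo -1000 | sudo tee /proc/<PID>/oom_score_adj
```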

The Immunity Warning

Setting oom_score_adj to -1000 makes your AI process immortal. If it leaks memory badly, the kernel is forced to kill other critical processes, such as sshd, to stay alive, which can lock you out of the server entirely.

Next Step: Clear the Zombie GPU Memory

Did the OOM-Killer terminate your script, but your GPU is still showing 100% VRAM usage? The kernel likely lost its PID mapping. Read our enterprise guide on How to Fix Zombie VRAM and Clear GPU Memory.

The Bare Metal Advantage

Why do AI models randomly crash on Cloud VMs even when you have enough RAM? It is called Memory Ballooning. Cloud hypervisors dynamically steal "idle" RAM from your VM to give to other tenants. When your PyTorch DataLoader suddenly spikes, the hypervisor cannot return the RAM fast enough, triggering a fatal OOM kill. ServerMO Bare Metal guarantees 100% dedicated, unshared DDR5 RAM. No ballooning, no oversubscription—just uninterrupted tensor processing.

Stop sharing your RAM. Deploy true hardware.

OOM Diagnostics FAQ

Why does PyTorch trigger the Linux OOM-Killer?

PyTorch often triggers the OOM-Killer because of the DataLoader's multiprocessing architecture. A high num_workers value can leave each worker subprocess holding its own copy of the dataset in system RAM. When the total memory requested exceeds physical RAM plus swap space, the kernel terminates the Python process with SIGKILL.

What is the best vm.overcommit_memory setting for AI training?

For enterprise AI training, the recommended setting is vm.overcommit_memory = 2 (Strict Overcommit). This prevents the Linux kernel from promising memory it does not have, forcing the AI framework to handle memory allocation errors gracefully instead of facing an abrupt OOM-Killer termination.

How do I protect my Docker AI container from being killed?

You can protect critical Docker containers by running them with the --oom-kill-disable flag (paired with a --memory limit, as Docker advises) and by lowering the host-level oom_score_adj value, instructing the kernel to kill other, less critical processes first.
