Init:OOMKilled
Kubernetes · CRITICAL · Common · Pod State · HIGH confidence

The init container was killed due to an out-of-memory (OOM) condition

Production Risk

The application is permanently blocked from starting; database migrations or other setup tasks cannot complete.

What this means

Init:OOMKilled means the init container exceeded its memory limit and was killed by the Linux kernel OOM killer before it could complete. Since init containers must run to completion before the pod proceeds, an OOM-killed init container will loop (Init:CrashLoopBackOff) and the application will never start.

Why it happens
  1. Init container memory limit is set too low for the operation being performed (e.g., a database migration)
  2. Memory leak in the init container logic
  3. Large data-processing task in the init phase that exceeds the limit

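To tell these causes apart, the init container's own logs and the pod's events are usually the fastest signal. A minimal sketch, assuming the pod is named mypod and the init container db-migrate (both placeholder names):

```shell
# Logs from the previous, OOM-killed run of the init container
# ("mypod" and "db-migrate" are placeholder names)
kubectl logs mypod -c db-migrate --previous

# Recent events for the pod, including OOM and backoff messages
kubectl get events --field-selector involvedObject.name=mypod \
  --sort-by=.lastTimestamp
```

Steadily growing memory in the logs points at a leak; a single large allocation that dies at the same point each run points at an undersized limit.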
How to reproduce

Pod stuck with Init:OOMKilled or cycling between Init:OOMKilled and Init:CrashLoopBackOff.
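For a self-contained reproduction, a manifest like the following (all names and the image are illustrative) puts a pod into this exact state: the init container allocates memory without bound and is killed at its 16Mi limit, so the pod never leaves the Init phase.

```yaml
# oom-init.yaml — illustrative reproduction; names and image are
# placeholders. `tail /dev/zero` buffers /dev/zero into memory until
# the 16Mi limit is hit and the kernel OOM killer fires.
apiVersion: v1
kind: Pod
metadata:
  name: init-oom-demo
spec:
  initContainers:
  - name: hog
    image: busybox
    command: ["sh", "-c", "tail /dev/zero"]
    resources:
      limits:
        memory: "16Mi"
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
```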

trigger — this will error
kubectl describe pod mypod | grep -A 5 "Init Containers:" | grep -A 5 "Last State:"
# Last State: Terminated  Reason: OOMKilled  Exit Code: 137

expected output

Last State:     Terminated
  Reason:       OOMKilled
  Exit Code:    137
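The exit code itself encodes the cause: by the usual convention, a process killed by signal N reports exit code 128 + N, and the OOM killer sends SIGKILL (signal 9), hence 137:

```shell
# 128 + SIGKILL(9) = 137, the exit code reported for OOM-killed containers
echo $((128 + 9))
```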

Fix

Increase init container memory limit

WHEN Init container performs a legitimate memory-intensive task

initContainers:
- name: db-migrate
  image: myapp:migrate
  resources:
    requests:
      memory: "256Mi"
    limits:
      memory: "512Mi"

Why this works

Providing adequate memory headroom prevents the OOM killer from terminating the init container.
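To confirm the fix after raising the limit, re-apply the manifest and check the init container's last termination state; a sketch assuming the manifest is in pod.yaml and the pod is named mypod (placeholder names):

```shell
# Most pod spec fields are immutable, so delete and recreate
# ("pod.yaml" and "mypod" are placeholders)
kubectl delete pod mypod --ignore-not-found
kubectl apply -f pod.yaml

# Watch the pod move from Init:0/1 to Running
kubectl get pod mypod -w

# Programmatic check: empty output means no OOMKilled last state
kubectl get pod mypod \
  -o jsonpath='{.status.initContainerStatuses[0].lastState.terminated.reason}'
```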

Sources

Kubernetes Documentation (official)

Content generated with AI assistance and reviewed for accuracy. Found an error? hello@errcodes.dev
