Container ran out of memory
Production Risk
Left unaddressed, repeated OOMKilled events cause crash-loop restarts, service instability, and unpredictable performance. If the problem affects many pods, it can exhaust node resources.
The container process was terminated by the Linux kernel's OOM killer because it consumed more memory than its configured limit. Kubernetes reports this state after the fact, recording exit code 137.
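The exit code itself encodes what happened: a process killed by a signal exits with 128 plus the signal number, and the OOM killer sends SIGKILL (signal 9). A minimal sketch of that arithmetic:

```python
import signal

# A process killed by a signal exits with 128 + the signal number.
# The kernel's OOM killer delivers SIGKILL (signal 9), so Kubernetes
# reports exit code 128 + 9 = 137 for OOMKilled containers.
OOM_EXIT_CODE = 128 + signal.SIGKILL
print(OOM_EXIT_CODE)  # 137
```

Seeing 137 in `Last State` is therefore a strong signal of an OOM kill even before you read the `Reason` field.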
1. The memory limit set in the pod spec is too low for the application
2. The application has a memory leak and its usage grew over time
3. A sudden spike in traffic or workload caused a temporary surge in memory consumption
A pod that was running correctly suddenly restarts. Describing the pod shows the reason for the last termination was OOMKilled.
kubectl describe pod my-leaky-app
expected output
Last State:   Terminated
  Reason:     OOMKilled
  Exit Code:  137
Fix 1
Increase the container memory limit
WHEN The application's normal memory footprint is higher than the current limit
kubectl patch deployment my-app-deployment -p '{"spec":{"template":{"spec":{"containers":[{"name":"my-app","resources":{"limits":{"memory":"512Mi"}}}]}}}}'
Why this works
This command directly updates the memory limit for the specified container in the deployment, triggering a rollout with the new resource allocation.
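If you manage the deployment declaratively, the same change can be made in the manifest instead of patching in place. A sketch of the relevant fragment, assuming the container is named `my-app` as in the patch command; the `256Mi` request is an illustrative value, not taken from the original:

```yaml
# Illustrative fragment of the deployment manifest (values are examples).
spec:
  template:
    spec:
      containers:
        - name: my-app
          resources:
            requests:
              memory: "256Mi"   # what the scheduler reserves on the node
            limits:
              memory: "512Mi"   # the cgroup ceiling; exceeding it triggers the OOM kill
```

Keeping the request below the limit allows short bursts while still giving the scheduler an accurate picture of steady-state usage.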
Fix 2
Profile the application for memory leaks
WHEN Memory usage continually grows over time without stabilizing
kubectl port-forward my-app-pod 8080:8080
Why this works
Forwarding the port on which the application exposes a profiling endpoint (for example, pprof in a Go service) lets you connect from your local machine and analyze memory usage over time.
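The profiling workflow is the same regardless of runtime: take a memory snapshot, exercise the suspect code path, take another snapshot, and diff them. A minimal sketch of that pattern using Python's standard-library `tracemalloc` (the `handle_request` function and its retained list are hypothetical stand-ins for a leaking code path):

```python
import tracemalloc

tracemalloc.start()

leak = []  # simulated bug: state that grows forever
def handle_request():
    # Hypothetical handler that accidentally retains data on every call.
    leak.append("x" * 10_000)

before = tracemalloc.take_snapshot()
for _ in range(100):
    handle_request()
after = tracemalloc.take_snapshot()

# The top entry in the diff points at the line allocating the leaked memory.
top = after.compare_to(before, "lineno")[0]
print(top)
```

A genuine leak shows up as an allocation site whose `size_diff` keeps growing across successive diffs, whereas a traffic spike stabilizes once load drops.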
✕ Remove the memory limit entirely
An unlimited container can consume all memory on the node, potentially causing node instability and affecting all other pods running on it.
k8s.io/api/core/v1 — ContainerStateTerminated
Managing Resources for Containers