Node is under memory pressure — pods may be evicted
Production Risk
Pod evictions disrupt running workloads; PodDisruptionBudgets may be violated.
The MemoryPressure node condition is set to True when the kubelet detects that available memory on the node is below the eviction threshold. When this condition is active, the scheduler will not place new pods on the node, and the kubelet may begin evicting lower-priority pods to reclaim memory.
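The threshold the kubelet uses is configurable. A minimal sketch of the relevant KubeletConfiguration fields (the 100Mi value shown is the common default; confirm the values in your own cluster's kubelet config):

```yaml
# KubeletConfiguration fragment: eviction thresholds for memory.
# When memory.available drops below evictionHard, the kubelet sets
# MemoryPressure=True and begins evicting pods immediately.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  memory.available: "100Mi"
# Optional soft threshold: eviction only starts after the grace period.
evictionSoft:
  memory.available: "200Mi"
evictionSoftGracePeriod:
  memory.available: "1m30s"
```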
1. Total memory usage across all pods on the node is approaching the node's physical limit
2. One or more pods have memory leaks causing unbounded memory growth
3. Memory limits are not set on pods, allowing unrestricted memory consumption
4. The node's memory capacity is too small for the workload density
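Cause 3 can be checked directly. A sketch that lists pods with at least one container missing a memory limit (assumes jq is installed; only kubectl and jq are used):

```shell
# Print namespace/name for every pod where some container
# has no memory limit set.
kubectl get pods --all-namespaces -o json \
  | jq -r '.items[]
      | select(any(.spec.containers[]; .resources.limits.memory == null))
      | "\(.metadata.namespace)/\(.metadata.name)"'
```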
Node condition MemoryPressure=True; pods may be evicted; new pods not scheduled on this node.
kubectl describe node mynode | grep -A 5 "Conditions:"
# MemoryPressure True ... KubeletHasInsufficientMemory
kubectl top nodes
kubectl top pods --all-namespaces --sort-by=memory | head -20
expected output
MemoryPressure True ... KubeletHasInsufficientMemory
Fix 1
Identify top memory consumers
WHEN Node is under memory pressure
kubectl top pods --all-namespaces --sort-by=memory | head -20
kubectl describe node mynode | grep -A 30 "Allocated resources:"
Why this works
Shows which pods consume the most memory cluster-wide and how much of the node's allocatable memory is already committed, so you can target the heaviest workloads first.
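Once a heavy pod is identified, kubectl top can break usage down per container (the pod and namespace names here are placeholders; requires metrics-server):

```shell
# Show memory usage for each container inside one pod.
kubectl top pod mypod -n mynamespace --containers
```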
Fix 2
Set memory limits on pods without them
WHEN Pods are running without memory limits
resources:
  requests:
    memory: "256Mi"
  limits:
    memory: "512Mi"
Why this works
Memory limits prevent any single pod from consuming all available node memory.
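To apply Fix 2 across a whole namespace rather than editing each pod, a LimitRange can supply defaults for containers that omit them. A sketch (the name and values are illustrative):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: memory-defaults    # illustrative name
  namespace: default
spec:
  limits:
    - type: Container
      defaultRequest:
        memory: "256Mi"    # applied when a container sets no request
      default:
        memory: "512Mi"    # applied when a container sets no limit
```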
Kubernetes Documentation
Content generated with AI assistance and reviewed for accuracy. Found an error? hello@errcodes.dev