Pod was evicted from the node
Production Risk
Frequent evictions indicate that nodes are undersized for their workloads, leading to service instability. If many pods are evicted at once, the service can lose enough capacity to cause an outage.
The kubelet on a node has proactively terminated a pod to reclaim resources. This happens when the node is under pressure for memory, disk, or other resources.
- Node running out of memory (memory pressure)
- Node running out of disk space (disk pressure)
- The node is overcommitted: pods use more than they requested, pushing total usage past the node's allocatable resources
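Node-pressure evictions are recorded as cluster events, so one way to confirm which cause applies is to query them directly (a sketch; note that events are retained for a limited time, typically one hour):

```shell
# List recent eviction events cluster-wide; the event message names
# the starved resource (e.g. "The node was low on resource: memory").
kubectl get events -A --field-selector reason=Evicted

# Check whether any node is still reporting a NotReady or pressure state.
kubectl get nodes
```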
A pod disappears from a node and `kubectl get pods` shows its status as Evicted.
kubectl get pods -A | grep Evicted
expected output
NAMESPACE   NAME                      READY   STATUS    RESTARTS   AGE
default     my-pod-7b5b7f9d5d-abcde   0/1     Evicted   0          5m
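The `Evicted` status line does not say why the pod was evicted; that detail is stored on the pod object itself. A quick sketch, substituting your own pod name:

```shell
# Print the eviction reason and message recorded in the pod's status
kubectl get pod my-pod-7b5b7f9d5d-abcde \
  -o jsonpath='{.status.reason}: {.status.message}{"\n"}'

# The same detail appears in the Status/Message fields of describe
kubectl describe pod my-pod-7b5b7f9d5d-abcde
```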
Fix 1
Check node resource usage
WHEN To confirm if the node is under memory or disk pressure
kubectl describe node my-node-123
Why this works
The describe output for a node includes its current resource usage, allocations, and conditions like MemoryPressure or DiskPressure, which explain why an eviction occurred.
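The full describe output is long; the sections that matter for evictions can be pulled out directly. A minimal sketch (the node name is a placeholder, and `kubectl top` assumes metrics-server is installed):

```shell
# Conditions show MemoryPressure / DiskPressure status
kubectl describe node my-node-123 | grep -A 7 "Conditions:"

# Allocated resources show how much capacity is already requested
kubectl describe node my-node-123 | grep -A 8 "Allocated resources"

# Live usage, if metrics-server is available
kubectl top node my-node-123
```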
Fix 2
Set appropriate resource requests and limits
WHEN Pods are being evicted due to using more resources than requested
kubectl edit deployment my-app
Why this works
Setting accurate resource requests and limits for your deployments helps the scheduler place pods on nodes that have sufficient capacity, reducing the chance of eviction.
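Instead of editing the manifest interactively, requests and limits can also be set in one command. A hedged sketch; the deployment name and values below are placeholders, and the right numbers should come from observed usage (e.g. `kubectl top pod`):

```shell
# Requests let the scheduler place the pod on a node with capacity;
# limits cap burst usage so the pod cannot starve its neighbors.
kubectl set resources deployment my-app \
  --requests=cpu=100m,memory=256Mi \
  --limits=cpu=500m,memory=512Mi
```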
✕ Delete the evicted pod manually
Evicted pod records are cleaned up automatically by the control plane's pod garbage collector. Deleting one manually removes the status message and events that explain why the eviction happened, making the root cause harder to debug.
Content generated with AI assistance and reviewed for accuracy. Found an error? hello@errcodes.dev