No nodes exist in the cluster to schedule the pod
Production Risk
Complete service outage — no pods can be scheduled anywhere in the cluster.
This critical scheduling failure occurs when no nodes are registered and Ready in the cluster, so the scheduler cannot place any pod. It happens in newly created clusters whose nodes have not yet joined, after a mass node failure, or when every node is cordoned. All pods remain Pending indefinitely until at least one node becomes schedulable.
1. Cluster was just created and the node pool is still initialising
2. All nodes are cordoned (kubectl cordon) or drained for maintenance
3. Cluster autoscaler failed to provision new nodes
4. All nodes entered NotReady state simultaneously due to a network partition
All pods in the cluster are Pending; kubectl get nodes shows no Ready nodes.
kubectl get nodes
# No resources found.
# or
# NAME    STATUS     ROLES    AGE   VERSION
# node1   NotReady   <none>   1d    v1.28.0

kubectl describe pod mypod | grep -A 5 "Events:"
# Warning  FailedScheduling  0/0 nodes are available
Expected output
Warning FailedScheduling ... 0/0 nodes are available: 0 node(s) had untolerated taint
Fix 1
Check node status and registration
WHEN the cluster appears to have no Ready nodes
kubectl get nodes -o wide
kubectl describe nodes | grep -E "Conditions:|Ready"
Why this works
Identifies node status and any conditions preventing nodes from being Ready.
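In scripts or monitoring checks, the Ready-node count can be extracted from the kubectl get nodes output with awk. A minimal sketch, using sample output with hypothetical node names in place of a live cluster (assumes standard awk and coreutils):

```shell
# Sample 'kubectl get nodes' output with hypothetical node names;
# in practice, pipe the real command output instead.
nodes='NAME      STATUS     ROLES    AGE   VERSION
node-a    NotReady   <none>   1d    v1.28.0
node-b    Ready      <none>   1d    v1.28.0'

# Count the rows after the header whose STATUS column is exactly "Ready"
echo "$nodes" | awk 'NR > 1 && $2 == "Ready"' | wc -l
```

A count of zero confirms the scheduler has nowhere to place pods, matching the 0/0 nodes are available event.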
Fix 2
Uncordon nodes that are manually cordoned
WHEN nodes are cordoned for maintenance
kubectl uncordon <node-name>
# Or uncordon all nodes at once
kubectl uncordon $(kubectl get nodes -o name | sed 's|^node/||')
Why this works
Removes the unschedulable taint, allowing the scheduler to place pods on the nodes again.
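The sed expression in the fix strips the node/ prefix that kubectl get nodes -o name prepends to each name; using | as the sed delimiter avoids having to escape the slash. A small sketch with hypothetical node names standing in for real command output:

```shell
# 'kubectl get nodes -o name' prints names with a node/ prefix;
# simulate that output here with hypothetical names.
names='node/worker-1
node/worker-2'

# Strip the prefix so the bare names can be passed to kubectl uncordon
echo "$names" | sed 's|^node/||'
```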
Kubernetes Documentation
Content generated with AI assistance and reviewed for accuracy. Found an error? hello@errcodes.dev