nodeSelector or nodeAffinity does not match any node
Production Risk
The pod never starts, so a workload that requires specific hardware is completely unavailable.
This scheduling failure occurs when a pod's nodeSelector or nodeAffinity rules cannot be satisfied because no node in the cluster has the required labels. The pod remains Pending indefinitely. This is a common misconfiguration when deploying workloads intended for specific node pools (e.g., GPU nodes, spot instances) that do not exist or are not yet available.
1. nodeSelector specifies a label that does not exist on any node
2. nodeAffinity requiredDuringSchedulingIgnoredDuringExecution rules are unsatisfiable
3. The node pool with the required label was deleted or not yet created
4. The label was removed from nodes after the deployment was created
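To make the first cause concrete, here is a minimal sketch of a pod manifest that triggers this failure; the `gpu=true` label key/value and pod name are assumptions for illustration:

```yaml
# Hypothetical example: this pod stays Pending unless some node
# in the cluster carries the label gpu=true.
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  nodeSelector:
    gpu: "true"   # no node has this label -> FailedScheduling
  containers:
  - name: app
    image: nginx:1.25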
The pod stays Pending with a FailedScheduling event that mentions the node selector or affinity.
kubectl describe pod mypod | grep -A 10 "Events:"
# Warning  FailedScheduling  0/3 nodes are available:
# 3 node(s) didn't match Pod's node affinity/selector.
kubectl get nodes --show-labels | grep gpu
expected output
Warning FailedScheduling ... 3 node(s) didn't match Pod's node affinity/selector.
Fix 1
Verify node labels match the selector
WHEN Pod has a nodeSelector or nodeAffinity
# Show what the pod requires
kubectl get pod mypod -o jsonpath='{.spec.nodeSelector}'
kubectl get pod mypod -o yaml | grep -A 20 affinity:
# Show labels on all nodes
kubectl get nodes --show-labels
Why this works
Identifies the label mismatch between the pod spec and node labels.
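For reference, the hard nodeAffinity form that the `grep -A 20 affinity:` command above would surface looks like this; it is a sketch, and the `gpu` key and `"true"` value are assumed to match the nodeSelector example in this article:

```yaml
# Sketch of a hard (required) nodeAffinity rule, functionally
# equivalent to nodeSelector gpu=true; it is unsatisfiable if
# no node carries the label.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: gpu
          operator: In
          values: ["true"]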
Fix 2
Label a node to satisfy the selector
WHEN The node pool exists but is missing the required label
kubectl label node <node-name> gpu=true
# Verify
kubectl get node <node-name> --show-labels | grep gpu
Why this works
Adds the required label to a node, making it eligible for the pod to be scheduled.
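If the workload can tolerate running on unlabeled nodes, a softer alternative is preferred (rather than required) affinity, so scheduling degrades gracefully instead of blocking. A sketch, again assuming the hypothetical `gpu=true` label:

```yaml
# Soft preference: the scheduler favors gpu=true nodes but will
# still place the pod elsewhere if none match.
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      preference:
        matchExpressions:
        - key: gpu
          operator: In
          values: ["true"]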
Kubernetes Documentation