Pending (Insufficient memory)
Kubernetes · WARNING · Notable · Scheduling · HIGH confidence

Pod cannot be scheduled due to insufficient memory

Production Risk

Prevents application scaling and can cause deployment failures. If existing pods crash and their replacements cannot be scheduled, it could result in a full service outage.

What this means

The Kubernetes scheduler cannot find a node with enough free memory to meet the pod's memory request. The pod will remain in a pending state until resources are freed up or more are added.

Why it happens
  1. The pod requests more memory than any single node has available
  2. All nodes are at their memory capacity due to other running pods
  3. Node affinity or anti-affinity rules constrain the pod to a set of nodes that lack memory
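
The third cause can be sketched in a pod spec. With a rule like the following (label key and values are hypothetical), only nodes carrying the matching label are scheduling candidates, so only their free memory counts:

```yaml
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: disktype      # hypothetical node label
                operator: In
                values:
                  - ssd            # only nodes labeled disktype=ssd are considered
```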
How to reproduce

A pod stays in the Pending state, and describing it shows an insufficient memory error from the scheduler.

trigger — this will error
kubectl describe pod high-memory-pod

expected output

Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  3m (x15 over 6m)     default-scheduler  0/3 nodes are available: 3 Insufficient memory.
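
A minimal manifest that can trigger this, assuming the request exceeds the allocatable memory of every node (pod name matches the example above; image and size are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: high-memory-pod
spec:
  containers:
    - name: app
      image: nginx              # any image; only the request matters to the scheduler
      resources:
        requests:
          memory: "64Gi"        # larger than any node's free memory
        limits:
          memory: "64Gi"
```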

Fix 1

Inspect node memory resources

WHEN To verify how much memory is available across the cluster

Inspect node memory resources
kubectl top nodes

Why this works

The `top nodes` command shows the current memory usage and capacity for each node, making it easy to spot which nodes are under pressure.

Fix 2

Adjust the pod's memory request

WHEN The pod's memory request is unnecessarily high

Adjust the pod's memory request
kubectl edit deployment my-app-deployment

Why this works

Lowering the container's `resources.requests.memory` value reduces the amount of memory the scheduler must reserve, which may allow the pod to fit on a node with limited free memory. Only lower the request if the application can genuinely run within it; otherwise the pod risks being OOMKilled after it is scheduled.
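
In the editor, the change is a sketch like this (container name and sizes are hypothetical):

```yaml
spec:
  template:
    spec:
      containers:
        - name: my-app
          resources:
            requests:
              memory: "512Mi"   # lowered from a larger value, e.g. "4Gi"
            limits:
              memory: "1Gi"     # keep a limit the application can stay within
```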

Fix 3

Enable cluster autoscaling

WHEN Running in a cloud environment and wanting to automatically add capacity

Enable cluster autoscaling
gcloud container clusters update my-cluster --enable-autoscaling --min-nodes=3 --max-nodes=10

Why this works

The cluster autoscaler adds nodes when it detects pods stuck in Pending due to resource constraints, then removes them when they are no longer needed. The exact command is cloud-specific; the example above is for Google Kubernetes Engine.

What not to do

Set the pod's memory request to 0

This makes scheduling decisions difficult for Kubernetes and can lead to the pod being placed on a node where it will be immediately OOMKilled if it uses any significant amount of memory.
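
For contrast, the anti-pattern looks like this in a container spec:

```yaml
resources:
  requests:
    memory: "0"   # anti-pattern: the scheduler reserves nothing, so the pod can
                  # land on a nearly full node and be OOMKilled on first real use
```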

Sources
Official documentation

k8s.io/kubernetes/pkg/scheduler/framework/plugins/noderesources/fit.go

Assign Memory Resources to Containers and Pods

Content generated with AI assistance and reviewed for accuracy. Found an error? hello@errcodes.dev
