Container keeps crashing and restarting
A pod is stuck in a crash loop. Kubernetes starts the container, the container exits, and the kubelet restarts it with an exponentially increasing backoff delay — the CrashLoopBackOff status.
1. Application exits immediately due to a fatal startup error
2. Missing required environment variable or ConfigMap key
3. Liveness probe fails repeatedly, causing forced restarts
4. Incorrect command or arguments in the container spec
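Causes 2 and 4 are spec-level mistakes that are easy to spot in the manifest. A minimal illustrative pod spec (the image, command, and ConfigMap names are placeholders, not taken from any real workload):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: myapp
      image: myapp:1.0            # placeholder image
      command: ["/app/server"]    # must exist in the image; a typo here exits the container immediately
      args: ["--port=8080"]
      env:
        - name: DB_HOST
          valueFrom:
            configMapKeyRef:
              name: app-config    # if this ConfigMap or key is missing, the container cannot start
              key: db_host
```

If the binary path in `command` does not exist in the image, the container terminates at once and the pod enters the crash loop.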
A pod starts, crashes within seconds, and Kubernetes applies exponential backoff before restarting.
kubectl get pods
Expected output:

```
NAME    READY   STATUS             RESTARTS   AGE
mypod   0/1     CrashLoopBackOff   5          3m
```
Fix 1
Inspect the previous container logs
WHEN To understand why the container exited
kubectl logs mypod --previous --tail=50
Why this works
The --previous flag shows the logs of the last terminated instance of the container, captured before the most recent restart.
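Alongside the log text, the exit code of the last terminated container often identifies the failure class (for example, 137 usually means the container was SIGKILLed, often by the OOM killer; 1 is a generic application error). A sketch of reading it with jsonpath:

```shell
# Print the exit code of the previously terminated container
kubectl get pod mypod \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated.exitCode}'
```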
Fix 2
Check environment variables and mounted configuration
WHEN The app fails due to missing configuration
kubectl describe pod mypod
Why this works
Describe shows pod events and environment bindings, which can reveal missing ConfigMap or Secret references that cause the container to exit.
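If the events point at a missing key, you can confirm what the ConfigMap actually contains and what the pod expects (the name `app-config` is a placeholder for whatever your spec references):

```shell
# Dump the ConfigMap the pod references to check its keys
kubectl get configmap app-config -o yaml

# Show only the env section of the first container's spec
kubectl get pod mypod -o jsonpath='{.spec.containers[0].env}'
```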
Fix 3
Exec into a running container for debugging
WHEN The container stays up long enough to attach a shell
kubectl exec -it mypod -- /bin/sh
Why this works
Provides an interactive shell to check file paths, network connectivity, and application state.
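When the container crashes too quickly to exec into, an ephemeral debug container is an alternative on clusters that support `kubectl debug` (the target container name `myapp` is a placeholder):

```shell
# Attach an ephemeral busybox container sharing the crashing container's process namespace
kubectl debug mypod -it --image=busybox --target=myapp
```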
✕ Set restartPolicy: Never to stop the crash loop
This hides the root cause. You should fix the underlying application error instead.
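If the crash loop is driven by a failing liveness probe (cause 3 above), the legitimate fix is usually probe tuning rather than disabling restarts. A sketch of a more forgiving probe, with illustrative endpoint and timings that you would adjust to your application's real startup time:

```yaml
livenessProbe:
  httpGet:
    path: /healthz           # illustrative health endpoint
    port: 8080
  initialDelaySeconds: 30    # give the app time to start before the first probe
  periodSeconds: 10
  failureThreshold: 3        # restart only after 3 consecutive failures
```

For apps with genuinely slow startup, a separate startupProbe keeps the liveness probe from firing until the app is ready.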
Content generated with AI assistance and reviewed for accuracy. Found an error? hello@errcodes.dev