Container killed by failing liveness probe
Production Risk
Repeated container kills cause service interruptions and a climbing RESTARTS count.
When the liveness probe fails consecutively beyond failureThreshold, Kubernetes kills and restarts the container. The liveness probe is intended to detect a deadlocked or otherwise unrecoverable application state. However, a misconfigured liveness probe (overly aggressive thresholds, wrong endpoint or port) can cause unnecessary container kills and service disruption.
1. Application entered a deadlocked state and cannot respond to the liveness endpoint
2. Liveness probe endpoint path or port is misconfigured
3. timeoutSeconds or failureThreshold set too low, causing probes to fail during normal load spikes
4. Downstream dependency failure causes the liveness endpoint to return errors
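Cause 4 is worth special attention: a liveness endpoint that pings databases or other services will get the container killed whenever a dependency flaps, even though a restart cannot fix the dependency. A minimal sketch of a safer decision rule, where the handler reports alive based only on the app's own worker-loop heartbeat (the function name and 30-second stall limit are hypothetical, not from this article):

```python
import time

def liveness_ok(last_heartbeat: float, max_stall_seconds: float = 30.0) -> bool:
    # Hypothetical liveness check: succeed only if the app's own worker loop
    # has heartbeated recently. Downstream dependency health is deliberately
    # excluded so a flaky dependency cannot trigger container restarts.
    return (time.monotonic() - last_heartbeat) <= max_stall_seconds
```

The /healthz handler would call this and return 200 when it is true, 503 otherwise; dependency health belongs in a readiness probe, which removes the pod from service without restarting it.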
Container restarts repeatedly; events show liveness probe failures.
kubectl describe pod mypod | grep -A 5 "Events:"
# Warning  Unhealthy  kubelet  Liveness probe failed: HTTP probe failed with
# statuscode: 500
kubectl get events --field-selector involvedObject.name=mypod,reason=Unhealthy
expected output
Warning Unhealthy ... Liveness probe failed: HTTP probe failed with statuscode: 500
Fix 1
Test liveness endpoint from inside the container
WHEN Probe may be hitting the wrong path or port
kubectl exec mypod -- wget -qO- http://localhost:8080/healthz
kubectl describe pod mypod | grep -A 10 "Liveness:"
Why this works
Verifies the probe endpoint is accessible and returning a healthy response.
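To reason about what "healthy response" means here, a sketch of the check kubelet performs for an httpGet probe can help: kubelet treats any HTTP status from 200 up to (but not including) 400 as success, and a timeout or connection error as failure. This reimplementation (a simplification; the real kubelet also sets probe headers) lets you test an endpoint with the same pass/fail rule:

```python
import urllib.request
import urllib.error

def probe_succeeds(url: str, timeout_seconds: float) -> bool:
    # Mirror kubelet's httpGet probe rule: status 200-399 is success;
    # any connection error, timeout, or 4xx/5xx status is failure.
    try:
        with urllib.request.urlopen(url, timeout=timeout_seconds) as resp:
            return 200 <= resp.status < 400
    except (urllib.error.URLError, OSError):
        return False
```

Running this against the in-container URL with your configured timeoutSeconds shows whether the failure is the endpoint itself (wrong path/port, 500s) or slowness relative to the timeout.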
Fix 2
Tune probe thresholds to avoid false positives
WHEN Probe fails during legitimate load spikes
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 15
  timeoutSeconds: 5
  failureThreshold: 5
Why this works
Higher failureThreshold and timeoutSeconds tolerate transient slowness without killing the container.
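The trade-off is detection latency: kubelet only restarts the container after failureThreshold consecutive failures, so a rough upper bound on how long a genuinely dead container survives is periodSeconds × failureThreshold (this sketch ignores the extra timeoutSeconds each failed attempt can add):

```python
def restart_delay_upper_bound(period_seconds: int, failure_threshold: int) -> int:
    # Rough worst-case seconds a truly dead container keeps running before
    # kubelet accumulates failureThreshold consecutive probe failures.
    # Ignores per-attempt timeoutSeconds, so treat it as an approximation.
    return period_seconds * failure_threshold
```

With the manifest above (periodSeconds: 15, failureThreshold: 5), a deadlocked container may run for about 75 seconds before restart, versus about 30 seconds with the defaults (periodSeconds: 10, failureThreshold: 3). Pick values that tolerate your real load spikes but still bound that window.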
Kubernetes Documentation
Content generated with AI assistance and reviewed for accuracy. Found an error? hello@errcodes.dev