Startup probe failed
Kubernetes · Error · Notable · Container Error · High confidence

Container killed before startup probe passed

Production Risk

Application restarts before it can serve traffic; slow-start services are particularly vulnerable.

What this means

The startup probe protects slow-starting containers by disabling liveness and readiness probes until the startup probe succeeds. If the startup probe fails beyond its failureThreshold, Kubernetes kills the container. This is most commonly seen with legacy applications or JVM-based services that have long warm-up times.
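As a sketch, this interplay looks like the following pod spec fragment. The `/healthz` path, port, image name, and thresholds are illustrative, not prescriptive:

```yaml
# Illustrative fragment: the liveness probe does not start running
# until the startup probe has succeeded once.
containers:
- name: app
  image: myapp:latest        # hypothetical image
  startupProbe:
    httpGet:
      path: /healthz
      port: 8080
    failureThreshold: 30     # up to 30 × 10s = 300s allowed for startup
    periodSeconds: 10
  livenessProbe:
    httpGet:
      path: /healthz
      port: 8080
    periodSeconds: 10        # only begins after startup succeeds
```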

Why it happens
  1. Application takes longer to start than failureThreshold × periodSeconds allows
  2. Startup probe endpoint path or port is wrong
  3. Application crashes during the startup phase before the probe endpoint becomes available
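For cause 1, the budget is simple arithmetic. A quick sketch with hypothetical values (read the real ones from your pod spec, e.g. with `kubectl get pod mypod -o yaml`):

```shell
# Startup budget = failureThreshold × periodSeconds.
# If the app reliably needs longer than this, the container is killed.
FAILURE_THRESHOLD=30
PERIOD_SECONDS=10
BUDGET=$(( FAILURE_THRESHOLD * PERIOD_SECONDS ))
echo "startup budget: ${BUDGET}s"   # 300s with these values
```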
How to reproduce

Container restarts during startup phase; events show startup probe failures.

trigger — this will error
kubectl describe pod mypod | grep -A 5 "Events:"
# Warning  Unhealthy  kubelet  Startup probe failed: HTTP probe failed

kubectl describe pod mypod | grep -A 10 "Startup:"

expected output

Warning  Unhealthy  ...  Startup probe failed: HTTP probe failed with statuscode: 000

Fix

Increase startup probe failureThreshold

When: the application starts slowly but reliably

startupProbe:
  httpGet:
    path: /healthz
    port: 8080
  failureThreshold: 30   # 30 × 10s = 5 minutes max startup time
  periodSeconds: 10

Why this works

Allows the application up to failureThreshold × periodSeconds seconds to complete startup.
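Since only the product matters, the same 5-minute budget can also be expressed with fewer, less frequent checks; for example (illustrative values):

```yaml
startupProbe:
  httpGet:
    path: /healthz
    port: 8080
  failureThreshold: 10   # 10 × 30s = 5 minutes, same budget, fewer probes
  periodSeconds: 30
```

The trade-off is responsiveness: with a longer periodSeconds, Kubernetes may take up to one extra period to notice the application has finished starting.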

Sources
Kubernetes Documentation (official documentation)
