UNAVAILABLE
gRPC error · Availability · HIGH confidence

The service is currently unavailable

What this means

Indicates the service is temporarily unable to handle the request. This is a classic transient condition and clients should retry, ideally with backoff.

Why it happens
  1. The server is down for maintenance or deployment.
  2. A downstream dependency of the service is unavailable.
  3. The server is overloaded and cannot accept new connections, or a proxy cannot reach any healthy backends.

How to reproduce

A client attempts to call a gRPC service that is currently offline or restarting.

trigger — this will error
// Calling a method on a gRPC service that is offline or restarting
// (client setup assumed; in @grpc/grpc-js, e.code carries the gRPC status code)
try {
  const response = await client.myMethod(request);
} catch (e) {
  if (e.code === grpc.status.UNAVAILABLE) {
    // Status code 14: the server, or a proxy in front of it, is unreachable
    console.error(e.message);
  }
}

expected output

StatusCode.UNAVAILABLE: The service is currently unavailable

Fix 1

Implement Retry with Exponential Backoff

WHEN The service is temporarily overloaded or restarting.

// Retry with exponential backoff: waits 100 ms, 200 ms, 400 ms, 800 ms between attempts
for (let attempt = 0; attempt < 5; attempt++) {
  try {
    return await client.myMethod(request);
  } catch (e) {
    if (e.code === grpc.status.UNAVAILABLE && attempt < 4) {
      const delay = 100 * Math.pow(2, attempt);
      await new Promise(resolve => setTimeout(resolve, delay));
    } else {
      throw e; // non-transient error, or retries exhausted
    }
  }
}

Why this works

Exponential backoff avoids overwhelming the service by increasing the delay between retries, giving it time to recover.
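Adding random jitter on top of exponential backoff spreads retries from many clients over time instead of synchronizing them. A minimal "full jitter" sketch (the helper names `backoffDelay` and `callWithJitteredRetry` are illustrative, not part of any gRPC API):

```javascript
// Full-jitter backoff: pick a random delay in [0, min(cap, base * 2^attempt)].
// Randomness prevents many clients from all retrying at the same instant.
function backoffDelay(attempt, baseMs = 100, capMs = 10000) {
  const ceiling = Math.min(capMs, baseMs * Math.pow(2, attempt));
  return Math.random() * ceiling;
}

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function callWithJitteredRetry(fn, maxAttempts = 5) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (e) {
      // Only retry transient UNAVAILABLE errors; rethrow everything else.
      if (e.code !== 14 /* grpc.status.UNAVAILABLE */ || attempt >= maxAttempts - 1) {
        throw e;
      }
      await sleep(backoffDelay(attempt));
    }
  }
}
```

Wrapping the call (e.g. `callWithJitteredRetry(() => client.myMethod(request))`) keeps the retry policy in one place instead of repeating the loop at every call site.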

Fix 2

Use a Service Mesh with Automatic Retries

WHEN You want a platform-level solution in a microservices environment.

# A configuration change in a service mesh such as Istio or Linkerd.
# Example Istio VirtualService retry policy:
http:
- route:
  - destination:
      host: my-service
  retries:
    attempts: 3
    perTryTimeout: 2s
    retryOn: unavailable

Why this works

A service mesh can automatically handle retries for UNAVAILABLE errors, making individual clients more resilient without code changes.
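Without a mesh, gRPC itself can retry automatically through a client-side service config with a retryPolicy. A sketch using the @grpc/grpc-js channel option (the client class and target address are placeholders):

```javascript
// gRPC's built-in retry support: a service config with a retryPolicy,
// passed to the client via the 'grpc.service_config' channel option.
const serviceConfig = {
  methodConfig: [{
    name: [{}], // an empty name matches every method; narrow per service/method if needed
    retryPolicy: {
      maxAttempts: 5,
      initialBackoff: '0.1s',
      maxBackoff: '2s',
      backoffMultiplier: 2,
      retryableStatusCodes: ['UNAVAILABLE'],
    },
  }],
};

// Usage with @grpc/grpc-js (assumes a generated MyServiceClient):
// const client = new MyServiceClient('my-service:50051',
//   grpc.credentials.createInsecure(),
//   { 'grpc.service_config': JSON.stringify(serviceConfig) });
```

Like the mesh approach, this keeps retry logic out of application code, but the policy ships with the client rather than the platform.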

What not to do

Retry immediately in a tight loop without backoff

Causes a 'thundering herd' problem and can prevent a recovering service from coming back online.
