Resource limit exceeded
Production Risk
Common in high-concurrency services; set ulimits in service configuration, not just the interactive shell.
When a process exceeds a resource limit set by ulimit, the kernel either sends a signal (SIGXCPU for CPU time, SIGXFSZ for file size, SIGSEGV for stack overflow) or makes the offending syscall fail (EMFILE for open files, ENOMEM for memory, EFBIG for writes past the file-size limit). Scripts then fail with the exit code of the killed or erroring process.
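The exit code for a signal-killed process follows the shell convention 128 + signal number. A quick way to see this (on Linux, where SIGXCPU is signal 24) is to deliver the signal by hand rather than actually burning CPU time:

```shell
#!/bin/bash
# A process killed by signal N exits with status 128+N.
# SIGXCPU is 24 on Linux, so a CPU-limit kill surfaces as exit code 152
# (and SIGXFSZ, signal 25, as 153).
bash -c 'kill -s XCPU $$'   # simulate the kernel delivering SIGXCPU
echo "exit code: $?"        # prints "exit code: 152" on Linux
```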
- ulimit -n (open files) exceeded — too many open file descriptors
- ulimit -v (virtual memory) exceeded — process uses too much memory
- ulimit -s (stack size) exceeded — deep recursion or large stack allocations
- ulimit -u (processes) exceeded — cannot fork new processes
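Before debugging any of these, it helps to know what the current limits actually are. Each limit has a soft value (enforced, but raisable by the process itself) and a hard value (the ceiling for the soft limit). A short inspection sketch, using the Linux-specific /proc interface for the kernel's view:

```shell
#!/bin/bash
# Soft vs. hard limits for open files: the soft limit is what is enforced;
# an unprivileged process may raise it up to the hard limit.
ulimit -Sn          # soft open-files limit
ulimit -Hn          # hard open-files limit
ulimit -a           # all soft limits for this shell

# The kernel's view of any process's limits (Linux-specific), here our own:
cat /proc/$$/limits
```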
Script opens too many files, hitting the ulimit -n limit.
#!/bin/bash
# Check current limits
ulimit -n # open files limit (commonly 1024)
# Hit the file descriptor limit
for i in $(seq 1 2000); do
  exec {fd}<>/tmp/testfile$i
done

Expected output:
bash: /tmp/testfile1025: Too many open files
Exit: 1
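To see how close a script is to this ceiling before it fails, count its open descriptors. A minimal sketch using the Linux-specific /proc filesystem (every process's open fds appear as entries under /proc/PID/fd):

```shell
#!/bin/bash
# Count this shell's open file descriptors and compare against ulimit -n.
# A shell always has at least stdin (0), stdout (1), and stderr (2) open.
used=$(ls /proc/$$/fd | wc -l)
limit=$(ulimit -n)
echo "using $used of $limit file descriptors"
```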
Fix 1
Increase the file descriptor limit
WHEN a service opens many files or connections
# In the script or service unit file:
ulimit -n 65536

# Permanently via /etc/security/limits.conf:
# myuser soft nofile 65536
# myuser hard nofile 65536

# For systemd services:
# [Service]
# LimitNOFILE=65536
Why this works
Raising the fd limit allows more simultaneous open files. The soft limit can be raised per process with ulimit (up to the hard limit), for login sessions via limits.conf, or for systemd services via LimitNOFILE= (systemd services do not read limits.conf).
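The soft/hard distinction matters here: no root access is needed to raise the soft limit as long as it stays at or below the hard limit. A sketch, run in a subshell so the change does not leak into the parent shell:

```shell
#!/bin/bash
# Raise the soft open-files limit to the hard ceiling; this needs no
# privileges because the soft limit may go anywhere up to the hard limit.
(
  hard=$(ulimit -Hn)
  ulimit -Sn "$hard"
  echo "soft limit now: $(ulimit -Sn)"
)
# Outside the subshell, the original soft limit is unchanged.
```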
Fix 2
Close file descriptors when done
WHEN a script accumulates open fds in a loop
#!/bin/bash
for file in /data/*.log; do
  process_file "$file"
  # Ensure any fds opened in process_file are closed
  # (use subshells to auto-close fds)
done
Why this works
Fds are a finite per-process resource; close them explicitly (or open them inside a subshell so they close automatically on exit) rather than letting them accumulate until the process terminates.
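Concretely, closing a descriptor allocated with bash's {fd} syntax (bash 4.1+) uses the matching `exec {fd}<&-` form. A minimal sketch with a temporary file standing in for real work:

```shell
#!/bin/bash
tmp=$(mktemp)
echo "hello" > "$tmp"

for i in 1 2 3; do
  exec {fd}< "$tmp"     # open; bash picks a free fd (10 or above)
  read -r line <&"$fd"  # use the descriptor
  exec {fd}<&-          # close it so fds don't accumulate across iterations
done

rm -f "$tmp"
echo "read: $line"      # prints "read: hello"
```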
Content generated with AI assistance and reviewed for accuracy. Found an error? hello@errcodes.dev