Quickly find per-process and system file descriptor limits and stop "Too many open files" from breaking services.

06.01.2026 | reading time: 3 min

Ever seen a service fail with "Too many open files" and wondered where to start? This short guide shows how to inspect per-process and kernel-wide file descriptor limits and what to do when they are too low.

## A service hits the ceiling

A daemon crashes with file access errors; the first step is to check the limits and the current usage, for example:

```bash
$ ulimit -n
1024
$ sysctl fs.file-max
fs.file-max = 200000
$ lsof | wc -l
3587
$ prlimit --pid 1234 --nofile
RESOURCE DESCRIPTION              SOFT HARD UNITS
NOFILE   max number of open files 1024 4096 files
```

This shows a per-process soft limit of 1024, a much higher kernel-wide cap, and roughly 3.5k files currently open on the whole host, which helps pinpoint whether the problem is per-process or global.

## Soft vs hard, user vs system

Limits come in two flavors: soft (what the process actually sees) and hard (the maximum a process may raise its soft limit to). `ulimit -n` reports the shell's soft limit, `prlimit` inspects running processes, and `/proc/sys/fs/file-max` or `sysctl fs.file-max` reports the kernel-wide cap. Remember that changing `ulimit` in a shell is temporary; persistent changes need `limits.conf` or systemd settings.

## Where to make fixes

To raise limits persistently for a service, use `/etc/security/limits.conf` or a systemd drop-in with `LimitNOFILE=`, then restart the service. To increase the kernel-wide ceiling, run `sudo sysctl -w fs.file-max=NUM` or edit `/etc/sysctl.conf` and reload. Small, surgical increases are safer than open-ended values.

## Fast triage tools

When diagnosing, list open files with `lsof`, count sockets with `ss` or `netstat`, inspect a process's limits with `prlimit`, and check a systemd service's configured limit with `systemctl show --property=LimitNOFILE myservice.service`. Combine these to decide whether to tune the process, the service unit, or the kernel.

## Final diagnosis and next steps

Start by reproducing the failure under controlled load, measure open-file usage with `lsof` and `/proc` data, apply the smallest persistent limit bump that removes the failure, and automate monitoring to detect regressions early; a few worked sketches follow below. Keep learning and aim for certifications like CompTIA Linux+ or LPIC-1, and consider intensive exam preparation at bitsandbytes.academy.
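To put the advice above into practice, here are a few short sketches. The service name myservice, PID 1234, user appuser, and all limit values in them are placeholders for illustration, not settings from a real host. First, soft versus hard limits as seen from an interactive shell:

```bash
# Soft vs hard NOFILE limits in a shell (values are illustrative).
ulimit -Sn          # soft limit: what processes started from this shell see
ulimit -Hn          # hard limit: the ceiling an unprivileged process may raise to
ulimit -Sn 4096     # raise only the soft limit; cannot exceed the hard limit
ulimit -n           # the change is temporary and disappears when the shell exits
```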
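Next, a minimal sketch of the persistent fixes from "Where to make fixes", assuming a hypothetical systemd service myservice and application user appuser:

```bash
# 1) Per-service: systemd drop-in raising the open-file limit for myservice.
sudo mkdir -p /etc/systemd/system/myservice.service.d
sudo tee /etc/systemd/system/myservice.service.d/limits.conf <<'EOF'
[Service]
LimitNOFILE=8192
EOF
sudo systemctl daemon-reload
sudo systemctl restart myservice.service

# 2) Per-user: /etc/security/limits.conf entries (apply to new login
#    sessions of appuser, not to systemd services).
echo 'appuser soft nofile 8192'  | sudo tee -a /etc/security/limits.conf
echo 'appuser hard nofile 16384' | sudo tee -a /etc/security/limits.conf

# 3) Kernel-wide ceiling: persistent sysctl setting, then reload.
echo 'fs.file-max = 200000' | sudo tee /etc/sysctl.d/90-file-max.conf
sudo sysctl --system
```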
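Then a triage pass over a single process, combining the tools from "Fast triage tools"; PID 1234 and myservice are again placeholders:

```bash
# Quick triage of one process: usage vs limits, sockets, unit and kernel state.
PID=1234

sudo ls /proc/"$PID"/fd | wc -l           # descriptors currently open
sudo lsof -p "$PID" | wc -l               # similar count via lsof
prlimit --pid "$PID" --nofile             # soft/hard NOFILE limits
grep 'open files' /proc/"$PID"/limits     # same data straight from /proc

sudo ss -tanp | grep -c "pid=$PID,"       # are TCP sockets the main consumer?

systemctl show --property=LimitNOFILE myservice.service   # limit the unit starts with
cat /proc/sys/fs/file-nr                  # allocated / unused / system-wide max
```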
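Finally, a tiny monitoring hook along the lines of "Final diagnosis and next steps": it warns when a process approaches its soft limit. The 80% threshold and the PID are assumptions for illustration:

```bash
#!/usr/bin/env bash
# Minimal monitoring sketch: warn when a process nears its soft NOFILE limit.
PID=1234   # placeholder: the process to watch

# Soft limit, read from /proc (4th field of the "Max open files" line).
soft=$(awk '/Max open files/ {print $4}' /proc/"$PID"/limits)
# Descriptors currently open by that process.
used=$(sudo ls /proc/"$PID"/fd | wc -l)

# Alert at 80% of the soft limit (threshold is an arbitrary example value).
if [ "$used" -gt $(( soft * 80 / 100 )) ]; then
    echo "warning: PID $PID uses $used of $soft allowed file descriptors"
fi
```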