Keep runaway programs in check by restricting what a shell and its child processes may consume.

16.11.2025 | reading time: 2 min

On a Linux shell, `ulimit` is the quickest way to cap what a process may use, so that a single buggy program cannot exhaust file descriptors, memory, or CPU time. This guide shows how to inspect, change, and persist limits.

## See it in action

Try this safe demo in a shell to observe a file descriptor limit causing a program to fail:

```bash
$ ulimit -n
1024
$ ulimit -n 10
$ python3 - <<'PY'
files = []
for i in range(20):
    files.append(open('/dev/null'))
PY
Traceback (most recent call last):
  File "<stdin>", line 3, in <module>
OSError: [Errno 24] Too many open files
```

## Common options that matter

The most useful flags are `-n` (open files), `-u` (max user processes), `-c` (core file size), `-v` (virtual address space), and `-f` (file size). Remember that a user can raise a soft limit only up to the hard limit, and only root can raise hard limits, so choose values that prevent outages while still allowing normal workloads.

## Make limits stick for users and services

For interactive shells, persist limits in `/etc/security/limits.conf` or in per-user drop-in files; system startup and PAM determine which limits are applied to a login session. For services, set limits in systemd units with directives such as `LimitNOFILE=`, or use `prlimit` to adjust the limits of an already-running process.

## When ulimit is not enough

When you need hierarchical quotas, per-cgroup accounting, or fine-grained CPU and memory isolation, reach for cgroups or systemd slices: `ulimit` enforces per-process resource caps but cannot shape or limit whole groups of processes over time.

## Final step

Mastering `ulimit` is a small but powerful step toward system reliability. Next, learn how limits interact with init systems and containers, and consider formal study such as CompTIA Linux+ or LPIC-1 with intensive exam preparation at bitsandbytes.academy.
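For persistence, the PAM and systemd mechanisms mentioned above look roughly like the following sketches. The user name, group, unit name, and values are hypothetical placeholders:

```
# /etc/security/limits.conf — applied by PAM at login
# <domain>   <type>  <item>   <value>
alice         soft    nofile   4096
alice         hard    nofile   8192
@developers   soft    nproc    2048
```

```ini
# /etc/systemd/system/myapp.service.d/limits.conf — drop-in for a hypothetical service
[Service]
LimitNOFILE=65536
LimitCORE=0
```

After editing a drop-in like this, reload with `systemctl daemon-reload` and restart the service; limits.conf changes take effect at the next login, not for already-running sessions.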
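Where group-level control is needed, the systemd-slice approach mentioned above might look like this sketch of a hypothetical slice unit; the name and values are assumptions:

```ini
# /etc/systemd/system/batch.slice — hypothetical cgroup slice for batch jobs
[Slice]
MemoryMax=2G
CPUQuota=50%
TasksMax=512
```

Services placed in this slice (for example via `Slice=batch.slice` in a unit, or `systemd-run --slice=batch.slice`) then share those budgets as a group, which per-process `ulimit` values cannot express.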
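The soft/hard distinction described above can be demonstrated directly in a shell. This is a minimal sketch with illustrative values; the exact numbers are assumptions, and your environment's initial hard limit may differ:

```shell
# Soft vs. hard limits for open files (assumes a Bash shell).
ulimit -S -n 256                  # soft limit: may be raised later, up to the hard limit
ulimit -H -n 512                  # hard limit: a ceiling only root can lift again
ulimit -S -n 512                  # OK: soft limit raised up to the hard limit
ulimit -S -n 1024 2>/dev/null \
  || echo "cannot exceed the hard limit"   # fails: 1024 > hard limit of 512
```

Note that lowering a hard limit is irreversible for the current shell and everything it spawns, which is exactly what makes it useful for containing runaway children.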
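The `prlimit` tool from util-linux, mentioned above for running processes, can both inspect and rewrite a process's limits without a restart. A sketch, using a throwaway `sleep` as a stand-in target and assumed illustrative values:

```shell
# Adjust the open-file limit of an already-running process with prlimit.
sleep 300 &                               # stand-in for a long-running service
pid=$!
prlimit --pid "$pid" --nofile=64:128      # set soft=64, hard=128 on the live process
soft=$(prlimit --pid "$pid" --nofile --output SOFT --noheadings)
echo "soft open-file limit is now: $soft"
kill "$pid"
```

Changing another process's limits requires the same user (or `CAP_SYS_RESOURCE` to raise hard limits), so run it as the service's owner or as root.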
Tags: utilities, security, processes, scripting, troubleshooting