Hands-on steps to create, limit and read cgroup resource metrics so you can watch a process under control.

31.12.2025 | reading time: 3 min

Control groups (cgroups) let you measure and constrain CPU, memory and I/O per group; this short guide shows how to enable controllers, move a process into a cgroup and read the files that report usage, so you can observe real resource accounting.

## Hands-on demo

Follow these steps as root (or prefix with sudo) to try cgroup v2 monitoring; the first block shows the commands, the second shows example output.

```
# list available controllers
sudo cat /sys/fs/cgroup/cgroup.controllers

# enable the cpu and memory controllers at the root and create a child cgroup
# (each controller needs its own "+" prefix)
sudo sh -c 'echo "+cpu +memory" > /sys/fs/cgroup/cgroup.subtree_control'
sudo mkdir -p /sys/fs/cgroup/demo

# start a CPU burner in the background and note the printed PID
yes > /dev/null &

# replace 12345 with the PID shown by your shell
sudo sh -c 'echo 12345 > /sys/fs/cgroup/demo/cgroup.procs'

# limit CPU to roughly 50% and then read the stats
sudo sh -c 'echo "50000 100000" > /sys/fs/cgroup/demo/cpu.max'
sudo cat /sys/fs/cgroup/demo/cpu.stat
sudo cat /sys/fs/cgroup/demo/memory.current
```

```
# sample cpu.stat output
usage_usec 250000
user_usec 200000
system_usec 50000
nr_periods 1000
nr_throttled 2
throttled_usec 5000

# sample memory.current output
209715200
```

## Reading the numbers

The files in the cgroup directory are the API: `cpu.stat` shows total usage in microseconds plus throttle counters, `memory.current` reports bytes in use, and `memory.events` counts OOM and reclaim events; check `cpu.pressure` for CPU pressure (PSI) metrics and `io.stat` for block I/O accounting to get the full picture.
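To turn `cpu.stat` into a single readable figure, a small awk sketch can compute how much of the cgroup's CPU time was spent throttled. The function name `throttle_report` is just an illustration; it reads from stdin so you can feed it either a real `cpu.stat` file or sample data.

```shell
# Parse cpu.stat-style key/value lines and report the throttle ratio.
throttle_report() {
  awk '
    /^usage_usec/     { usage = $2 }
    /^throttled_usec/ { throttled = $2 }
    END {
      if (usage > 0)
        printf "throttled %.1f%% of %d usec used\n", throttled * 100 / usage, usage
    }'
}

# example with the sample output shown above:
throttle_report <<'EOF'
usage_usec 250000
user_usec 200000
system_usec 50000
nr_periods 1000
nr_throttled 2
throttled_usec 5000
EOF
# → throttled 2.0% of 250000 usec used
```

On a live system you would run `throttle_report < /sys/fs/cgroup/demo/cpu.stat` instead of the here-doc.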
## Tuning and limits

Set limits by writing to the control files: `echo "50000 100000" > cpu.max` restricts the CPU quota to 50 ms per 100 ms period, `echo 300000000 > memory.max` caps memory in bytes, and `echo "8:0 rbps=1048576 wbps=1048576" > io.max` throttles reads and writes on device 8:0 to 1 MiB/s each. Changes take effect immediately and apply to all PIDs in the cgroup.

## Where it fits

Use cgroups for containers, transient systemd scopes, batch jobs and debugging a runaway process; move a single PID into a dedicated cgroup to isolate its behavior, or instrument many services to compare their real consumption under load.

## Tools that help

Short interactive tools and libraries make monitoring easier: `systemd-cgtop` gives a live top-like view, the `libcgroup` utilities help with scripted setups, and container runtimes expose cgroup metrics through higher-level commands.

## Final step

Start experimenting with small cgroups on a disposable VM, compare the raw files under `/sys/fs/cgroup` with the systemd view, and then expand into orchestration or performance tuning; to deepen your Linux skills, consider formal certification such as CompTIA Linux+ or LPIC-1 and intensive exam prep at bitsandbytes.academy.
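The quota/period pair written to `cpu.max` maps directly to a CPU percentage; a tiny helper (the name `cpu_pct` is hypothetical, for illustration only) makes the arithmetic explicit and handles the special value `max`, which means unlimited:

```shell
# cpu_pct QUOTA PERIOD — integer CPU percentage implied by a cpu.max pair.
cpu_pct() {
  local quota=$1 period=$2
  if [ "$quota" = "max" ]; then
    # "max" in cpu.max means no quota is enforced
    echo "unlimited"
  else
    # quota and period are both in microseconds, so the ratio is unit-free
    echo $(( quota * 100 / period ))
  fi
}

cpu_pct 50000 100000   # → 50
cpu_pct max 100000     # → unlimited
```

Reading it back works the same way: `cpu_pct $(cat /sys/fs/cgroup/demo/cpu.max)` expands the file's two fields into the two arguments.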