Find where RAM, caching or swapping slows a host, with practical commands and clear interpretation.

24.01.2026 | reading time: 3 min

Systems get slow for many reasons; memory pressure is a common and often hidden cause. This short guide shows which commands to run, how to read their output, and what to do next.

## A telling symptom and the quick checks

A web node grows sluggish under load. The admin starts by checking overall memory and swap to confirm pressure, for example with these commands and their output:

```bash
$ free -m
              total        used        free      shared  buff/cache   available
Mem:           7973        7420         100         123         452         200
Swap:          2047        1024        1023

$ vmstat 1 3
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b    swpd    free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 2  1 1048576  102400  52000 410848   20   40    10    20  120  250 50 30 10 10

$ ps aux --sort=-rss | head -n 6
USER       PID %CPU %MEM     VSZ     RSS TTY  STAT START  TIME COMMAND
www-data  1234  5.0 50.0 5000000 4096000 ?    Ssl  10:00 20:00 /usr/bin/python3 app.py
```

(Note that `vmstat` and `ps` report kilobytes while `free -m` reports mebibytes, so 1048576 kB of `swpd` matches the 1024 MB of swap in use above.)

## Read the signals and prioritize

If `free -m` shows very little "available" memory and swap usage is high, the system is likely swapping. `vmstat` reports nonzero "si" and "so" when pages move to and from swap, which kills responsiveness. Next, find which processes own the largest RSS with `ps aux --sort=-rss`, and inspect per-process mappings with `pmap -x PID` or `smem` to separate shared from private footprints.

## Drill deeper: kernel and fragmentation

Kernel allocations and slab usage can exhaust usable memory even when user processes look modest. Run `slabtop` to spot large slab consumers, and check `/proc/meminfo` for fields such as `Dirty` and `SReclaimable` (reclaimable slab memory); `/proc/buddyinfo` shows free-page fragmentation. On systems using containers, verify cgroup memory limits under `/sys/fs/cgroup/` (cgroup v2) or `/sys/fs/cgroup/memory/` (v1) to avoid surprise throttling.
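The global checks above can be scripted for monitoring. A minimal sketch, assuming illustrative thresholds (less than 10% of RAM available, 50% or more of swap in use) and a hard-coded `/proc/meminfo` snapshot for demonstration; on a live host you would read `/proc/meminfo` directly:

```shell
#!/bin/sh
# Hypothetical /proc/meminfo snapshot (values in kB) matching the host
# above; replace with: MEMINFO=$(cat /proc/meminfo) on a real system.
MEMINFO='MemTotal:        8164352 kB
MemAvailable:     204800 kB
SwapTotal:       2097152 kB
SwapFree:        1048576 kB'

mem_total=$(printf '%s\n' "$MEMINFO" | awk '/^MemTotal:/ {print $2}')
mem_avail=$(printf '%s\n' "$MEMINFO" | awk '/^MemAvailable:/ {print $2}')
swap_total=$(printf '%s\n' "$MEMINFO" | awk '/^SwapTotal:/ {print $2}')
swap_free=$(printf '%s\n' "$MEMINFO" | awk '/^SwapFree:/ {print $2}')

# Integer percentages; thresholds are illustrative, not canonical.
avail_pct=$((mem_avail * 100 / mem_total))
swap_used_pct=$(((swap_total - swap_free) * 100 / swap_total))

[ "$avail_pct" -lt 10 ] && echo "LOW AVAILABLE: ${avail_pct}% of RAM"
[ "$swap_used_pct" -ge 50 ] && echo "SWAP PRESSURE: ${swap_used_pct}% of swap used"
```

With this snapshot the script reports both conditions, matching the diagnosis drawn from `free -m` above. `MemAvailable` is the right field to test rather than `MemFree`, because it accounts for reclaimable page cache.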
## Tuning and short-term remedies

When a process legitimately needs more memory, consider lowering `vm.swappiness` with `sysctl` to delay swapping, lowering `vm.vfs_cache_pressure` to retain dentry and inode caches, adjusting a process's OOM-killer priority with `echo <value> > /proc/<pid>/oom_score_adj`, or adding swap or physical RAM as a stopgap. For repeated spikes, prefer cgroups or resource limits to protect other services.

## When containers or orchestration hide the truth

Containers may enforce memory limits, so the host can show healthy memory while a container is hitting its limit. Use `docker stats`, `crictl stats` or your orchestration tooling to inspect container-level metrics, translate them back to host RSS/working set, and decide whether to raise the limits or optimize the containerized workloads.

## Final view and next steps

Detecting memory bottlenecks combines quick global checks with targeted per-process inspection and occasional kernel-level analysis. Practice these commands on a lab VM until the signals and fixes become second nature, then pursue formal training to deepen the skills.

Tags: processes, troubleshooting, utilities, infrastructure
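The container check described above reduces to comparing a cgroup's usage against its limit. A minimal sketch with assumed `memory.max`/`memory.current` values; on a real host these would be read from the container's cgroup v2 directory under `/sys/fs/cgroup/`:

```shell
#!/bin/sh
# Hypothetical cgroup v2 readings for one container (bytes); on a live
# host: memory_max=$(cat /sys/fs/cgroup/<group>/memory.max), and
# memory_current from memory.current in the same directory.
memory_max=536870912      # 512 MiB limit (assumed)
memory_current=509607936  # 486 MiB in use (assumed)

# Warn when the container is within 10% of its limit -- the host-wide
# free/vmstat output can still look healthy while this is happening.
used_pct=$((memory_current * 100 / memory_max))
if [ "$used_pct" -ge 90 ]; then
    echo "container near limit: ${used_pct}% of memory.max"
fi
```

Note that `memory.max` can contain the literal string `max` when no limit is set, so a production script would need to handle that case before doing arithmetic.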