If you run more than ten containers on a single Docker host — and in a homelab you almost certainly do — you’ve seen the problem. One container hogs CPU compiling something, Plex stutters. A misconfigured service leaks memory, the OOM killer takes down Postgres instead. Database I/O starves every other container on the same drive.
Docker doesn’t enforce limits by default. Every container can consume all available CPU, all RAM, and saturate the disk. That’s fine for development. It’s a problem for production — and your homelab is production for the services your family depends on.
This post covers how to constrain Docker containers by CPU, memory, and block I/O, with real-world examples for common homelab services.
Why Bother? The “But I Have 16 Cores” Trap
A lack of limits causes three concrete problems:
Noisy neighbors. One runaway process impacts every other container. A Python script in your Frigate container hits an infinite loop, and suddenly your Home Assistant dashboard takes 30 seconds to load.
OOM roulette. The Linux OOM killer doesn’t target the leaky container first. It scores processes by a heuristic (oom_score) dominated by memory footprint, so the biggest legitimate consumer loses. Your Postgres container, with its large but perfectly healthy working set, often dies before the actual memory hog.
Disk saturation. A database container doing a backup or reindex can push disk latency from 2 ms to 200 ms, making every other container that touches storage feel sluggish.
Resource limits turn “hoping for the best” into predictable behavior. Every container gets its fair share, and no container takes down the host.
Memory Limits — The Most Important Constraint
Memory is the resource you absolutely must cap. An unconstrained container that grows until swap fills will lock up the entire host.
--memory (Hard Limit)
The hard limit on physical memory. The container cannot allocate more than this. When it tries, the kernel reclaims memory — or kills the container process.
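A minimal sketch of the flag in use (the container name, image, and 512 MiB value are illustrative):

```shell
# Hard cap: this container can never use more than 512 MiB of RAM
docker run -d --name web --memory=512m nginx
```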
--memory-reservation (Soft Limit)
A soft limit — the kernel tries to keep memory usage at or below this
value, but allows bursts up to --memory. This is the right choice for
services that are usually quiet but can spike.
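In Compose, a sketch using the Compose Specification's deploy.resources section (image and sizes are illustrative):

```yaml
services:
  app:
    image: nginx              # illustrative image
    deploy:
      resources:
        limits:
          memory: 512M        # hard ceiling (--memory)
        reservations:
          memory: 256M        # soft target (--memory-reservation)
```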
In plain Docker:
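A sketch with illustrative values:

```shell
# Soft limit of 256 MiB, bursts allowed up to the 512 MiB hard cap
docker run -d --name app \
  --memory=512m \
  --memory-reservation=256m \
  nginx
```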
--memory-swap (Swap Control)
Controls how much total memory + swap the container can use. By default,
a container with --memory=512M gets unlimited swap. That means when
RAM fills, the kernel swaps — and performance tanks.
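A sketch (sizes illustrative; note that --memory-swap is the total of RAM plus swap):

```shell
# 512 MiB RAM plus up to 512 MiB swap, for 1 GiB total
docker run -d --name app --memory=512m --memory-swap=1g nginx
```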
In Compose:
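The Compose Specification's deploy.resources has no swap knob, but docker compose still honors the legacy service-level keys. A sketch (image and sizes illustrative):

```yaml
services:
  app:
    image: nginx          # illustrative image
    mem_limit: 512m       # --memory
    memswap_limit: 512m   # --memory-swap; equal values disable swap
```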
Pro tip: Always set --memory-swap equal to --memory to disable
swap for critical services. You want OOM kill, not swap death.
OOM Killer Priority
Docker assigns an oom_score_adj of -500 to dockerd and 0 to containers by default. You can bias which container the kernel kills first:
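A sketch with hypothetical container names (the flag accepts values from -1000 to 1000):

```shell
# Positive adjustment: preferred OOM victim
docker run -d --name expendable --oom-score-adj=500  busybox sleep infinity
# Negative adjustment: protected relative to other containers
docker run -d --name precious   --oom-score-adj=-500 nginx
```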
Warning: --oom-kill-disable without a --memory limit is
dangerous — the kernel can’t reclaim memory from the container and the
host runs out of RAM.
CPU Limits — Shares vs Quotas
Docker offers two CPU constraint mechanisms with very different behavior.
--cpus (CPU Quota — Hard Limit)
Caps the container at a fixed amount of CPU time, expressed in cores. A container with
--cpus=1.5 can use at most 150% of one core, the equivalent of 1.5 full cores.
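A sketch (container name and image are illustrative):

```shell
# Hard cap at 1.5 cores, even when the rest of the host is idle
docker run -d --name worker --cpus=1.5 debian sleep infinity
```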
Under the hood this sets cpu.cfs_quota_us in cgroups. The kernel
throttles the container when it exceeds the quota. This is a hard cap
— the container cannot use more than this, even if the CPU is idle.
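The same cap written out in the underlying CFS terms (names illustrative):

```shell
# --cpus=1.5 is shorthand for a quota of 150000 us per 100000 us period
docker run -d --name worker --cpu-period=100000 --cpu-quota=150000 debian sleep infinity
```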
--cpu-shares (Relative Weight — Soft Limit)
Relative CPU priority. The default is 1024. A container with 2048 gets ~twice the CPU of a container with 1024 when there’s contention. With idle CPU, every container can burst to 100%.
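A sketch with two hypothetical containers:

```shell
# Under contention, "important" gets roughly twice the CPU of "background";
# when the host is idle, both can still burst to 100%
docker run -d --name important  --cpu-shares=2048 debian sleep infinity
docker run -d --name background --cpu-shares=1024 debian sleep infinity
```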
Key difference: --cpus throttles hard, while --cpu-shares only matters
during contention. Use --cpus for batch jobs and bulk uploads that must
never crowd out latency-sensitive services. Use --cpu-shares for daemons
that should share fairly but remain free to burst.
--cpuset-cpus (CPU Pinning)
Pins the container to specific physical cores. Useful for:
- Real-time audio/video encoding
- Consistent cache behavior (L1/L2 stays warm)
- Avoiding NUMA cross-socket latency
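A sketch (name and image illustrative):

```shell
# Confine the container to cores 0 through 3
docker run -d --name transcoder --cpuset-cpus="0-3" debian sleep infinity
```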
In Compose:
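Compose exposes this as the service-level cpuset key. A sketch (image illustrative):

```yaml
services:
  transcoder:
    image: jellyfin/jellyfin   # illustrative image
    cpuset: "0-3"
```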
Block I/O Limits — The Overlooked Bottleneck
CPU and memory limits are common. Block I/O limits are rare — and your homelab needs them more than you think.
--device-read-bps / --device-write-bps
Throttle read or write throughput in bytes per second:
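A sketch (device path, rates, and container name are illustrative; match the path to your actual storage):

```shell
# Cap this container at 50 MB/s reads and 10 MB/s writes on /dev/sda
docker run -d --name backup \
  --device-read-bps /dev/sda:50mb \
  --device-write-bps /dev/sda:10mb \
  debian sleep infinity
```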
--device-read-iops / --device-write-iops
Throttle by IOPS instead of bandwidth:
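A sketch with illustrative values:

```shell
# Cap at 500 read and 100 write operations per second on /dev/sda
docker run -d --name reindexer \
  --device-read-iops /dev/sda:500 \
  --device-write-iops /dev/sda:100 \
  debian sleep infinity
```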
Compose does not support portable block I/O throttling through the modern
Compose Specification. The deploy.resources section can set CPU and memory,
but not per-device disk I/O limits.
Use docker run for block I/O throttling, or apply host-level controls with
systemd/cgroups for the Docker service. If you see old examples using
device_write_bps or device_read_bps in Compose files, test them on your
Compose version first — recent docker compose releases reject those keys.
Real-World Examples
Postgres Database (Memory Critical)
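A Compose sketch (sizes are illustrative; tune the hard cap to leave headroom above shared_buffers plus work_mem):

```yaml
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example   # illustrative only
    deploy:
      resources:
        limits:
          memory: 2G        # hard cap, sized above Postgres's legitimate working set
          cpus: "2.0"
        reservations:
          memory: 1G        # soft floor under contention
```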
No swap (--memory-swap=2G via docker-compose.override if needed).
OOM kill enabled (default) — getting killed is better than swapping.
Jellyfin / Plex (CPU Intensive, Memory Hungry)
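A Compose sketch (values illustrative; size the CPU limit to how many simultaneous software transcodes you expect):

```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin
    deploy:
      resources:
        limits:
          cpus: "4.0"       # headroom for a software transcode or two
          memory: 4G        # transcode buffers are memory hungry
        reservations:
          memory: 1G
```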
Also add --cpuset-cpus="0-3" for consistent transcode performance.
Without pinning, Linux may bounce the transcode thread between cores,
flushing cache and costing 10-20% overhead.
Frigate NVR (I/O Heavy, Memory Constrained)
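A Compose sketch (sizes illustrative; shm_size matters because Frigate keeps frame buffers in shared memory):

```yaml
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable
    shm_size: 256m               # frame buffers live in shared memory
    volumes:
      - type: tmpfs
        target: /tmp/cache       # recording cache in RAM instead of on disk
        tmpfs:
          size: 1g
    deploy:
      resources:
        limits:
          memory: 2G
          cpus: "2.0"
```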
The tmpfs for /tmp/cache reduces SD card / HDD writes significantly.
Transmission / qBittorrent (Disk Killer)
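Since Compose can't express per-device I/O caps, this one runs via docker run. A sketch (device path, rates, and volume path are illustrative):

```shell
docker run -d --name transmission \
  --memory=1g --cpus=1.0 \
  --device-read-bps /dev/sda:40mb \
  --device-write-bps /dev/sda:20mb \
  -v /data/torrents:/downloads \
  lscr.io/linuxserver/transmission
```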
Throttling torrent client disk I/O is the single biggest quality-of-life improvement for a shared-storage homelab. Without it, a busy torrent client can saturate the disk for minutes.
Monitoring: What’s Actually Happening
docker stats — Quick Live View
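The basics:

```shell
docker stats               # live per-container CPU %, memory, net and block I/O
docker stats --no-stream   # one-shot snapshot, useful for logging
```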
docker stats — All Metrics
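A one-shot view with explicit columns, using docker stats' Go-template fields:

```shell
docker stats --no-stream --format \
  "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.MemPerc}}\t{{.NetIO}}\t{{.BlockIO}}\t{{.PIDs}}"
```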
cgroup v2 Direct Inspection
On modern distros (Debian 12+, Ubuntu 22.04+, Fedora 37+):
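A sketch that reads the cgroup files for one container (assumes the systemd cgroup driver, Docker's default on systemd hosts; "postgres" is a hypothetical container name):

```shell
CID=$(docker inspect --format '{{.Id}}' postgres)
cat /sys/fs/cgroup/system.slice/docker-$CID.scope/memory.max      # --memory
cat /sys/fs/cgroup/system.slice/docker-$CID.scope/memory.current  # live usage
cat /sys/fs/cgroup/system.slice/docker-$CID.scope/cpu.max         # quota and period
```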
You can also read the configured limits straight from docker inspect:
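A sketch ("postgres" is a hypothetical container name):

```shell
# 0 means "no limit set"; Memory is in bytes, NanoCpus is cores x 1e9
docker inspect --format 'mem={{.HostConfig.Memory}} cpus={{.HostConfig.NanoCpus}}' postgres
```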
cgroup v2 Notes
Debian 12, Ubuntu 22.04+, and all modern kernels default to cgroup v2. Docker 20.10+ supports it natively, but there are differences:
- --memory-swap behavior changed. In cgroup v1, --memory-swap=-1 meant unlimited swap. In cgroup v2, swap is 0 by default unless the kernel supports memory.swap.max.
- --oom-kill-disable works differently. In cgroup v2, disabling OOM kill means the container is paused when it hits the memory limit (tasks get stuck in D state) instead of killed. Most services just freeze, which can be worse than an OOM kill.
- device_write_bps needs the right device major:minor. Use ls -l /dev/sda or stat /dev/sda to find the device numbers.
Check your cgroup version:
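One command tells you which version the host mounted:

```shell
stat -fc %T /sys/fs/cgroup/
# "cgroup2fs" means cgroup v2; "tmpfs" means cgroup v1 (or the hybrid layout)
```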
Putting It All Together — A Safe Default for Any Container
Here’s the template I use for every new container in my homelab:
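A Compose sketch of such a default (name, image, and sizes are placeholders to adjust per service):

```yaml
services:
  myservice:                # placeholder name
    image: myimage:latest   # placeholder image
    restart: unless-stopped
    deploy:
      resources:
        limits:
          memory: 512M      # hard cap: an OOM kill beats dragging down the host
          cpus: "1.0"       # hard cap: one core at most
        reservations:
          memory: 128M      # soft floor when the host is under pressure
```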
Start conservative. Run for a week with docker stats logging. Bump
reservations based on real usage, not guesswork. Document why each
limit exists in your docker-compose.yml comments — future you will
thank present you.
When to Skip Limits
Resource limits aren’t free. Setting CPU quotas too low causes throttling and increased latency (the CFS quota mechanism introduces scheduling delays). Setting memory limits too low triggers constant reclamation and swap thrashing.
Skip limits when:
- The container is the only service on the host
- The container does bursty compute that you want to finish fast (cron jobs, batch transcodes)
- You’re benchmarking or load testing
- The container is a monitoring agent that needs minimal overhead
In every other case: set limits. Your homelab will be more reliable, your other services will thank you, and you won’t wake up at 2 AM to a host that’s OOM-frozen because Frigate had a memory leak.