
Who this is for: Ops engineers or developers who run a single‑node n8n instance in Docker and need hard caps on CPU and RAM to protect other services. We cover this in detail in the n8n Performance & Scaling Guide.
Quick diagnosis
Problem: An n8n Docker container can consume unlimited host CPU and RAM, starving other workloads.
Featured‑snippet solution – Create a cgroup v2 slice, set cpu.max (or cpu.weight) and memory.max, then launch the n8n container with --cgroup-parent. The limits take effect immediately.
1. Why cgroups are the right tool for n8n isolation
Benefits vs. Docker‑native limits
| Benefit | cgroup v2 mechanism | Docker‑native equivalent |
|---|---|---|
| Hard RAM cap (no host OOM) | memory.max (hard byte limit) | --memory (per‑container only; no aggregate cap across containers) |
| Precise CPU throttling | cpu.max (quota / period) | --cpus / --cpu-quota (per‑container fraction of cores) |
| Hierarchical control | Nested slices (/n8n/worker, /n8n/web) | Not available natively |
Typical n8n symptoms
| Symptom | Why it happens |
|---|---|
| JavaScript heap out of memory crash while other containers stay alive | Node’s heap grows past the RAM you intended to allow; no hard limit is enforced |
| Host becomes unresponsive during a large workflow | CPU spikes exceed the default Docker share; no hard quota |
| UI remains sluggish while a worker thread hogs cores | No per‑process slicing in Docker alone |
EEFA note – In production, never rely on Docker’s default --memory-swap. cgroup v2’s memory.swap.max lets you disable swap for the slice, preventing noisy‑neighbor I/O from affecting latency‑critical n8n triggers.
2. Prerequisites & host preparation
| Requirement | Minimum version | Verify |
|---|---|---|
| Linux kernel with unified cgroup v2 | 5.4+ (Ubuntu 20.04, Debian 11, RHEL 8) | stat -fc %T /sys/fs/cgroup → cgroup2fs |
| Docker Engine with --cgroup-parent support | 20.10 | docker version --format '{{.Server.Version}}' |
| Root or sudo to edit cgroup files | – | sudo -n true (no password prompt) |
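The checks in the table can be collapsed into a small pre‑flight script. The helper names below are our own, not standard tools; this is a sketch assuming the default /sys/fs/cgroup mount point.

```shell
# Pre-flight checks for the steps below; the helper names are illustrative.
check_cgroup_v2() {
  # The unified cgroup v2 mount reports filesystem type "cgroup2fs".
  [ "$(stat -fc %T "${1:-/sys/fs/cgroup}" 2>/dev/null)" = "cgroup2fs" ]
}

docker_supports_cgroup_parent() {
  # Compare major.minor against 20.10 using integer math.
  major="${1%%.*}"; rest="${1#*.}"; minor="${rest%%.*}"
  [ "$major" -gt 20 ] 2>/dev/null || { [ "$major" -eq 20 ] && [ "$minor" -ge 10 ]; } 2>/dev/null
}

check_cgroup_v2 || echo "WARN: /sys/fs/cgroup is not cgroup2fs"
ver="$(docker version --format '{{.Server.Version}}' 2>/dev/null)"
docker_supports_cgroup_parent "${ver:-0.0}" || echo "WARN: Docker engine older than 20.10 (or not running)"
```

Running it before a maintenance window avoids discovering a v1 hierarchy halfway through the setup.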
If your distro boots with a mixed hierarchy, force v2 only:
```
# Add to /etc/default/grub
GRUB_CMDLINE_LINUX="systemd.unified_cgroup_hierarchy=1"

sudo update-grub && sudo reboot
```
EEFA warning – Switching to the unified hierarchy requires a maintenance window; all containers will be recreated under the new cgroup tree.
3. Build a dedicated cgroup slice for n8n
3.1 Create the slice directory
```
sudo mkdir -p /sys/fs/cgroup/n8n
```
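One step the bare mkdir can hide: the cpu and memory controllers must be delegated to child cgroups before /sys/fs/cgroup/n8n/cpu.max and memory.max exist. On systemd hosts they are usually delegated already; if not, a sketch (the function name is ours, and it deliberately takes the cgroup root as an argument so it can be exercised against a test tree):

```shell
# Delegate the cpu and memory controllers to children of a cgroup root,
# so that a child such as <root>/n8n gets its own cpu.max and memory.max.
enable_controllers() {
  root="$1"
  # "+cpu +memory" must be written to the parent's cgroup.subtree_control.
  # Note: on systemd hosts this is normally already done for the root.
  echo "+cpu +memory" > "$root/cgroup.subtree_control"
  mkdir -p "$root/n8n"
}

# Usage on the real host (requires root):
#   enable_controllers /sys/fs/cgroup
```

If the echo fails with "Device or resource busy", processes still live directly in that cgroup; move them into a child first.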
3.2 Set CPU limits
| Limit | File | Example | Meaning |
|---|---|---|---|
| CPU quota | cpu.max | 50000 100000 | 50 ms of CPU every 100 ms → 50 % of a single core |
| CPU weight (fallback) | cpu.weight | 200 | Relative share (default 100) when cpu.max is absent |
Apply the CPU limits:
```
echo "50000 100000" | sudo tee /sys/fs/cgroup/n8n/cpu.max
echo "200" | sudo tee /sys/fs/cgroup/n8n/cpu.weight
```
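The quota/period pair maps to cores as quota ÷ period, so multi‑core caps are just larger quotas. A tiny helper (our own, for illustration) makes the arithmetic explicit:

```shell
# Convert a desired core count, expressed in hundredths to stay in integer
# math, into a "quota period" string for cpu.max. 100 = 1 core, 150 = 1.5.
cpu_max_for_centicores() {
  period=100000                   # 100 ms scheduling window
  quota=$(( $1 * period / 100 ))  # e.g. 150 -> 150000
  echo "$quota $period"
}

cpu_max_for_centicores 50    # half a core  -> "50000 100000"
cpu_max_for_centicores 150   # 1.5 cores    -> "150000 100000"
```

Pipe the result into sudo tee /sys/fs/cgroup/n8n/cpu.max exactly as above.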
3.3 Set memory limits
| Limit | File | Example | Meaning |
|---|---|---|---|
| RAM hard cap | memory.max | 2G | Maximum resident memory |
| Swap usage | memory.swap.max | 0 | Disable swap for the slice |
Apply the memory limits:
```
echo "2G" | sudo tee /sys/fs/cgroup/n8n/memory.max
echo "0" | sudo tee /sys/fs/cgroup/n8n/memory.swap.max
```
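If you prefer graceful throttling before the hard kill, cgroup v2 also offers memory.high: above it the kernel reclaims aggressively and slows the cgroup instead of OOM‑killing. A sketch, with the target path passed as an argument so the write can be exercised safely; the helper name is ours:

```shell
# Set a soft ceiling below the hard cap, e.g. 1536M under a 2G memory.max.
set_memory_high() {
  # $1 = cgroup directory, $2 = soft limit (e.g. "1536M")
  printf '%s\n' "$2" > "$1/memory.high"
}

# Usage on the real host (requires root):
#   set_memory_high /sys/fs/cgroup/n8n 1536M
```

Keep memory.high meaningfully below memory.max so throttling kicks in before the OOM killer does.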
3.4 Verify the slice
```
sudo cat /sys/fs/cgroup/n8n/cpu.max
sudo cat /sys/fs/cgroup/n8n/memory.max
```
Both commands should output the exact strings you wrote.
4. Launch n8n with the cgroup slice
4.1 Simple docker run
```
docker run -d \
  --name n8n \
  --cgroup-parent=/n8n \
  -p 5678:5678 \
  -e N8N_BASIC_AUTH_ACTIVE=true \
  -e N8N_BASIC_AUTH_USER=admin \
  -e N8N_BASIC_AUTH_PASSWORD=strongpassword \
  n8nio/n8n:latest
```
Docker will create a sub‑cgroup at /sys/fs/cgroup/n8n/docker/&lt;container-id&gt; that inherits the limits you set on /n8n. Note that this path form assumes the cgroupfs cgroup driver; if your daemon uses the systemd driver (the default on many systemd hosts), --cgroup-parent must name a slice instead (e.g. n8n.slice). You can confirm what was applied with docker inspect --format '{{.HostConfig.CgroupParent}}' n8n.
4.2 Recommended docker‑compose.yml
```
version: "3.9"

services:
  n8n:
    image: n8nio/n8n:latest
    container_name: n8n
    ports:
      - "5678:5678"
    environment:
      N8N_BASIC_AUTH_ACTIVE: "true"
      N8N_BASIC_AUTH_USER: admin
      N8N_BASIC_AUTH_PASSWORD: strongpassword
    cgroup_parent: /n8n
```
Start the stack:
```
docker compose up -d
```
EEFA tip – Keep cpu.max as the source of truth. Docker’s cpus field is applied inside the parent slice, so the stricter /n8n limit always wins; mismatched values may confuse monitoring tools.
5. Confirm that limits are enforced
5.1 Inspect the container’s cgroup
```
CONTAINER_ID=$(docker ps -qf "name=n8n")
cat /sys/fs/cgroup/n8n/docker/$CONTAINER_ID/cpu.max
cat /sys/fs/cgroup/n8n/docker/$CONTAINER_ID/memory.max
```
The parent slice files (/sys/fs/cgroup/n8n/cpu.max and memory.max) hold the values from step 3. The container’s own sub‑cgroup may show its local defaults (e.g. max 100000); that is expected, since the parent’s limits still apply hierarchically.
5.2 Stress‑test CPU with stress-ng
- Install and run a CPU‑heavy workload inside the container (the official n8n image is Alpine‑based, so use apk rather than apt-get, and run as root):

```
docker exec -u root -it n8n sh -c "\
  apk add --no-cache stress-ng && \
  stress-ng --cpu 4 --timeout 30s"
```
- On the host, watch the cgroup’s CPU statistics:
```
watch -n1 "cat /sys/fs/cgroup/n8n/docker/$CONTAINER_ID/cpu.stat"
```
usage_usec will grow by at most 50 ms per 100 ms interval, matching the quota you set, and nr_throttled will increment each time the quota bites.
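The throttle counters in cpu.stat are the clearest proof the quota is enforced. A small parser (path parameterized for illustration; the function name is ours):

```shell
# Extract the nr_throttled counter from a cgroup v2 cpu.stat file.
# A rising value while stress-ng runs proves the quota is being enforced.
throttle_count() {
  awk '$1 == "nr_throttled" { print $2 }' "$1"
}

STAT="${STAT:-/sys/fs/cgroup/n8n/cpu.stat}"
if [ -r "$STAT" ]; then
  echo "quota enforced $(throttle_count "$STAT") times"
fi
```

Sample the value before and after the stress test; any increase means the cap, not scheduler luck, is limiting n8n.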
5.3 Verify memory OOM behavior
The n8n image ships Node rather than Python, so a small Node allocator works everywhere (Buffers are allocated outside the V8 heap, so they hit the cgroup cap rather than Node’s own heap limit):

```
docker exec -it n8n node -e "\
  const blocks = []; \
  while (true) blocks.push(Buffer.alloc(64 * 1024 * 1024, 'x'))"
```

The process should be killed with SIGKILL by the kernel’s cgroup OOM killer. Kernel OOM messages go to the host’s kernel log rather than the container logs; check dmesg for a line similar to:

```
Memory cgroup out of memory: Killed process 12345 (node)
```
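OOM kills caused by the slice are also counted in its memory.events file, which is often easier to check than the kernel log. A parser sketch (path parameterized for illustration):

```shell
# Read the oom_kill counter from a cgroup v2 memory.events file.
# A non-zero value confirms the slice's cap, not the host, killed a process.
oom_kills() {
  awk '$1 == "oom_kill" { print $2 }' "$1"
}

EV="${EV:-/sys/fs/cgroup/n8n/memory.events}"
if [ -r "$EV" ]; then
  echo "oom kills in slice: $(oom_kills "$EV")"
fi
```

This counter distinguishes a cgroup‑level kill from a global host OOM, which matters for the troubleshooting table below.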
6. Troubleshooting common pitfalls
| Symptom | Likely cause | Fix |
|---|---|---|
| CPU stays at 100 % despite cpu.max | Docker daemon still using cgroup v1 | Enable the unified hierarchy (systemd.unified_cgroup_hierarchy=1) and restart Docker |
| memory.max shows max after echo | Kernel missing CONFIG_MEMCG or distro too old | Upgrade to kernel 5.4+ with the memory controller enabled (or boot with cgroup_enable=memory) |
| docker run errors on --cgroup-parent | Docker < 20.10 | Update Docker Engine |
| OOM kills occur **before** 2 GiB | Host RAM exhausted, global OOM triggered | Add host swap or increase physical memory; consider memory.high for graceful throttling |
| cpu.max still reads max 100000 | Wrote to the wrong path (e.g., top‑level cpu.max) | Verify you edited /sys/fs/cgroup/n8n/cpu.max |
EEFA note – In Kubernetes you would use a LimitRange and ResourceQuota. For a bare‑metal Docker deployment, cgroup v2 slices are the most direct way to guarantee hard caps without relying on the kubelet.
7. Advanced: Dynamic limit adjustments without restart
7.1 Increase RAM on‑the‑fly
```
echo "4G" | sudo tee /sys/fs/cgroup/n8n/memory.max
```
The running n8n process immediately sees the new ceiling.
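To see how much headroom the new ceiling gives, compare memory.current against memory.max. The percentage helper below is illustrative and takes explicit byte values so the arithmetic can be checked offline:

```shell
# Percentage of the memory cap currently in use, integer math.
# Arguments: <current bytes> <max bytes>.
mem_used_pct() {
  echo $(( $1 * 100 / $2 ))
}

CG="${CG:-/sys/fs/cgroup/n8n}"
if [ -r "$CG/memory.current" ] && [ -r "$CG/memory.max" ]; then
  cur=$(cat "$CG/memory.current"); max=$(cat "$CG/memory.max")
  # memory.max can read "max" (unlimited); only divide when it is numeric.
  case "$max" in
    *[!0-9]*) echo "slice is unlimited" ;;
    *)        echo "using $(mem_used_pct "$cur" "$max")% of cap" ;;
  esac
fi
```

A reading that stays near 100 % right after raising the cap suggests n8n was genuinely memory‑starved, not leaking.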
7.2 Throttle CPU during off‑peak hours (02:00‑04:00)
```
# Add to root's crontab (sudo crontab -e); root needs no sudo here
0 2 * * * echo "25000 100000" > /sys/fs/cgroup/n8n/cpu.max
0 4 * * * echo "50000 100000" > /sys/fs/cgroup/n8n/cpu.max
```
7.3 Systemd integration (if you manage n8n as a service)
```
# /etc/systemd/system/n8n.service.d/override.conf
[Service]
CPUQuota=50%
MemoryMax=2G
```
Then reload and restart:
```
sudo systemctl daemon-reload
sudo systemctl restart n8n
```
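systemctl show prints these limits back in raw byte form (e.g. systemctl show n8n -p MemoryMax), which is awkward to eyeball. A tiny converter (the helper name is ours, for illustration):

```shell
# Parse a "MemoryMax=<bytes>" line, as printed by `systemctl show`,
# and report the limit in MiB for a quick sanity check.
memory_max_mib() {
  bytes="${1#MemoryMax=}"
  echo $(( bytes / 1024 / 1024 ))
}

# With the override above installed, this should report 2048:
#   memory_max_mib "$(systemctl show n8n -p MemoryMax)"
memory_max_mib "MemoryMax=2147483648"   # -> 2048
```

If the reported value does not match your override, the drop‑in file is probably in the wrong directory or daemon-reload was skipped.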
EEFA caution – Changing cpu.max while a high‑load workflow runs can cause abrupt throttling and API timeouts. Schedule adjustments during known low‑traffic windows.
8. Checklist before you go live
- Verify the host runs unified cgroup v2 (stat -fc %T /sys/fs/cgroup → cgroup2fs).
- Create /sys/fs/cgroup/n8n and set cpu.max, cpu.weight, memory.max, memory.swap.max.
- Launch the n8n container with --cgroup-parent=/n8n (or cgroup_parent: /n8n in compose).
- Confirm inheritance by reading the container’s cgroup files.
- Run a stress test (stress-ng / a memory‑hog process) and verify throttling/kill behavior.
- Document any dynamic limit changes (cron jobs, systemd overrides) in your runbook.
Conclusion
Using cgroup v2 slices gives you hard, production‑grade caps on CPU and memory for an n8n Docker container. By defining cpu.max and memory.max on a dedicated /n8n slice and launching the container with --cgroup-parent, you ensure that n8n cannot starve neighboring services, and you retain the ability to adjust limits on‑the‑fly without downtime. This approach is lightweight, kernel‑native, and works regardless of Docker’s per‑container limit semantics, making it the most reliable way to protect your host in real‑world n8n deployments.



