
Who this is for: Developers and DevOps engineers who maintain n8n instances and need to diagnose CPU‑bound performance problems in production or staging environments. We cover this in detail in the n8n Performance & Scaling Guide.
Quick Diagnosis
Problem: An n8n instance feels sluggish and you suspect specific nodes or workflows are hogging CPU.
One‑line fix: Run a Node.js profiler (e.g., clinic flame, 0x, or V8’s built‑in --prof) against the n8n process, generate a flame‑graph, and zero in on the hot functions.
Featured‑snippet steps
- Stop the running n8n service.
- Start n8n with a profiling flag.
- Trigger the slow workflow (or run a load test).
- Stop n8n and generate a flame‑graph.
- Open the SVG/HTML graph, locate the longest stack traces, and optimise the offending node or custom code.
1. Prerequisites
If your Docker host already shows performance problems (I/O throttling, CPU contention), resolve them before continuing, or they will pollute the profile.
| Requirement | Why it matters |
|---|---|
| n8n ≥ 0.230 | Profiling hooks are stable from this release onward |
| Node.js ≥ 18 | Modern V8 provides richer --prof output |
| Profiling tool (choose one) | Generates flame‑graphs or CPU‑time breakdowns |
| Host access (SSH / terminal) | Needed to start n8n with custom flags |
| Docker host (optional) | Required if n8n runs inside a container |
EEFA note: Never expose `--inspect` to the public internet. Bind the inspector to `127.0.0.1` or tunnel through a VPN.
2. Starting n8n with a Profiler
2.1 Bare‑metal (direct Node execution)
Stop any existing service and launch n8n with V8’s CPU profiler.
```bash
pm2 stop n8n   # or: systemctl stop n8n

node --prof \
  --max-old-space-size=4096 \
  ./node_modules/.bin/n8n &
```
The --prof flag creates isolate-*.log files in the current directory. The memory flag prevents OOM crashes during long runs.
2.2 Using clinic flame (recommended for visual output)
Clinic handles process spawning, data collection, and graph generation automatically.
```bash
npm i -g clinic
clinic flame -- node ./node_modules/.bin/n8n
```
Press Ctrl+C when the workload is done; flamegraph.html appears in the cwd.
2.3 Profiling a Docker‑based n8n
- Enter the container with privileged rights (required for V8 profiling).
```bash
docker ps | grep n8n   # get the container ID
docker exec -it --privileged <container-id> sh
```
- Install the profiler inside the container and run it.
```bash
npm i -g clinic
clinic flame -- node /usr/local/bin/n8n
```
EEFA warning: `--privileged` grants full host kernel access. Use it only on isolated staging containers, and if cgroup limits throttle the container's CPU, raise them before profiling so the results aren't skewed.
3. Capturing a Representative Workload
| Approach | When to use | How to trigger |
|---|---|---|
| Manual execution | Quick sanity check, low traffic | Click Execute Workflow in the UI |
| Automated load test | High‑traffic scenarios | k6 run -d 30s -u 50 script.js (target the /webhook endpoint) |
| Scheduled cron job | Periodic background jobs | Let the native n8n cron trigger run naturally |
Tip: Keep profiling sessions ≤ 60 seconds. Longer runs produce massive isolate files and may skew results due to JIT warm‑up.
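If k6 is not available, the automated load test can be approximated with a few lines of Node. This is a minimal sketch: `runLoad` and its `request` callback are hypothetical names, and in a real run the callback would POST to your `/webhook` endpoint.

```javascript
// Minimal concurrent load driver (sketch). `vus` mimics k6's virtual
// users; `request` is any async function, e.g. one that POSTs to the
// n8n webhook under test.
async function runLoad({ vus, iterations, request }) {
  let started = 0;
  let completed = 0;
  const worker = async () => {
    // `started` is bumped synchronously before the await, so exactly
    // `iterations` requests are issued across all virtual users.
    while (started < iterations) {
      started++;
      await request();
      completed++;
    }
  };
  await Promise.all(Array.from({ length: vus }, worker));
  return completed;
}

module.exports = { runLoad };
```

A real invocation might pass `request: () => fetch('https://<host>/webhook/<id>', { method: 'POST' })` and a `vus`/`iterations` pair sized to finish inside the 60-second profiling window.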
4. Converting Logs to Flame‑Graphs
4.1 V8’s --prof-process pipeline
- Find the latest isolate file.

```bash
ls -1t isolate-*.log | head -n1
```

- Convert it to a readable text report.

```bash
node --prof-process isolate-00000-v8.log > processed.txt
```

- Render an SVG flame‑graph (requires `gprof2dot` and Graphviz).

```bash
cat processed.txt | gprof2dot -s | dot -Tsvg -o cpu.svg
```
Open cpu.svg in a browser; the widest bars indicate the highest CPU consumption.
4.2 Clinic flame (auto‑generated)
Run the same command from §2.2; after workload completion, flamegraph.html is ready—no extra steps needed.
| Tool | Output | Best for |
|---|---|---|
| `--prof-process` + gprof2dot | SVG flame‑graph | Full control, custom post‑processing |
| clinic flame | Interactive HTML | Fast turnaround, beginner‑friendly |
| 0x | Chrome‑compatible flame‑graph | When you prefer Chrome DevTools UI |
5. Interpreting the Flame‑Graph
- Find the widest horizontal bar – width corresponds to CPU time, so this is the hottest stack frame.
- Read bottom‑to‑top – the bottom function is where execution started (often `runWorkflow`).
- Spot custom code – look for paths like `functions/` or your own npm packages.
- Identify built‑in nodes – names such as `ExecuteCommand`, `HttpRequest`, or `Code` appear as `nodeExecute` wrappers.
Example textual snippet:

```
runWorkflow → executeNode → executeCodeNode → eval (my-custom-function.js:23)
```
Action: The eval call at line 23 in my‑custom‑function.js is the hotspot. Refactor the loop, cache results, or move the logic to an external micro‑service.
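As an illustration of the "cache results" option (the expensive call and `my-custom-function.js` are hypothetical), a generic memoization wrapper can remove a hotspot that recomputes the same value for every item:

```javascript
// Cache results of a pure, expensive function so a hot loop pays the
// computation cost only once per distinct input.
function memoize(fn) {
  const cache = new Map();
  return (arg) => {
    if (!cache.has(arg)) cache.set(arg, fn(arg));
    return cache.get(arg);
  };
}

// Hypothetical stand-in for the expensive logic flagged at
// my-custom-function.js:23.
let evaluations = 0;
const slowScore = (n) => {
  evaluations++;
  let acc = 0;
  for (let i = 0; i < 1e6; i++) acc += (n * i) % 97;
  return acc;
};

const fastScore = memoize(slowScore);
```

Memoization only helps when inputs repeat and the function is pure; otherwise prefer refactoring the loop or off-loading the work.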
6. Optimising Identified Hot Nodes
| Issue | Typical cause | Quick fix | Production‑grade fix |
|---|---|---|---|
| CPU‑bound JS loop | `for` over 10⁶ items | Exit the loop early (`break`) once the result is known | Stream data via a Node.js `Transform` or offload to a worker thread (`worker_threads`) |
| Repeated external API calls | `HttpRequest` inside a loop | Batch with `Promise.allSettled` | Add a caching layer (Redis) and respect rate limits |
| Heavy file parsing | `ReadBinaryFile` + `ParseCSV` | Limit rows via `skipRows` | Pre‑process files with an ETL service (e.g., AWS Lambda) |
| Sync I/O in custom code | `fs.readFileSync` inside a loop | Use async `fs.promises.readFile` | Switch to streams (`fs.createReadStream`) to avoid blocking the event loop |
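The `Promise.allSettled` batching fix can be sketched as follows; `fetchItem` is a hypothetical per-item call, and batching issues requests concurrently instead of awaiting each one inside the loop:

```javascript
// Fire one batch of concurrent requests at a time instead of awaiting
// each call serially. allSettled keeps a single failure from rejecting
// the whole batch.
async function batchFetch(items, fetchItem, batchSize = 10) {
  const results = [];
  for (let i = 0; i < items.length; i += batchSize) {
    const batch = items.slice(i, i + batchSize);
    const settled = await Promise.allSettled(batch.map(fetchItem));
    results.push(...settled);
  }
  return results;
}

module.exports = { batchFetch };
```

Each result carries a `status` of `'fulfilled'` or `'rejected'`, so downstream nodes can retry only the failed items.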
EEFA tip: Re‑run the profiler after each change. Hotspots shift; iterative profiling prevents regression.
7. Common Profiling Pitfalls
| Symptom | Root cause | Fix |
|---|---|---|
| No `isolate-*.log` generated | Used `--inspect` instead of `--prof` | Launch with `node --prof …` |
| Flame‑graph shows only V8 internals | Workload too short; JIT never warmed up | Run the workload for ≥ 30 s or increase iterations |
| Huge isolate files (> 500 MB) | Profiling over an extended period or high concurrency | Limit concurrency and profile a single workflow only |
| Profiler crashes in Docker | Missing `perf` binary (required by clinic) | `apt-get update && apt-get install -y linux-tools-common linux-tools-generic` **or** use clinic with `--collect-only` |
8. Automating CPU Profiling in CI/CD
8.1 Workflow definition (GitHub Actions)
```yaml
name: CPU profiling (staging)

on:
  workflow_dispatch:
    inputs:
      workflowId:
        description: "n8n workflow ID to profile"
        required: true
```
8.2 Job steps – install and start profiling
```yaml
jobs:
  profile:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Install n8n & clinic
        run: |
          npm ci
          npm i -g clinic
```
8.3 Run n8n with profiling, trigger the workflow, then stop
```yaml
      - name: Start n8n with profiling
        run: |
          clinic flame -- node ./node_modules/.bin/n8n &
          N8N_PID=$!
          sleep 5   # give n8n time to start
          curl -X POST "https://staging.example.com/webhook/${{ github.event.inputs.workflowId }}"
          kill -INT $N8N_PID   # SIGINT (like Ctrl+C) lets clinic write its report
          wait $N8N_PID
```
8.4 Upload the resulting flame‑graph as an artifact
```yaml
      - name: Upload flamegraph
        uses: actions/upload-artifact@v3
        with:
          name: flamegraph
          path: flamegraph.html
```
Result: Each manual dispatch creates a fresh flame‑graph stored as a CI artifact, enabling continuous performance regression detection.
Conclusion
CPU profiling in n8n follows a repeatable loop:
- Start the process with a profiler (`--prof` or `clinic flame`).
- Exercise the suspect workflow under realistic load.
- Generate a flame‑graph and locate the widest stack trace.
- Optimise the offending node or custom code (loop refactor, batching, caching, async I/O, or worker off‑loading).
- Re‑profile to confirm the improvement.
By integrating profiling into CI/CD, you catch regressions early and keep your n8n automations performant in production.



