4 Signs n8n Has Become the Bottleneck

A step-by-step guide for diagnosing and fixing n8n when it becomes the bottleneck.


Who this is for: Engineers running n8n in production who see slow workflow executions despite fast upstream APIs. We cover this in detail in the n8n Architectural Failure Modes Guide.


Quick Diagnosis

Problem: n8n stalls while the APIs you call respond in < 200 ms.

Fast fix:

  1. Enable profiling (n8n start --profile) and grab the generated profile.json.
  2. Open it in Chrome DevTools → Performance tab; locate the longest‑running node.
  3. Apply the appropriate optimisation from the sections below (batch‑processing, DB pool tweaks, caching, etc.).

Featured‑snippet answer:
If n8n is the bottleneck, check its internal queue length, database connection pool, and node‑specific execution times. Reduce heavy loops, enable batching, and scale the DB or worker processes before blaming external APIs.

*In production, the UI often shows a long wait even though API logs are sub‑200 ms.*


1. Pinpoint the Bottleneck with Built‑In Metrics

Before digging in, make sure n8n is the right tool for the job at all; if it is fundamentally the wrong tool for your workload, none of the tuning below will help.

What you see: n8n’s UI and a few CLI tools give raw numbers that tell you whether the issue lies inside n8n or outside. Keep this table handy while debugging.

| Metric | Healthy range | Red flag |
|---|---|---|
| Execution duration (UI → Execution → Details) | < 2 s for simple flows | > 5 s (simple) / > 30 s (complex) |
| Node runtime (from profile.json) | < 200 ms per node | > 1 s per node |
| Queue length (Settings → Queue) | 0–2 pending jobs | > 10 pending jobs |
| DB query time (DB logs or `EXPLAIN ANALYZE`) | < 50 ms per query | > 200 ms per query |
| CPU / Memory (`docker stats` / `top`) | < 70 % CPU, < 75 % RAM | > 90 % CPU or RAM for > 5 min |

Production tip: Enable the Prometheus exporter (N8N_METRICS=true) and visualise the data in Grafana. Guess‑work scaling rarely solves the real issue.
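For a Docker deployment, the exporter can be switched on with a single environment variable and checked with curl. A sketch, reusing the container name and port from the examples elsewhere in this guide:

```shell
# Recreate the container with the Prometheus exporter enabled
docker stop n8n && docker rm n8n
docker run -d \
  -e N8N_METRICS=true \
  -p 5678:5678 \
  --name n8n \
  n8nio/n8n

# Metrics appear on the built-in /metrics endpoint in Prometheus text format
curl -s http://localhost:5678/metrics | head -n 20
```

Point a Prometheus scrape job at `:5678/metrics` and build the Grafana panels from there.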

How to Generate a Profile?

Stop the container, then restart it with profiling turned on. The profile ends up at ~/.n8n/profile.json.

```shell
# Stop and remove the current instance so the container name can be reused
docker stop n8n && docker rm n8n
# Restart with profiling (writes to /root/.n8n inside the container)
docker run -d \
  -e N8N_BASIC_AUTH_ACTIVE=true \
  -e N8N_BASIC_AUTH_USER=admin \
  -e N8N_BASIC_AUTH_PASSWORD=strongpwd \
  -e N8N_PROFILE=true \
  -v ~/.n8n:/root/.n8n \
  -p 5678:5678 \
  --name n8n \
  n8nio/n8n
```

Run the slow workflow again, pull profile.json from the container’s volume, and open it in Chrome DevTools → Performance tab.
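Assuming the container name and data path from the example above, one way to copy the profile out and sanity-check it before loading it into DevTools:

```shell
# Copy the profile out of the container (path per the setup above)
docker cp n8n:/root/.n8n/profile.json ./profile.json

# Quick check that the capture is non-empty, valid JSON
python3 -m json.tool ./profile.json > /dev/null && echo "profile looks valid"
```

Load the file via the import button on the Performance tab in Chrome DevTools.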


2. Common n8n‑Specific Culprits

If your real question is n8n vs. custom microservices, settle that architectural comparison first.
Typical reasons a workflow drags and a concrete fix for each.

| Culprit | Symptom | Fix | Production tip |
|---|---|---|---|
| Heavy “Loop” nodes (e.g., SplitInBatches → HTTP Request) | Time grows linearly with input size | 1️⃣ Swap in a Batch node, set size 50–200. 2️⃣ Use a Function node to chunk data before the loop. | Keep batch size ≤ 200 to avoid DB transaction overload. |
| Large JSON payloads stored in the DB | DB read/write > 300 ms, high memory usage | 1️⃣ Turn on binary-data storage (`N8N_BINARY_DATA_MODE=filesystem`). 2️⃣ Store only IDs in the DB and fetch the full payload on demand. | Offload files > 5 MB to an object store (S3, MinIO). |
| Insufficient DB connection pool | “Too many connections” errors, queue backs up | 1️⃣ Set `POSTGRES_POOL_MAX=20` (or the MySQL equivalent) in the env. 2️⃣ Restart n8n. | Align the pool size with `N8N_WORKER_COUNT`. |
| Synchronous “Wait” nodes | Unnecessary idle time (e.g., Wait → 30 s) | Replace with **Cron**-based triggers, or drop them if not needed. | Avoid any node that blocks the main thread in high-throughput setups. |
| Missing cache for static API calls | Repeated GETs → throttling, latency | 1️⃣ Insert a **Cache** node before the HTTP Request. 2️⃣ Set a TTL (e.g., 300 s). | Use Redis (`N8N_CACHE_BACKEND=redis`) for distributed caching across multiple n8n instances. |

*Most teams hit the loop‑node issue after a few weeks of adding more data, not on day one.*
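As a concrete example of the binary-data fix from the table, a `docker run` sketch might look like this. The variable name follows the table above; note that recent n8n releases spell it `N8N_DEFAULT_BINARY_DATA_MODE`, so check the docs for your version:

```shell
# Keep large payloads on disk instead of inside execution data in the DB
docker stop n8n && docker rm n8n
docker run -d \
  -e N8N_BINARY_DATA_MODE=filesystem \
  -v ~/.n8n:/root/.n8n \
  -p 5678:5678 \
  --name n8n \
  n8nio/n8n
# Binary files now land under the mounted data directory rather than the DB
```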


3. Scaling the Execution Engine

If you have already added workers and throughput did not improve, the sections below explain why more workers alone rarely scale n8n.

3.1. Horizontal Scaling with Worker Processes

More workers let n8n run several workflows in parallel. Add the following to docker‑compose.yml.

```yaml
services:
  n8n:
    image: n8nio/n8n
    environment:
      - N8N_WORKER_COUNT=4   # spawn 4 workers
      - EXECUTIONS_MODE=queue
      - N8N_QUEUE_BULL_REDIS_URL=redis://redis:6379
    volumes:
      - ./n8n:/root/.n8n
    ports:
      - "5678:5678"
```
| Setting | Effect |
|---|---|
| `N8N_WORKER_COUNT` | Number of parallel workflow executions |
| `EXECUTIONS_MODE=queue` | Jobs go to a Redis-backed queue (requires the Redis URL above) |
| `N8N_QUEUE_BULL_REDIS_URL` | Central Redis instance that coordinates multiple n8n containers |

Scaling note: With more than one host, store workflow data in a shared PostgreSQL instance and use Redis for the queue. The default in-memory queue will lose jobs on container restart.

*At this point, bumping the worker count is usually faster than hunting for a single slow node.*
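In practice, queue mode is usually run as one main instance plus dedicated worker containers started with the image's `worker` command. A sketch; the `n8n-net` network, the `redis` host name, and the exact Redis variable name (it varies by n8n version) are assumptions:

```shell
# Main instance: serves the UI/webhooks and enqueues executions
docker run -d --name n8n-main --network n8n-net \
  -e EXECUTIONS_MODE=queue \
  -e N8N_QUEUE_BULL_REDIS_URL=redis://redis:6379 \
  -p 5678:5678 \
  n8nio/n8n

# Worker: pulls jobs from the Redis queue; start as many as you need
docker run -d --name n8n-worker-1 --network n8n-net \
  -e EXECUTIONS_MODE=queue \
  -e N8N_QUEUE_BULL_REDIS_URL=redis://redis:6379 \
  n8nio/n8n worker
```

All instances must share the same database and encryption key so workers can load the workflows the main instance enqueues.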

3.2. Database Tuning for PostgreSQL

Deploy a connection‑pooler such as pgbouncer and tune PostgreSQL’s memory settings.

```ini
# pgbouncer.ini – connection pooler
[databases]
n8n = host=postgres port=5432 dbname=n8n

[pgbouncer]
pool_mode = transaction
max_client_conn = 100
default_pool_size = 20
```
| Parameter | Recommended value | Why it matters |
|---|---|---|
| `shared_buffers` | 25 % of RAM | Keeps workflow metadata in RAM |
| `work_mem` | 64 MB per connection | Allows complex JSONB operations without spilling to disk |
| `max_connections` | ≥ `N8N_WORKER_COUNT` × 2 | Prevents “too many connections” errors during spikes |
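The values above can be applied to a running PostgreSQL with `ALTER SYSTEM`. A sketch; the 2 GB figure assumes an 8 GB host, and two of the settings only take effect after a server restart:

```shell
psql -U postgres -d n8n <<'SQL'
ALTER SYSTEM SET shared_buffers = '2GB';   -- ~25 % of RAM; requires a restart
ALTER SYSTEM SET work_mem = '64MB';        -- applied on config reload
ALTER SYSTEM SET max_connections = 40;     -- >= worker count x 2; requires a restart
SELECT pg_reload_conf();                   -- pick up the reloadable settings now
SQL
```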

4. Caching & Rate‑Limiting Inside n8n

4.1. Redis‑Backed Cache Node (Community)

Add two env vars so the built‑in Cache node talks to Redis.

```yaml
environment:
  - N8N_CACHE_BACKEND=redis
  - N8N_CACHE_REDIS_URL=redis://redis:6379
```

Typical workflow pattern

  1. Cache → Get (key = {{ $json["id"] }})
  2. If exists → return cached data
  3. Else → HTTP Request → Cache → Set (TTL = 300 s) → continue
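The same get-or-set pattern, expressed directly against Redis to make the TTL behaviour concrete (the key and API URL are hypothetical):

```shell
KEY="customer:42"
if [ "$(redis-cli EXISTS "$KEY")" = "0" ]; then
  # Cache miss: fetch once from the upstream API, store with a 300 s TTL
  VALUE=$(curl -s https://api.example.com/customers/42)
  redis-cli SET "$KEY" "$VALUE" EX 300
fi
redis-cli GET "$KEY"   # served from Redis until the TTL expires
```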

4.2. Built‑In Rate Limiter (v1.1+)

Stop workers from hammering an upstream API.

```json
{
  "name": "RateLimiter",
  "type": "n8n-nodes-base.rateLimiter",
  "parameters": {
    "limit": 20,
    "interval": "minute"
  }
}
```

*Why it matters:* When an API spikes, the limiter prevents n8n from queuing thousands of calls, which would otherwise create a self‑inflicted bottleneck. Pair it with a **Retry** node that uses exponential back‑off to handle 429 responses gracefully.
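The back-off half of that advice can be sketched in plain shell. The URL is hypothetical, and inside n8n you would configure this on the node rather than script it, but the schedule is the same:

```shell
backoff_delay() {
  # Delay for attempt N is 2^N seconds: 2, 4, 8, 16, 32
  echo $((1 << $1))
}

retry_request() {
  # Re-issue a GET until the server stops answering 429, backing off between tries
  url=$1
  for attempt in 1 2 3 4 5; do
    status=$(curl -s -o /dev/null -w '%{http_code}' "$url")
    if [ "$status" != "429" ]; then
      echo "$status"
      return 0
    fi
    sleep "$(backoff_delay "$attempt")"
  done
  return 1
}
```

Call it as `retry_request https://api.example.com/resource` and branch on the printed status code.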


5. Real‑World Troubleshooting Checklist

| Check | How to verify | Remedy |
|---|---|---|
| Queue length stays < 5 | `docker exec n8n n8n queue:list` | Increase `N8N_WORKER_COUNT` or switch to a Redis queue |
| DB connection pool not exhausted | `SELECT * FROM pg_stat_activity;` | Raise `POSTGRES_POOL_MAX` |
| Node execution time < 500 ms | Profile chart in Chrome DevTools | Batch, cache, or refactor heavy nodes |
| CPU usage < 80 % sustained | `top` or the Grafana CPU panel | Add workers or offload heavy logic to a microservice |
| No memory leaks (RSS stays flat) | `docker stats` over 24 h | Store binaries on the filesystem, not in the DB |
| Redis latency < 5 ms | `redis-cli ping && redis-cli latency latest` | Scale Redis vertically or use a clustered setup |

Warning: Never disable the queue (EXECUTIONS_MODE=process) as a shortcut. Removing back-pressure leads to uncontrolled memory growth and eventual OOM kills.


6. When to Escalate to the API Team?

| Situation | Likely n8n cause | Action |
|---|---|---|
| API latency spikes **only** during high n8n load | n8n concurrency hits the API's rate limits | Add **RateLimiter** and **Retry** nodes |
| API returns 5xx errors **even with low n8n traffic** | External service fault | Open a ticket with the API provider |
| API latency stable, but n8n execution time grows | Internal processing (loops, DB) | Apply the fixes from Sections 2–5 |
| Mixed, intermittent results | Could be both sides | Enable end-to-end tracing (`N8N_LOG_LEVEL=debug`) and correlate timestamps |

Bottom Line

If n8n is the bottleneck, the root cause is almost always **inefficient node design, insufficient DB/worker resources, or missing caching**. Use the profiling steps above, apply the targeted fixes, and keep an eye on the key metrics. Only after those steps should you investigate the external API for latency problems.
