Who this is for: Platform engineers and DevOps teams that run n8n in production and need reliable, linear scaling across multiple nodes. We cover this in detail in the n8n Performance & Scaling Guide.
Quick Diagnosis
Problem: n8n’s default SQLite‑backed execution runs in a single process, creating a bottleneck when workflows grow.
Solution: Enable queue mode backed by Redis, run n8n workers statelessly, and add as many worker containers as needed. The queue automatically balances workflow executions, delivering true horizontal scaling.
1. Why Redis? – Core Scaling Principle
| Feature | Redis Queue | Built‑in SQLite / Single‑process |
|---|---|---|
| Stateless workers | ✔︎ Workers read jobs from Redis, no local state required | ✘ Workers keep state locally |
| Horizontal elasticity | ✔︎ Add/remove worker pods instantly | ✘ Scaling limited to one process |
| Fail‑over safety | ✔︎ Jobs persist in Redis, survive worker crash | ✘ In‑flight jobs lost on crash |
| Throughput | ✔︎ Multi‑core, multi‑node parallelism | ✘ Single‑core bottleneck |
Note – In production, always run Redis in cluster mode (minimum three masters) to avoid a single point of failure and to benefit from automatic sharding.
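A minimal `redis.conf` fragment for a cluster-mode node might look like the following sketch (illustrative values; tune the node timeout for your network):

```
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes
```

Each of the three (or more) masters runs with this configuration; the cluster itself is then formed once with `redis-cli --cluster create` or by the Helm chart.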
2. Prerequisites & Environment Setup
Micro‑summary: Verify versions, provision a Redis cluster, and secure credentials before configuring n8n.
| Item | Minimum Requirement | Recommended |
|---|---|---|
| n8n version | `0.224.0` or newer | Latest stable |
| Redis | 5.0+ (supports streams) | Redis 7.x in cluster mode |
| Node.js | 18.x LTS | 20.x LTS |
| Docker / Kubernetes | Docker 20.10+ | Kubernetes 1.26+ with HPA |
Checklist before you start
- Pull the latest n8n Docker image (`docker pull n8nio/n8n:latest`).
- Deploy a Redis cluster (e.g., via the official `redis-cluster` Helm chart).
- Create a dedicated Redis user with ACL `+@all` limited to the `n8n_queue` keyspace.
- Open port `6379` (or TLS `6380`) only to the n8n network.
- Store Redis credentials in a secret manager (Vault, AWS Secrets Manager, etc.).
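The dedicated-user step can be expressed as an entry in Redis's `users.acl` file (username, password, and key pattern here are placeholders — adapt them to your naming scheme):

```
user n8n_queue on >replace-with-strong-password ~n8n_queue:* +@all
```

The `~n8n_queue:*` pattern restricts the user to keys under the queue's prefix, so a leaked credential cannot touch other data in the cluster.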
3. Enabling Redis Queue in n8n
3.1. Core Environment Variables
Set the following variables in every n8n container (main instance and workers). They switch n8n to queue mode and point it at Redis.
```bash
EXECUTIONS_MODE=queue             # Activate queue mode
QUEUE_BULL_REDIS_HOST=redis-cluster.default.svc.cluster.local
QUEUE_BULL_REDIS_PORT=6380
QUEUE_BULL_REDIS_TLS=true

# Authentication & tuning
QUEUE_BULL_REDIS_PASSWORD=${REDIS_PASSWORD}
QUEUE_BULL_REDIS_DB=0
QUEUE_BULL_CONCURRENCY=5          # Jobs processed concurrently per container
```
Warning – Never expose `REDIS_PASSWORD` as plain text in Dockerfiles. Use Docker secrets or Kubernetes `envFrom` with a Secret.
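One way to satisfy this in Kubernetes is a Secret consumed via `envFrom` (the name `n8n-redis-secret` matches the Deployment example later in this guide; the value is a placeholder):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: n8n-redis-secret
type: Opaque
stringData:
  QUEUE_BULL_REDIS_PASSWORD: replace-with-strong-password
```

Because `envFrom` injects each key as an environment variable of the same name, naming the key `QUEUE_BULL_REDIS_PASSWORD` means no extra mapping is needed in the pod spec.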
3.2. Docker‑Compose Example
Service definitions – the main instance, a queue worker, and Redis.
```yaml
version: "3.8"
services:
  n8n:
    image: n8nio/n8n:latest
    restart: unless-stopped
    environment:
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
      - QUEUE_BULL_REDIS_PORT=6379
      - QUEUE_BULL_REDIS_PASSWORD=${REDIS_PASSWORD}
    ports:
      - "5678:5678"
    depends_on:
      - redis

  n8n-worker:
    image: n8nio/n8n:latest
    command: worker
    environment:
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
      - QUEUE_BULL_REDIS_PORT=6379
      - QUEUE_BULL_REDIS_PASSWORD=${REDIS_PASSWORD}
      - QUEUE_BULL_CONCURRENCY=5
    depends_on:
      - redis

  redis:
    image: redis:7-alpine
    command: ["redis-server", "--appendonly", "yes"]
    volumes:
      - redis-data:/data

volumes:
  redis-data:
```
Tip – For TLS-encrypted Redis, replace the plain `redis:7-alpine` service with a sidecar that terminates TLS, or use the `rediss://` protocol in the env vars.
4. Deploying Stateless Workers at Scale
Micro‑summary: Use a Kubernetes Deployment for workers, then let the Horizontal Pod Autoscaler (HPA) adjust replica count based on load.
4.1. Kubernetes Deployment (Recommended)
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: n8n-worker
spec:
  replicas: 3            # Start with 3 workers; HPA will adjust
  selector:
    matchLabels:
      app: n8n-worker
  template:
    metadata:
      labels:
        app: n8n-worker
    spec:
      containers:
        - name: n8n
          image: n8nio/n8n:latest
          args: ["worker"]     # Entrypoint already invokes n8n
          envFrom:
            - secretRef:
                name: n8n-redis-secret
          env:
            - name: EXECUTIONS_MODE
              value: "queue"
            - name: QUEUE_BULL_CONCURRENCY
              value: "10"
          resources:
            limits:
              cpu: "500m"
              memory: "512Mi"
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: n8n-worker-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: n8n-worker
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```
Key points
- Statelessness – No volume mounts; pods can be terminated without data loss.
- HPA – Autoscaling based on CPU (and optionally custom queue‑length metrics) ensures you only pay for capacity you need.
- Pod Disruption Budget – Set `maxUnavailable: 1` to keep at least one worker alive during upgrades.
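That Pod Disruption Budget can be sketched as follows (it selects the same `app: n8n-worker` label as the Deployment above):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: n8n-worker-pdb
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app: n8n-worker
```

With this in place, voluntary disruptions such as node drains evict at most one worker at a time, so the queue always has consumers.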
Note – When using Redis Cluster, ensure the `redis-cluster` service resolves to all master nodes; otherwise workers may receive `MOVED` errors. Set `QUEUE_BULL_REDIS_CLUSTER_NODES` to the full list of `host:port` pairs so the client connects in cluster mode.
4.2. Scaling Strategy Checklist
| Step | Action |
|---|---|
| 1 | Deploy minimum 2 workers to guarantee redundancy. |
| 2 | Set QUEUE_BULL_CONCURRENCY based on CPU cores (≈ 1 job per 0.1 CPU). |
| 3 | Enable HorizontalPodAutoscaler with CPU & custom metric (queue length). |
| 4 | Monitor Redis latency; keep p99 < 5 ms. |
| 5 | Use Redis‑Insight or redis-cli INFO to watch instantaneous_ops_per_sec. |
| 6 | Configure Redis persistence (appendonly yes) for crash recovery. |
| 7 | Periodically rotate Redis credentials via secret manager. |
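The concurrency rule of thumb from step 2 can be sketched as a quick calculation (`suggested_concurrency` is a hypothetical helper; the 0.1-CPU-per-job ratio is the guide's own estimate, so measure your real workloads before relying on it):

```python
def suggested_concurrency(cpu_limit_cores: float, cpu_per_job: float = 0.1) -> int:
    """Estimate QUEUE_BULL_CONCURRENCY from a container's CPU limit.

    Assumes each in-flight job consumes roughly `cpu_per_job` cores;
    always returns at least 1 so a worker is never idle by construction.
    """
    return max(1, round(cpu_limit_cores / cpu_per_job))

# A worker limited to 500m CPU (0.5 cores) -> concurrency 5
print(suggested_concurrency(0.5))
```

Note that I/O-heavy workflows (HTTP waits, webhooks) tolerate far higher concurrency than CPU-bound ones, so treat the result as a starting point for load testing.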
5. Monitoring & Observability
Micro‑summary: Export Prometheus metrics from n8n and build a Grafana dashboard that tracks queue health and Redis latency.
5.1. n8n Built‑in Metrics
Enable the Prometheus endpoint by setting the following env var on every n8n container (main instance and workers); n8n then serves metrics on its regular port:

```yaml
- name: N8N_METRICS
  value: "true"
```

Scrape `http://<n8n-service>:5678/metrics`. Key series:

- `n8n_queue_jobs_total` – total jobs enqueued.
- `n8n_queue_jobs_failed` – failed executions.
- `n8n_worker_active_jobs` – jobs currently being processed.
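A minimal Prometheus scrape job for these endpoints might look like this sketch (the target hosts are placeholders — substitute your n8n services and whatever port they expose `/metrics` on):

```yaml
scrape_configs:
  - job_name: "n8n"
    metrics_path: /metrics
    static_configs:
      # Replace with your actual n8n main/worker hostnames and metrics port
      - targets: ["n8n.default.svc.cluster.local:5678"]
```

In Kubernetes, a ServiceMonitor (Prometheus Operator) or pod annotations are usually preferable to static targets, since worker pods come and go under the HPA.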
5.2. Redis Queue Health Dashboard (Grafana JSON)
```json
{
  "title": "n8n Redis Queue",
  "panels": [
    {
      "type": "graph",
      "title": "Queue Length (jobs waiting)",
      "targets": [{ "expr": "redis_queue_length{job=\"n8n\"}" }]
    },
    {
      "type": "graph",
      "title": "Worker Concurrency Utilization",
      "targets": [{ "expr": "sum(rate(n8n_worker_active_jobs[1m])) / sum(n8n_worker_capacity)" }]
    },
    {
      "type": "singlestat",
      "title": "Redis P99 Latency (ms)",
      "targets": [{ "expr": "histogram_quantile(0.99, sum(rate(redis_latency_seconds_bucket[5m])) by (le)) * 1000" }]
    }
  ]
}
```
Tip – Set alerts for **queue length > 500** or **p99 latency > 10 ms**; these thresholds usually indicate you need more workers or a Redis scaling event.
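Those thresholds can be encoded as Prometheus alerting rules; this sketch reuses the queries from the dashboard JSON above (adjust label matchers and `for` durations to your environment):

```yaml
groups:
  - name: n8n-queue
    rules:
      - alert: N8nQueueBacklog
        expr: redis_queue_length{job="n8n"} > 500
        for: 5m
        labels:
          severity: warning
      - alert: N8nRedisHighLatency
        expr: histogram_quantile(0.99, sum(rate(redis_latency_seconds_bucket[5m])) by (le)) * 1000 > 10
        for: 5m
        labels:
          severity: critical
```

The `for: 5m` clause suppresses alerts on short bursts, which are normal when a batch of workflows fires at once.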
6. Troubleshooting Common Issues
Micro‑summary: Identify the root cause quickly with the table below, then apply the concise fix.
| Symptom | Likely Cause | Fix |
|---|---|---|
| Workers keep restarting (`CrashLoopBackOff`) | Missing `REDIS_PASSWORD` secret or ACL mismatch | Verify the secret name; run `redis-cli ACL GETUSER <user>` |
| Jobs stuck in `queued` forever | Redis cluster `MOVED` errors not handled | Set `QUEUE_BULL_REDIS_CLUSTER_NODES` to all cluster nodes so the client follows redirects |
| Duplicate workflow executions | Queue prefix mismatch or two main instances polling the same queue | Run a single main instance; set `QUEUE_BULL_PREFIX` identically across all containers |
| High latency > 20 ms | Network partition between k8s nodes and Redis | Deploy Redis in a node‑local service or a dedicated VPC subnet |
| Memory bloat in n8n container | Worker concurrency too high for container limits | Reduce QUEUE_BULL_CONCURRENCY or increase container memory limit |
Quick Fix for “MOVED” Errors

```bash
# Point every worker at the full set of cluster nodes (example hosts – use your own)
export QUEUE_BULL_REDIS_CLUSTER_NODES=redis-0:6379,redis-1:6379,redis-2:6379
```

Or in Docker-Compose:

```yaml
- QUEUE_BULL_REDIS_CLUSTER_NODES=redis-0:6379,redis-1:6379,redis-2:6379
```

This tells ioredis (the Redis client underlying Bull, n8n's queue library) to connect in cluster mode and follow `MOVED` redirects automatically.
7. Conclusion
To scale n8n horizontally, enable queue mode by setting `EXECUTIONS_MODE=queue` and configuring the Redis connection (`QUEUE_BULL_REDIS_HOST`, `PORT`, `PASSWORD`). Deploy stateless worker containers (Docker or Kubernetes) that run `n8n worker`, and set `QUEUE_BULL_CONCURRENCY` based on CPU cores. Use a Redis cluster for high availability, expose Prometheus metrics, and autoscale workers with a HorizontalPodAutoscaler. Monitor queue length and Redis latency; add more workers or Redis shards when the queue exceeds 500 jobs or p99 latency exceeds 10 ms.



