
Who this is for: DevOps or platform engineers who need to run n8n in a horizontally‑scaled Docker/Kubernetes environment with deterministic execution and consistent workflow state across all instances. We cover the broader topic in the n8n Performance & Scaling Guide.
Sync Setup
| Goal | Minimal Config | Required Services | Key Env Vars |
|---|---|---|---|
| Consistent workflow execution across 3+ n8n nodes | Use PostgreSQL for DB + Redis (or RabbitMQ) for queue | PostgreSQL 13+, Redis 6+ (or RabbitMQ 3.8+) | DB_TYPE=postgresdb DB_POSTGRESDB_HOST=postgres EXECUTIONS_PROCESS=queue QUEUE_BULL_REDIS_HOST=redis |
- Deploy a shared PostgreSQL instance.
- Deploy a Redis (or RabbitMQ) broker.
- Apply the env‑vars above on every n8n container.
- Restart all nodes – they now share workflow definitions and pull jobs from a single queue, guaranteeing state consistency.
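Before restarting, it is worth verifying that every container actually received the four variables. A minimal Python sketch of such a preflight check (the helper name is ours, not part of n8n):

```python
# Preflight sketch: report which of the env vars required for shared
# state are missing from a given environment (pass os.environ in-container).
REQUIRED = ["DB_TYPE", "DB_POSTGRESDB_HOST",
            "EXECUTIONS_PROCESS", "QUEUE_BULL_REDIS_HOST"]

def missing_sync_vars(env: dict) -> list:
    """Return the required variables that are unset or empty."""
    return [v for v in REQUIRED if not env.get(v)]

# Illustrative partial config — two variables still missing:
print(missing_sync_vars({"DB_TYPE": "postgresdb",
                         "DB_POSTGRESDB_HOST": "postgres"}))
# → ['EXECUTIONS_PROCESS', 'QUEUE_BULL_REDIS_HOST']
```

Running this against each container's environment before restart catches the most common misconfiguration (a node silently falling back to SQLite or in-process execution).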
Quick Diagnosis: What Problem Does This Page Solve?
When you horizontally scale n8n (Docker Swarm, Kubernetes, or multiple VMs), each node keeps its own execution cache. Without coordination the same workflow can fire on several nodes simultaneously, causing duplicate actions, race conditions, and data drift.
Solution – Centralize workflow definitions, execution state, and queue handling using a shared PostgreSQL database and a distributed job broker (Redis or RabbitMQ). The guide below shows the exact configuration, minimal code snippets, and troubleshooting steps to achieve deterministic, conflict‑free execution across any number of n8n instances.
1. Execution Model Overview
1.1 Workflow Storage
| Mode | Storage Backend | Note |
|---|---|---|
| Single‑instance | SQLite (local file) | Not shareable across containers |
| Multi‑instance | PostgreSQL / MySQL / MariaDB | Network‑accessible DB required |
1.2 Execution Queue
| Mode | Queue Implementation | Benefit |
|---|---|---|
| Single‑instance | In‑process (synchronous) | Simple, but no deduplication |
| Multi‑instance | Bull (Redis) or RabbitMQ | Guarantees exactly‑once processing |
1.3 Static Data & Credentials
| Storage | Location | Recommendation |
|---|---|---|
| Local files (`/root/.n8n`) | Container FS | Do not use in scaled setups |
| DB‑backed tables (`staticData`) | Shared DB | Default when DB_TYPE is external |
When EXECUTIONS_PROCESS=queue is set, n8n publishes every trigger event to the broker. Any node can pull the job, ensuring a single execution per trigger.
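The guarantee can be illustrated with a toy simulation (Python, illustrative only — n8n's actual queue is Bull on Redis): once a worker claims a job it disappears for everyone else, so no trigger runs twice.

```python
import queue

# Toy model: one shared queue, three workers claiming jobs round-robin.
# A claim removes the job for all workers — exactly one execution per event.
jobs = queue.Queue()
for trigger_id in range(9):
    jobs.put(trigger_id)

claimed = {w: [] for w in ("n8n-0", "n8n-1", "n8n-2")}
workers = list(claimed)
turn = 0
while True:
    try:
        job = jobs.get_nowait()      # claiming dequeues the job for everyone
    except queue.Empty:
        break
    claimed[workers[turn % 3]].append(job)
    turn += 1

all_claims = sorted(j for batch in claimed.values() for j in batch)
print(all_claims == list(range(9)))  # True: every event claimed exactly once
```

Without queue mode, each node's in-process trigger would enqueue its own copy of the event, and the same `trigger_id` would appear in several workers' lists.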
2. Choosing a Shared Persistence Layer
2.1 Database Options
| DB Engine | Pros | Cons |
|---|---|---|
| PostgreSQL | Strong ACID, JSONB, `SELECT FOR UPDATE` locking | Slightly heavier |
| MySQL / MariaDB | Familiar to many DBAs | Limited JSON querying (MySQL 5.7+) |
| SQLite (shared NFS) | Zero‑config | File‑locking issues under concurrency – not recommended |
Note – PostgreSQL’s FOR UPDATE SKIP LOCKED prevents workers from stealing each other’s execution rows, eliminating duplicate runs.
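The SKIP LOCKED semantics can be modeled in a few lines of Python (an illustrative simulation, not database code; the SQL shape in the comment uses assumed table/column names):

```python
import threading

# Model of FOR UPDATE SKIP LOCKED: each pending execution row carries a
# lock; a worker that cannot acquire a row's lock skips it instead of
# waiting, so two workers can never claim the same row.
# Assumed, simplified SQL shape:
#   SELECT id FROM pending_executions FOR UPDATE SKIP LOCKED LIMIT 3;
rows = {row_id: threading.Lock() for row_id in range(5)}

def claim_batch(limit: int) -> list:
    got = []
    for row_id, lock in rows.items():
        if len(got) == limit:
            break
        if lock.acquire(blocking=False):   # SKIP LOCKED: never wait
            got.append(row_id)
    return got

batch_a = claim_batch(3)            # first worker locks rows 0-2
batch_b = claim_batch(3)            # second worker skips them, gets 3-4
print(batch_a, batch_b)             # [0, 1, 2] [3, 4]
print(set(batch_a) & set(batch_b))  # set(): no row claimed twice
```

A plain `FOR UPDATE` (without `SKIP LOCKED`) would instead make the second worker block on row 0 until the first transaction commits, serializing the workers.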
2.2 Queue Broker Options
| Broker | Typical Latency | Scaling Ease | Persistence |
|---|---|---|---|
| Redis (Bull) | < 5 ms | Horizontal scaling trivial | In‑memory, optional RDB/AOF |
| RabbitMQ | ~ 10 ms | Supports complex routing | Durable queues |
| Kafka | ~ 1 ms (high‑throughput) | Very large clusters | Log‑based persistence – overkill for most n8n workloads |
Note – Reuse an existing Redis cache if available; otherwise RabbitMQ gives built‑in dead‑letter handling for failed jobs.
3. Configuring Queue Mode with Redis
3.1 Docker‑Compose: Service Definitions
PostgreSQL service:

```yaml
postgres:
  image: postgres:13
  environment:
    POSTGRES_USER: n8n
    POSTGRES_PASSWORD: secret
```
Redis service:

```yaml
redis:
  image: redis:6-alpine
  command: ["redis-server", "--appendonly", "yes"]
```
n8n service (core env vars):

```yaml
n8n:
  image: n8nio/n8n:latest
  environment:
    - DB_TYPE=postgresdb
    - DB_POSTGRESDB_HOST=postgres
    - EXECUTIONS_PROCESS=queue
    - QUEUE_BULL_REDIS_HOST=redis
```
n8n service – optional auth & ports:

```yaml
    - N8N_BASIC_AUTH_ACTIVE=true
    - N8N_BASIC_AUTH_USER=admin
    - N8N_BASIC_AUTH_PASSWORD=supersecret
  ports:
    - "5678:5678"
```
Note – `appendonly yes` in Redis ensures queued jobs survive node restarts.
3.2 Kubernetes Manifest – Deployment & Service
Deployment:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: n8n
spec:
  replicas: 3
  selector:
    matchLabels:
      app: n8n
```
Container spec with env:

```yaml
  template:
    metadata:
      labels:
        app: n8n
    spec:
      containers:
        - name: n8n
          image: n8nio/n8n:latest
          env:
            - name: DB_TYPE
              value: "postgresdb"
            - name: DB_POSTGRESDB_HOST
              value: "postgres-svc"
            - name: EXECUTIONS_PROCESS
              value: "queue"
            - name: QUEUE_BULL_REDIS_HOST
              value: "redis-svc"
```
Service:

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: n8n-svc
spec:
  selector:
    app: n8n
  ports:
    - port: 80
      targetPort: 5678
  type: LoadBalancer
```
Note – Use a headless Service for Redis if you need stable DNS per pod. Add podAntiAffinity to avoid co‑locating all n8n pods on a single node.
4. Synchronizing Static Data, Credentials & Secrets
4.1 Migration Checklist
| Step | Action |
|---|---|
| 1 | Ensure `DB_TYPE` is not `sqlite`. |
| 2 | Export local credentials: `n8n export:credentials --output ./creds.json`. |
| 3 | Import them into the shared DB: `n8n import:credentials --input ./creds.json`. |
| 4 | Delete the local `.n8n` folder on each node (or mount it read‑only). |
| 5 | Restart all n8n instances. |
Note – Credentials are encrypted with `N8N_ENCRYPTION_KEY`. The same key must be set on every pod; otherwise imported credentials become unreadable.
4.2 Setting a Shared Encryption Key

```shell
export N8N_ENCRYPTION_KEY=$(openssl rand -base64 32)
docker run -e N8N_ENCRYPTION_KEY=$N8N_ENCRYPTION_KEY \
  -e DB_TYPE=postgresdb \
  -e DB_POSTGRESDB_HOST=postgres \
  -e EXECUTIONS_PROCESS=queue \
  -e QUEUE_BULL_REDIS_HOST=redis \
  n8nio/n8n
```

The key is generated into the shell variable, not printed; capture it (e.g. `echo "$N8N_ENCRYPTION_KEY"`) and set the same value in the environment of every other n8n instance.
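A drift check across the fleet can compare key fingerprints rather than raw keys. A hedged Python sketch (node names and key values are illustrative; how you collect them — kubectl, SSH, a config store — is up to you):

```python
import hashlib

# Compare an encryption-key *fingerprint* from each node; never log or
# transmit the raw key itself.
def key_fingerprint(key: str) -> str:
    return hashlib.sha256(key.encode()).hexdigest()[:12]

keys_by_node = {                      # illustrative gathered values
    "n8n-0": "c2hhcmVkLWtleQ==",
    "n8n-1": "c2hhcmVkLWtleQ==",
    "n8n-2": "c2hhcmVkLWtleQ==",
}
fingerprints = {node: key_fingerprint(k) for node, k in keys_by_node.items()}
consistent = len(set(fingerprints.values())) == 1
print("key consistent on all nodes:", consistent)
```

Run this periodically (or in CI before a rollout); a single differing fingerprint is exactly the condition that produces the "credentials decryption error" covered in the troubleshooting table below.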
5. Health Checks, Conflict Resolution & Idempotency
5.1 Built‑in Health Endpoint

```yaml
livenessProbe:
  httpGet:
    path: /healthz
    port: 5678
  initialDelaySeconds: 30
  periodSeconds: 10
```
Configure your load balancer to route traffic only to pods that report healthy.
5.2 Idempotent Trigger Patterns
| Technique | Implementation |
|---|---|
| Idempotent nodes | Store a unique request ID in workflow static data (Set node) and skip processing if it already exists. |
| Webhook signature verification | Validate HMAC header before any business logic. |
| Debounce logic | Add a Delay node with `Wait Until` to coalesce rapid repeats. |
Note – For cron triggers, set cronTimezone to a fixed zone (e.g., UTC) on all nodes to avoid overlapping runs.
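The webhook‑signature technique from the table above can be sketched as follows (the secret, payload, and header naming are illustrative — use whatever the sending service documents):

```python
import hashlib
import hmac

# Recompute the HMAC over the raw request body and compare it to the
# signature header in constant time, before any business logic runs.
def verify_webhook(secret: bytes, body: bytes, signature_hex: str) -> bool:
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)

secret = b"shared-webhook-secret"                       # illustrative value
body = b'{"event": "order.created", "id": "req-42"}'
good_sig = hmac.new(secret, body, hashlib.sha256).hexdigest()

print(verify_webhook(secret, body, good_sig))                # True
print(verify_webhook(secret, b'{"event": "x"}', good_sig))   # False
```

`hmac.compare_digest` matters here: a naive `==` comparison leaks timing information an attacker can use to forge signatures byte by byte.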
5.3 Monitoring Queue Backlog

```shell
redis-cli -h redis llen bull:default   # pending jobs
```

If the length exceeds 500, consider scaling n8n replicas or increasing Redis `maxmemory`.
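Turning that threshold into a scaling decision can be as simple as a ceiling division. A back‑of‑envelope Python sketch (the 500‑job threshold comes from the text; `jobs_per_replica` is an assumed capacity figure you should measure for your workloads):

```python
import math

# Scale out so that each replica carries at most `jobs_per_replica`
# pending jobs, while never dropping below the baseline fleet size.
def recommended_replicas(backlog: int, jobs_per_replica: int = 200,
                         minimum: int = 3) -> int:
    return max(minimum, math.ceil(backlog / jobs_per_replica))

print(recommended_replicas(120))   # 3 — backlog fits the minimum fleet
print(recommended_replicas(900))   # 5 — scale out before the queue grows
```

Feeding the `llen` reading into a helper like this (via a cron job or a custom metrics exporter) gives you a crude but serviceable autoscaling signal.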
6. Troubleshooting Common Sync Issues
| Symptom | Likely Cause | Fix |
|---|---|---|
| Duplicate API calls | Two nodes processing the same queue item | Verify a single broker address (QUEUE_BULL_REDIS_HOST) and that EXECUTIONS_PROCESS=queue is set on all nodes. |
| Workflow not found after scaling | Workers pointing to different DB instances | Use one connection string (DB_POSTGRESDB_HOST) and confirm DNS resolves to the same service. |
| Credentials decryption error | Mismatched N8N_ENCRYPTION_KEY across pods | Export the key from a working node and propagate it uniformly. |
| Stale static data after pod restart | Persistent volume not shared | Switch to DB‑backed static data (default with external DB) or mount a ReadWriteMany PVC (e.g., NFS) across pods. |
| Redis connection timeout | Network policy blocks port 6379 | Add an allow rule for the Redis service in your cluster’s network policy. |
6.1 Diagnostic Script: Part 1

```shell
#!/usr/bin/env bash
echo "=== DB connectivity ==="
psql -h "$DB_POSTGRESDB_HOST" -U "$DB_POSTGRESDB_USER" \
  -d "$DB_POSTGRESDB_DATABASE" -c "\dt"
```

6.2 Diagnostic Script: Part 2

```shell
echo "=== Redis queue length ==="
redis-cli -h "$QUEUE_BULL_REDIS_HOST" llen bull:default
```

6.3 Diagnostic Script: Part 3

```shell
echo "=== Encryption key consistency ==="
# /proc/<pid>/environ is NUL-separated, and pgrep may match several PIDs,
# so split on NUL and check every matching process.
for pid in $(pgrep -f n8n); do
  tr '\0' '\n' < "/proc/$pid/environ" | grep '^N8N_ENCRYPTION_KEY='
done | sort | uniq -c
```
Run the three parts on each node; outputs should be identical.
Conclusion
By centralizing workflow definitions in PostgreSQL and delegating trigger handling to a single Redis (or RabbitMQ) queue, every n8n instance works from the same source of truth and processes each event exactly once. Consistent N8N_ENCRYPTION_KEY, shared static‑data storage, and health‑checked pods complete the production‑ready recipe. Apply the snippets above, verify the diagnostics, and your scaled n8n fleet will run deterministically, free from duplicate actions and race conditions.



