Who this is for: DevOps engineers or system administrators who need to run multiple n8n instances behind a reliable reverse‑proxy load balancer. We cover this in detail in the n8n Performance & Scaling Guide.
Quick Diagnosis
Problem: A single n8n instance becomes a bottleneck under concurrent workflows, causing time‑outs and a degraded UI.
Solution: Deploy a reverse‑proxy (Nginx or HAProxy) in front of ≥ 2 n8n containers/VMs, optionally enable sticky sessions, and configure health checks. The steps below walk through how to configure a load balancer for n8n:
1️⃣ Deploy ≥ 2 n8n instances (Docker, PM2, etc.)
2️⃣ Install Nginx or HAProxy on a dedicated host
3️⃣ Add upstream servers pointing to each n8n instance
4️⃣ Enable health checks & session persistence (optional)
5️⃣ Reload the balancer and verify with curl or the UI
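A minimal Docker sketch for step 1️⃣ is shown below. The host names, image tag, and volume paths are placeholders; note that running several instances against the same workflow data typically also requires a shared Postgres database and queue mode, which is outside the scope of this section.
# Example only – one n8n container per host, both advertising the balancer's public URL.
# On host 10.0.1.21:
docker run -d --name n8n-01 -p 5678:5678 \
  -e WEBHOOK_URL=https://n8n.example.com/ \
  -e N8N_HOST=n8n.example.com \
  -v /srv/n8n:/home/node/.n8n \
  docker.n8n.io/n8nio/n8n
# On host 10.0.1.22, run the same command (its own data volume, same WEBHOOK_URL).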
1. Prerequisites & Environment Checklist
| Item | Details |
|---|---|
| n8n ≥ 1.0 on separate hosts | All instances must advertise the same public WEBHOOK_URL base (e.g., https://n8n.example.com). |
| Domain & DNS pointing to the balancer IP | A single A record or CNAME provides a stable public endpoint. |
| TLS certificate on the balancer | Terminating TLS off‑loads crypto work from the n8n workers. |
| Firewall rules: inbound 80/443 to the balancer, outbound 5678 to each instance | Prevents accidental exposure of internal n8n ports. |
| Health‑check endpoint (/healthz or /api/v1/ping) reachable on each node | Required for automatic removal of failed nodes. |
EEFA note – Never expose the default n8n port (5678) directly to the internet. Use the balancer as the sole public entry point and lock down internal ports with firewall rules.
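To illustrate the firewall rule from the checklist, here is a minimal ufw sketch. It assumes the balancer sits at 10.0.1.10 (a placeholder address) and the nodes listen on 5678; adapt it to your addressing and firewall tooling.
# On each n8n node: accept 5678 only from the balancer, reject it from everywhere else
sudo ufw allow from 10.0.1.10 to any port 5678 proto tcp
sudo ufw deny 5678/tcp
# On the balancer: expose only HTTP/HTTPS publicly
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable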
2. Nginx Load‑Balancing Configuration
2.1 Install Nginx (Ubuntu/Debian)
sudo apt-get update
sudo apt-get install -y nginx
sudo systemctl enable --now nginx
Installs and starts Nginx as a system service.
2.2 Define the upstream pool
Create /etc/nginx/conf.d/n8n_upstream.conf with the following snippet:
upstream n8n_backend {
server 10.0.1.21:5678 max_fails=3 fail_timeout=30s;
server 10.0.1.22:5678 max_fails=3 fail_timeout=30s;
keepalive 32;
}
Lists each n8n node; keepalive reduces TCP handshakes.
2.3 HTTP → HTTPS redirect
server {
listen 80;
server_name n8n.example.com;
return 301 https://$host$request_uri;
}
2.4 TLS termination and proxy core
Create the main HTTPS server block (first part – certificates and basic settings):
server {
listen 443 ssl http2;
server_name n8n.example.com;
ssl_certificate /etc/letsencrypt/live/n8n.example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/n8n.example.com/privkey.pem;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers HIGH:!aNULL:!MD5;
Handles TLS termination; replace the certificate paths with your own files. Leave this server block open – the location blocks in sections 2.5 to 2.8 sit inside it, and its closing brace appears at the end of section 2.8.
2.5 Proxy the UI and API
location / {
proxy_pass http://n8n_backend;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
2.6 WebSocket support
n8n’s UI uses WebSockets. Keep the location / block open and add the required headers inside it:
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
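A common refinement, assumed here rather than taken from the config above: defining a map in the http context lets the Connection header fall back to empty for ordinary requests, which is also what the keepalive pool from section 2.2 needs (upstream keep‑alive requires HTTP/1.1 and an empty Connection header). Place the map outside any server block, then reference it inside location /:
# http context, e.g. at the top of /etc/nginx/conf.d/n8n_upstream.conf
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      "";
}
# inside location /, instead of the fixed "upgrade" value:
proxy_set_header Connection $connection_upgrade;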
2.7 Timeouts for long‑running workflows
proxy_read_timeout 300s;
proxy_send_timeout 300s;
}
2.8 Optional health‑check endpoint
location /healthz {
proxy_pass http://n8n_backend/healthz;
}
}
The final brace above closes the HTTPS server block opened in section 2.4.
2.9 Sticky sessions (IP‑hash) – when needed
If the UI requires session persistence, replace the upstream definition with:
upstream n8n_backend {
ip_hash;
server 10.0.1.21:5678;
server 10.0.1.22:5678;
}
EEFA warning – ip_hash can skew traffic when many clients share a NAT IP. Prefer cookie‑based persistence if you need stickiness without uneven distribution.
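Open‑source Nginx does not provide the sticky cookie directive (that is an Nginx Plus feature), so one workaround is to hash on a session cookie the browser already sends. This is a sketch only; the cookie name below is a placeholder and must be replaced with whatever session cookie your n8n deployment actually sets.
upstream n8n_backend {
    # consistent hashing on a session cookie keeps a browser session on one node;
    # "sessionid" is a placeholder cookie name
    hash $cookie_sessionid consistent;
    server 10.0.1.21:5678;
    server 10.0.1.22:5678;
}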
2.10 Test and reload
sudo nginx -t                   # syntax check
sudo systemctl reload nginx
Verify the balancer responds:
curl -I https://n8n.example.com
You should see 200 OK and a Server: nginx header.
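To confirm that requests really alternate between back‑ends during testing, one option (a debugging aid assumed here, not part of the final config) is to expose the chosen upstream in a temporary response header and then remove it:
# inside the HTTPS server block – testing only, remove afterwards
add_header X-Upstream $upstream_addr always;
# repeated requests should show both 10.0.1.21:5678 and 10.0.1.22:5678
for i in 1 2 3 4; do
  curl -sk -o /dev/null -D - https://n8n.example.com | grep -i x-upstream
done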
3. HAProxy Load‑Balancing Configuration
3.1 Install HAProxy (CentOS/RHEL)
sudo yum install -y haproxy
sudo systemctl enable --now haproxy
Installs HAProxy and starts it as a service.
3.2 Global and default settings
Add the following to /etc/haproxy/haproxy.cfg:
global
log /dev/log local0
daemon
maxconn 5000
tune.ssl.default-dh-param 2048
defaults
log global
mode http
option httplog
timeout connect 5s
timeout client 50s
timeout server 50s
timeout check 5s
Sets process‑wide limits and basic HTTP defaults.
3.3 Frontend – TLS termination and routing
frontend n8n_front
bind *:80
bind *:443 ssl crt /etc/haproxy/certs/n8n.example.com.pem alpn h2,http/1.1
http-request redirect scheme https unless { ssl_fc }
acl is_websocket hdr(Upgrade) -i WebSocket
use_backend n8n_ws if is_websocket
default_backend n8n_http
Listens on both ports, redirects plain HTTP, and directs WebSocket traffic to a dedicated backend. Unlike Nginx, HAProxy’s crt expects a single PEM file containing both the certificate chain and the private key, so concatenate fullchain.pem and privkey.pem into the path shown above.
3.4 Backend for HTTP API & UI
backend n8n_http
balance roundrobin
option httpchk GET /healthz
http-check expect status 200
server n8n01 10.0.1.21:5678 check inter 5s rise 2 fall 3
server n8n02 10.0.1.22:5678 check inter 5s rise 2 fall 3
Performs health checks against /healthz and distributes requests evenly.
3.5 Backend for WebSocket – cookie‑based stickiness
backend n8n_ws
balance roundrobin
cookie SERVERID insert indirect nocache
server n8n01 10.0.1.21:5678 check cookie n8n01
server n8n02 10.0.1.22:5678 check cookie n8n02
Ensures a client stays on the same node for the duration of a WebSocket session.
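The verification table in section 4 refers to the HAProxy stats page, which is not enabled by default. A minimal listener such as the sketch below exposes it; the port and credentials are placeholders, and access should be restricted in production.
listen stats
    bind *:8404
    mode http
    stats enable
    stats uri /stats
    stats refresh 10s
    stats auth admin:changeme   # placeholder credentials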
3.6 Reload HAProxy safely
sudo haproxy -c -f /etc/haproxy/haproxy.cfg   # config test
sudo systemctl reload haproxy
Confirm routing works:
curl -k https://n8n.example.com/api/v1/ping
A JSON response { "ping": "pong" } indicates success.
4. Verification, Monitoring & Troubleshooting
4.1 Core health checks
| Check | Command | Expected Result |
|---|---|---|
| Load‑balancer health | curl -I https://n8n.example.com/healthz | 200 OK from **all** back‑ends |
| Sticky session | Open two browser tabs, start a workflow in each | Both tabs stay on the same backend (inspect X-Forwarded-For in the logs) |
| WebSocket continuity | Open the n8n UI and watch the network tab for 101 Switching Protocols | No “connection reset” errors |
| Request distribution | Nginx: check stub_status or the access logs; HAProxy: the stats page (/stats) | Balanced request counts across nodes |
| Error logs | tail -f /var/log/nginx/error.log or /var/log/haproxy.log | No 502 Bad Gateway unless a node is truly down |
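A quick scripted version of the first check polls each back‑end’s health endpoint directly (the node IPs are the example addresses used throughout; adjust as needed):
# run from the balancer host
for node in 10.0.1.21 10.0.1.22; do
  code=$(curl -s -o /dev/null -w '%{http_code}' "http://${node}:5678/healthz")
  echo "${node}: HTTP ${code}"
done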
4.2 Common pitfalls & fixes
| Symptom | Likely Cause | Fix |
|---|---|---|
| `502 Bad Gateway` after a node restart | Health check still marks the node **up** while the service is down | Reduce the inter/fall values or enable slow-start. |
| WebSocket disconnects | Missing Upgrade/Connection headers | Ensure the WebSocket directives in location / (Nginx) or the n8n_ws backend (HAProxy) are present. |
| Workflow webhook URLs point to internal IPs | WEBHOOK_URL not set to the public domain | Set WEBHOOK_URL=https://n8n.example.com on every n8n instance. |
| Uneven traffic | ip_hash used with many clients behind a NAT | Switch to cookie‑based persistence or remove the sticky config. |
5. Scaling Beyond Two Instances
- Add more upstream servers – simply append additional server <ip>:5678 lines.
- Dynamic service discovery – integrate Consul, etcd, or Docker Swarm with HAProxy’s server-template directive to auto‑populate the pool.
- Kubernetes – expose n8n via a ClusterIP Service and let an Ingress controller (NGINX Ingress) handle load‑balancing; the same upstream concepts apply.
- Rate limiting – protect downstream workers (a fuller Nginx sketch follows this list):
  - Nginx: limit_req zone=one burst=5 nodelay;
  - HAProxy: stick-table type ip size 200k expire 30s store gpc0
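For the Nginx variant, limit_req only works once a shared zone has been defined in the http context; a minimal sketch with illustrative values (zone name, rate, and burst are examples, not recommendations):
# http context – 10 MB zone keyed by client IP, 10 requests per second
limit_req_zone $binary_remote_addr zone=one:10m rate=10r/s;
# inside the location / block of the n8n server
limit_req zone=one burst=5 nodelay;
limit_req_status 429;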
EEFA recommendation – When scaling past 5–10 nodes, off‑load TLS termination to an edge proxy (e.g., Cloudflare or AWS ALB) and keep the internal balancer HTTP‑only. This reduces CPU load on Nginx/HAProxy and simplifies certificate management.
Conclusion
Deploying a reverse‑proxy load balancer in front of multiple n8n instances eliminates the single‑point‑of‑failure bottleneck and provides:
- High availability – health checks automatically remove unhealthy nodes.
- Scalability – add servers without changing client‑facing URLs.
- Performance – TLS termination and keep‑alive connections reduce latency.
- Reliability – sticky sessions (via cookies) keep WebSocket connections stable while preserving balanced traffic.
Follow the step‑by‑step Nginx or HAProxy configurations above, verify health endpoints, and monitor request distribution. Your n8n deployment will now handle concurrent workflows with production‑grade resilience.



