
I spent three hours chasing a webhook that kept returning http://localhost:5678 in production – even though n8n was running live behind a reverse proxy. The fix was one environment variable: WEBHOOK_URL. I’ve since configured n8n for dozens of production setups. This page is the reference I wish existed the first time.
⚠️ 5 Variables You Must Set Before First Production Boot
Most n8n pain in production traces back to skipping these five. Set them before you do anything else.
| Variable | Why it can’t wait | Safe value to start |
|---|---|---|
| N8N_ENCRYPTION_KEY | If you lose this after credentials are saved, every credential in the DB becomes unreadable — no recovery. | openssl rand -hex 32 → store in a secret manager |
| WEBHOOK_URL | Without this, webhook URLs display as localhost:5678 even in production. External services can’t reach them. | https://n8n.yourdomain.com/ |
| N8N_HOST | Sets the hostname n8n binds to. Required behind reverse proxies to construct correct internal URLs. | n8n.yourdomain.com |
| DB_TYPE | Default is SQLite — fine for dev, breaks under production load. Migrate before data accumulates, not after. | postgresdb for production |
| EXECUTIONS_TIMEOUT | Default is 0 (no timeout). One stuck workflow will block workers and OOM the container. Always set a ceiling. | 3600 (1 hour max) |
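As a quick start, the five values above can go straight into a .env file. A minimal sketch; the domain is a placeholder and the key must be replaced with real openssl rand -hex 32 output:

```shell
# Write a starter .env with the five must-set variables.
# Placeholders: replace the key and domain with your own values.
cat > .env <<'EOF'
N8N_ENCRYPTION_KEY=replace_with_openssl_rand_hex_32_output
WEBHOOK_URL=https://n8n.yourdomain.com/
N8N_HOST=n8n.yourdomain.com
DB_TYPE=postgresdb
EXECUTIONS_TIMEOUT=3600
EOF
grep -c '=' .env   # sanity check: all five lines present
# → 5
```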
Who this is for: System administrators and DevOps engineers running n8n in production who need low‑latency, high‑throughput workflows without rewriting code. We cover this in detail in the n8n Performance & Scaling Guide.
Quick Diagnosis
If your n8n instance feels sluggish, stalls on large payloads, or exhausts memory under load, the root cause is often mis‑configured environment variables. Adjusting the right variables (e.g., EXECUTIONS_PROCESS, MAX_BINARY_DATA_SIZE, WORKER_CONCURRENCY) can instantly boost throughput.
Quick fix: add the variables to your Docker Compose file, restart, and you’ll typically see a 20–30 % reduction in average execution latency.
```yaml
services:
  n8n:
    image: n8nio/n8n
    environment:
      - EXECUTIONS_PROCESS=main
      - MAX_BINARY_DATA_SIZE=50 # MB
      - WORKER_CONCURRENCY=5
```
1. Why Environment Variables Matter for n8n Performance
Execution‑model variables
| Variable | Default | What it controls |
|---|---|---|
| EXECUTIONS_PROCESS | main | Runs workflows in the main Node process (main) or forks a child process (queue) per execution. |
| WORKER_CONCURRENCY | 1 | Number of parallel workers when EXECUTIONS_PROCESS=queue. |
| EXECUTIONS_TIMEOUT | 0 (no timeout) | Max seconds a workflow may run before being aborted. |
Payload‑size & logging variables
| Variable | Default | What it controls |
|---|---|---|
| MAX_BINARY_DATA_SIZE | 10 MB | Upper bound for binary data (files, base64 blobs) that can be stored in the DB. |
| N8N_LOG_LEVEL | info | Verbosity of internal logs (debug, trace increase I/O). |
| N8N_DISABLE_PROFILING | false | Disables the built‑in CPU profiling endpoint. |
All variables are read once at startup, so any change requires a container or process restart.
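Because values are only read at startup, it is worth confirming what a process actually inherited before debugging anything else. A small sketch using a child shell; against a running container, the equivalent is docker exec n8n printenv:

```shell
# A child process sees only what it inherited at spawn time; this is the
# same reason a restart is required after editing variables.
# Simulated locally (in Docker: docker exec n8n printenv | grep '^N8N_'):
N8N_HOST=n8n.example.com N8N_PROTOCOL=https sh -c 'printenv | grep "^N8N_" | sort'
```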
1a. N8N_HOST, WEBHOOK_URL, and N8N_PROTOCOL – The Network Variables
These three variables are the most common source of production failures for n8n deployments running behind a reverse proxy (Nginx, Traefik, Caddy). Most devs get stuck here because n8n constructs its own URLs internally — and if these aren’t set, it builds them wrong.
How n8n builds its webhook URL internally:
```text
Webhook URL = N8N_PROTOCOL + "://" + N8N_HOST + ":" + N8N_PORT + "/webhook/..."

# Without these set, n8n defaults to:
# http://localhost:5678/webhook/...   ← wrong in production
```
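A runnable sketch of that formula, showing both the broken default and the WEBHOOK_URL override (the domain is a placeholder):

```shell
# Defaults as documented above; produces the broken production URL:
N8N_PROTOCOL=http
N8N_HOST=localhost
N8N_PORT=5678
WEBHOOK_URL=""   # not set
base="${WEBHOOK_URL:-${N8N_PROTOCOL}://${N8N_HOST}:${N8N_PORT}/}"
echo "${base}webhook/abc123"
# → http://localhost:5678/webhook/abc123

# Setting WEBHOOK_URL replaces the whole formula:
WEBHOOK_URL="https://n8n.yourdomain.com/"
base="${WEBHOOK_URL:-${N8N_PROTOCOL}://${N8N_HOST}:${N8N_PORT}/}"
echo "${base}webhook/abc123"
# → https://n8n.yourdomain.com/webhook/abc123
```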
Variable reference
| Variable | Default | What it controls |
|---|---|---|
| N8N_HOST | localhost | The hostname n8n binds to and uses when constructing internal URLs. Set to your public domain. |
| N8N_PORT | 5678 | The port n8n listens on. Keep at 5678 internally; your reverse proxy handles 443 externally. |
| N8N_PROTOCOL | http | Protocol used when building URLs. Set to https when running behind SSL termination. |
| WEBHOOK_URL | auto-built from above | Overrides the auto-built URL entirely. Use this when behind a reverse proxy — it’s the most reliable fix. |
| N8N_PROXY_HOPS | 0 | Number of reverse proxies between the internet and n8n. Set to 1 when behind Nginx/Traefik. |
Correct config for a reverse-proxy setup
```yaml
environment:
  - N8N_HOST=n8n.yourdomain.com
  - N8N_PORT=5678
  - N8N_PROTOCOL=https
  - WEBHOOK_URL=https://n8n.yourdomain.com/
  - N8N_PROXY_HOPS=1
```
The most common mistake: Setting N8N_HOST but forgetting WEBHOOK_URL. When behind a reverse proxy, n8n runs on port 5678 internally but is exposed on port 443 externally. The auto-calculated URL will include :5678 which external services can’t reach. Always set WEBHOOK_URL explicitly to the full public URL including protocol.
1b. DB_* Variables – Database Configuration
n8n defaults to SQLite — which is fine for local testing but breaks under production load: database lock errors, slow execution history queries, and no support for queue mode with multiple workers. The DB_* variable group switches you to PostgreSQL.
When to move from SQLite to PostgreSQL:
| Condition | Stay on SQLite | Move to PostgreSQL |
|---|---|---|
| Concurrent workers | Single worker only | Any queue mode setup |
| Execution volume | < 500/day | > 500/day or growing |
| Reliability requirement | Dev / internal tools | Customer-facing, production |
| Backup / HA needs | None | Any backups or read replicas |
Core DB_* variables
| Variable | Default | What it controls |
|---|---|---|
| DB_TYPE | sqlite | Database backend. Use postgresdb for production. |
| DB_POSTGRESDB_HOST | — | Hostname of your PostgreSQL server (e.g., postgres in Docker Compose, or a managed host). |
| DB_POSTGRESDB_PORT | 5432 | PostgreSQL port. Default is almost always correct. |
| DB_POSTGRESDB_DATABASE | n8n | Database name to connect to. |
| DB_POSTGRESDB_SCHEMA | public | Schema within the database. Change if you share a DB with other apps. |
| DB_POSTGRESDB_USER | — | PostgreSQL username. |
| DB_POSTGRESDB_PASSWORD | — | Password. Use DB_POSTGRESDB_PASSWORD_FILE instead to load from a Docker/K8s secret. |
| DB_POSTGRESDB_POOL_SIZE | 2 | Number of pooled connections. Increase to 10 when using queue mode with multiple workers. |
| DB_POSTGRESDB_SSL_ENABLED | false | Enable TLS to the database. Always set to true for managed cloud databases (RDS, Supabase, etc.). |
Copy-paste Docker Compose: n8n + PostgreSQL
```yaml
version: "3.8"

services:
  postgres:
    image: postgres:15
    restart: unless-stopped
    environment:
      - POSTGRES_USER=n8n
      - POSTGRES_PASSWORD=your_secure_password
      - POSTGRES_DB=n8n
    volumes:
      - postgres_data:/var/lib/postgresql/data

  n8n:
    image: n8nio/n8n:latest
    restart: unless-stopped
    ports:
      - "5678:5678"
    environment:
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_PASSWORD=your_secure_password
      - DB_POSTGRESDB_POOL_SIZE=10
      - N8N_HOST=n8n.yourdomain.com
      - N8N_PROTOCOL=https
      - WEBHOOK_URL=https://n8n.yourdomain.com/
      - N8N_ENCRYPTION_KEY=your_64_char_hex_key
    volumes:
      - n8n_data:/home/node/.n8n
    depends_on:
      - postgres

volumes:
  postgres_data:
  n8n_data:
```
1c. Security Variables – N8N_ENCRYPTION_KEY and Auth
These variables are skipped most often and cause the most catastrophic failures. A lost N8N_ENCRYPTION_KEY means every credential stored in your database becomes permanently unreadable — n8n has no recovery path for this.
| Variable | Default | What it controls |
|---|---|---|
| N8N_ENCRYPTION_KEY | auto-generated | AES key used to encrypt all stored credentials. Auto-generated if not set — which means it changes on every new container, invalidating all credentials. |
| N8N_BASIC_AUTH_ACTIVE | false | Enables HTTP basic auth on the n8n UI. Required if you expose n8n without SSO. |
| N8N_BASIC_AUTH_USER | — | Username for basic auth. |
| N8N_BASIC_AUTH_PASSWORD | — | Password for basic auth. Use a secret file in production. |
| N8N_BLOCK_ENV_ACCESS_IN_NODE | false | Prevents workflow Code nodes from reading process environment variables. Set to true in production — otherwise any workflow can read your secrets. |
Generate a safe encryption key:
```shell
# Generate a 64-character hex key (do this once, store permanently)
openssl rand -hex 32

# Then set it in your environment — never let n8n auto-generate this
N8N_ENCRYPTION_KEY=paste_your_64_char_key_here
```
Critical rule: Store N8N_ENCRYPTION_KEY in a secret manager (AWS Secrets Manager, Vault, or even a password manager) before your first boot. If you redeploy without it, you will re-enter every single credential manually.
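If a container has already auto-generated a key, recover it before any redeploy: the official Docker image persists it in /home/node/.n8n/config inside the container. A sketch run against a local copy of that file (the sample key is fake; in Docker you would first run docker cp n8n:/home/node/.n8n/config ./config):

```shell
# Sample stand-in for the copied config file (real files hold more keys):
printf '{"encryptionKey":"abc123def456"}\n' > ./config

# Extract the key so it can be stored in your secret manager:
sed -n 's/.*"encryptionKey" *: *"\([^"]*\)".*/\1/p' ./config
# → abc123def456
```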
2. Configuring Variables in Different Deployment Scenarios
2.1 Docker‑Compose (most common)
Add the variables to the service definition and mount a persistent volume for the database.
```yaml
version: "3.8"

services:
  n8n:
    image: n8nio/n8n:latest
    restart: unless-stopped
    ports:
      - "5678:5678"
    environment:
      - EXECUTIONS_PROCESS=queue
      - WORKER_CONCURRENCY=4
      - MAX_BINARY_DATA_SIZE=100 # MB
      - N8N_LOG_LEVEL=warn
    volumes:
      - ./n8n_data:/home/node/.n8n
```
Note: Setting WORKER_CONCURRENCY higher than the number of physical CPU cores can cause context‑switch thrashing. Use nproc on the host to verify core count.
2.2 Kubernetes (Helm chart)
Inject the variables via the pod spec. Adjust CPU limits to match the worker count.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: n8n
spec:
  replicas: 1
  selector:
    matchLabels:
      app: n8n
  template:
    metadata:
      labels:
        app: n8n
    spec:
      containers:
        - name: n8n
          image: n8nio/n8n:latest
          env:
            - name: EXECUTIONS_PROCESS
              value: "queue"
            - name: WORKER_CONCURRENCY
              value: "6"
            - name: MAX_BINARY_DATA_SIZE
              value: "200"
            - name: N8N_LOG_LEVEL
              value: "error"
          resources:
            limits:
              cpu: "2000m"
              memory: "2Gi"
```
Note: When using queue, ensure the pod’s CPU limit is at least WORKER_CONCURRENCY × 250m to avoid throttling.
2.3 Stand‑alone Node.js (local dev)
Create a .env file in the project root and let dotenv load it.
```shell
EXECUTIONS_PROCESS=main
MAX_BINARY_DATA_SIZE=20
N8N_LOG_LEVEL=debug
```
Run the instance:
```shell
npm run start   # or: n8n start
```
3. Deep‑Dive: Tuning the Execution Model
3.1 EXECUTIONS_PROCESS=main vs queue
| Aspect | main | queue |
|---|---|---|
| Isolation | No separate process; one bad workflow can crash the whole instance. | Each execution runs in its own child process, protecting the main thread. |
| Memory overhead | Low (single Node heap). | Higher – each worker spawns its own V8 heap (≈ 150 MB per worker). |
| Latency | Minimal IPC → lower per‑execution latency. | Slight overhead for spawning/communicating with workers. |
| Scaling | Limited by single‑threaded event loop. | Scales with WORKER_CONCURRENCY. |
When to use queue
- CPU‑intensive JavaScript or large binary transformations.
- Multi‑tenant environments that need hard isolation.
When to stay on main
- Light‑weight automations (<10 ms avg).
- Memory is a premium (e.g., low‑end VPS).
Warning: Switching to queue on a 512 MB VPS without increasing RAM will cause OOM kills. Pair with MAX_BINARY_DATA_SIZE adjustments and a memory‑limit alert.
3.2 Optimizing WORKER_CONCURRENCY
- Detect core count on the host:

  ```shell
  nproc                     # Linux
  sysctl -n hw.logicalcpu   # macOS
  ```

- Set a safe baseline (leave one core free for OS tasks):

  ```shell
  export WORKER_CONCURRENCY=$(($(nproc) - 1))
  ```

- Benchmark with the built‑in tester (requires n8n-cli):

  ```shell
  n8n test:performance --duration 60 --workers $WORKER_CONCURRENCY
  ```

Review the “avg latency” and “max concurrent executions” columns. If latency plateaus, you’ve hit CPU saturation.
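The two sizing constraints, cores and per-worker memory, can be combined in one sketch. This uses the ~150 MB-per-worker estimate from section 3.1; MEM_MB is hardcoded here, and on Linux you could derive it instead with free -m:

```shell
# Bound WORKER_CONCURRENCY by both CPU (cores minus one, as above) and RAM
# (available MB divided by ~150 MB per worker). Tune for your workloads.
CORES=$(nproc 2>/dev/null || sysctl -n hw.logicalcpu)
MEM_MB=1024   # substitute: free -m | awk '/^Mem:/{print $7}'
BY_CPU=$((CORES - 1))
BY_RAM=$((MEM_MB / 150))
WORKER_CONCURRENCY=$(( BY_CPU < BY_RAM ? BY_CPU : BY_RAM ))
[ "$WORKER_CONCURRENCY" -lt 1 ] && WORKER_CONCURRENCY=1   # floor of 1
echo "WORKER_CONCURRENCY=$WORKER_CONCURRENCY"
```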
3.3 Managing MAX_BINARY_DATA_SIZE
Large file uploads (e.g., PDF processing) can exceed the default 10 MB limit, causing Binary data too large errors.
Step‑by‑step increase
```shell
# Decide a safe ceiling based on storage budget.
MAX_SIZE=200   # MB

# Export the variable for the current session.
export MAX_BINARY_DATA_SIZE=$MAX_SIZE

# Restart n8n to apply the change.
docker restart n8n
```
Tip: When using PostgreSQL, make sure your database storage budget has headroom for the new ceiling. Some managed DB services cap bytea columns at 1 GB; stay well below that.
4. Troubleshooting Checklist
| Symptom | Likely variable | Diagnostic step | Fix |
|---|---|---|---|
| Frequent “Execution timed out” | EXECUTIONS_TIMEOUT set too low for long-running workflows | Search logs for Execution timeout messages. | Set EXECUTIONS_TIMEOUT=300 (or an appropriate number of seconds). |
| High CPU usage, occasional spikes | EXECUTIONS_PROCESS=queue with too many workers | top/htop shows many node processes. | Reduce WORKER_CONCURRENCY or switch back to main. |
| “Binary data too large” errors | MAX_BINARY_DATA_SIZE too low | Look for Binary data too large in logs. | Increase the value; verify DB storage capacity. |
| Log spam slowing I/O | N8N_LOG_LEVEL=debug | Disk I/O spikes; log file size grows rapidly. | Change to warn or error. |
| Container restarts (OOM) | Combined high WORKER_CONCURRENCY + large MAX_BINARY_DATA_SIZE | docker logs shows Out of memory. | Lower concurrency, increase host RAM, or enable swap (not recommended for production). |
4a. Debug Decision Tree – “If This Symptom, Check This Variable”
Most devs get stuck here because they see a symptom and start guessing. Use this tree to go straight to the variable causing the problem.
🔍 Start: What symptom are you seeing?
Webhook URLs show localhost:5678?
→ Check WEBHOOK_URL is set to your full public URL
→ Check N8N_PROXY_HOPS=1 if behind a reverse proxy
→ Restart required after change
All credentials invalid after redeployment?
→ N8N_ENCRYPTION_KEY changed or missing
→ If key is lost: no recovery — all credentials must be re-entered
→ Fix: always persist this key in a secret manager
Workflow execution stuck / never completes?
→ Check EXECUTIONS_TIMEOUT — if set to 0, there is no timeout
→ Check docker stats — is the container OOM?
→ Check docker logs n8n | grep "Execution" for stuck job IDs
Container keeps restarting (OOM crash)?
→ Check WORKER_CONCURRENCY — each worker = ~150 MB RAM
→ Check MAX_BINARY_DATA_SIZE — is it set higher than available RAM?
→ Fix: reduce WORKER_CONCURRENCY to nproc - 1
New n8n container can’t connect to database?
→ Check DB_POSTGRESDB_HOST — is it the service name in Docker Compose?
→ Check DB_POSTGRESDB_PASSWORD matches the DB user
→ Check DB_POSTGRESDB_SSL_ENABLED — managed DBs often require TLS
Execution history shows nothing / clears too fast?
→ Check EXECUTIONS_DATA_SAVE_ON_SUCCESS — may be set to none
→ Check EXECUTIONS_DATA_PRUNE and EXECUTIONS_DATA_MAX_AGE
4b. Real Log Patterns and What They Mean
Here are the actual log lines you’ll see in production and exactly which variable causes each one:
Error: Binary data too large
```text
Error: Binary data of size 45.2 MB exceeds the limit of 10 MB.
    at BinaryDataManager.storeBinaryData (/usr/local/lib/node_modules/n8n/node_modules/...
```
→ Variable to set: MAX_BINARY_DATA_SIZE=50 (or higher). Restart the container after.
Webhook returning localhost in production
```text
# Webhook URL shown in editor UI:
http://localhost:5678/webhook/abc123

# External service logs:
curl: (7) Failed to connect to localhost port 5678: Connection refused
```
→ Variable to set: WEBHOOK_URL=https://n8n.yourdomain.com/ — this overrides the auto-calculated URL entirely.
Credentials decryption failure after redeployment
```text
Error: The credentials for the node could not be decrypted.
Could not decrypt credential data for "My Gmail Account"
    at Credentials.getData (/usr/local/lib/node_modules/n8n/...
```
→ Variable that changed: N8N_ENCRYPTION_KEY. The new container generated a different key. Restore the original key value and restart.
Container OOM — too many workers
```text
# docker logs output:
[2026-02-01T14:22:11.334Z] fatal error: Reached heap limit
Allocation failed - JavaScript heap out of memory
Killed

# docker inspect shows:
"OOMKilled": true
```
→ Variable to reduce: WORKER_CONCURRENCY. Each worker uses ~150 MB of RAM. On a 1 GB VPS, max safe value is 4.
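The four patterns above lend themselves to a tiny grep-based triage helper. A sketch, assuming your logs contain the exact strings shown in this section (adjust the patterns for your n8n version):

```shell
# Map known log patterns to the variable to inspect first.
triage() {
  if   grep -q  "exceeds the limit"              "$1"; then echo "MAX_BINARY_DATA_SIZE"
  elif grep -q  "could not be decrypted"         "$1"; then echo "N8N_ENCRYPTION_KEY"
  elif grep -q  "heap out of memory"             "$1"; then echo "WORKER_CONCURRENCY"
  elif grep -Eqi "execution (timed out|timeout)" "$1"; then echo "EXECUTIONS_TIMEOUT"
  else echo "no known pattern"
  fi
}

# Try it on a sample OOM log:
printf 'Allocation failed - JavaScript heap out of memory\n' > sample.log
triage sample.log
# → WORKER_CONCURRENCY
```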
5. Performance Validation – How to Measure the Impact?
5.1 Capture a baseline
```shell
n8n test:performance --duration 300 --workers 1 > baseline.txt
```
5.2 Apply variable changes
Edit your deployment (Docker‑Compose, Helm values, or .env) and restart the service.
5.3 Re‑run the test
```shell
n8n test:performance --duration 300 --workers $WORKER_CONCURRENCY > after.txt
```
5.4 Compare key metrics
| Metric | Baseline | After change | Δ% |
|---|---|---|---|
| Avg latency (ms) | 120 | 85 | ‑29% |
| Max concurrent executions | 3 | 7 | +133% |
| CPU avg % | 45 | 70 | +55% (ensure within host limits) |
| Memory usage (MiB) | 512 | 780 | +52% (watch for OOM) |
Interpretation: If latency improves without breaching CPU/memory caps, the tuning is successful. Otherwise, iterate on WORKER_CONCURRENCY or revert to main.
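The Δ% column above is plain percent change; a one-line helper reproduces it from any baseline/after pair:

```shell
# Percent change, rounded to a whole percent with an explicit sign.
delta() { awk -v b="$1" -v a="$2" 'BEGIN { printf "%+.0f%%\n", (a - b) / b * 100 }'; }

delta 120 85   # avg latency: 120 ms baseline → 85 ms after
# → -29%
delta 3 7      # max concurrent executions: 3 → 7
# → +133%
```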
EEFA caution: Never run load tests against the live production database. Clone it (e.g., pg_dump / pg_restore) to a staging instance and point the test runner there.
6. Frequently Asked Questions
Do I need to restart n8n after changing environment variables?
Yes, always. n8n reads all environment variables once at startup and caches them in memory. Changing a variable in your Docker Compose file or .env has no effect until you restart the container. In queue mode, you must restart both the main process and all worker processes for the change to take effect everywhere.
What happens if I lose my N8N_ENCRYPTION_KEY?
Every credential stored in the database becomes permanently unreadable. n8n has no recovery mechanism — the encryption is one-way without the key. You will have to re-enter every API key, OAuth token, and credential manually. Prevention: generate the key with openssl rand -hex 32 before first boot and store it in a secret manager immediately.
What is the difference between WEBHOOK_URL and N8N_HOST?
N8N_HOST sets the hostname n8n binds to and uses as part of its auto-calculated URL formula. WEBHOOK_URL overrides the entire auto-calculated URL. When running behind a reverse proxy, always use WEBHOOK_URL — it’s the more direct and reliable fix. Set both for maximum clarity, but WEBHOOK_URL is what actually controls what shows up in the editor UI and what gets registered with external services.
Can I use environment variables without Docker?
Yes. Create a .env file in your project root and n8n will load it via dotenv on startup. You can also export variables directly in your shell session before running n8n start. Shell exports take precedence over the .env file, which in turn takes precedence over n8n’s built-in defaults.
How do I securely pass secrets like DB passwords in Docker?
Append _FILE to supported variable names and point the value to a mounted secret file. For example: DB_POSTGRESDB_PASSWORD_FILE=/run/secrets/db_password. This works with Docker Secrets and Kubernetes Secrets — the value is read from the file at startup rather than stored as a plaintext environment variable, which protects it from being visible in docker inspect output.
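A minimal Compose sketch of that pattern; the file paths and service layout are illustrative:

```yaml
services:
  n8n:
    image: n8nio/n8n:latest
    environment:
      # n8n reads the password from the mounted secret file at startup
      - DB_POSTGRESDB_PASSWORD_FILE=/run/secrets/db_password
    secrets:
      - db_password

secrets:
  db_password:
    file: ./secrets/db_password.txt   # keep out of version control
```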
Why does n8n say “SQLite” even though I set DB_TYPE=postgresdb?
The variable change wasn’t applied before n8n started. This usually happens when the environment: block in Docker Compose has an indentation error, or when the container was started before the Compose file was saved. Run docker compose config to verify the variables are being read correctly, then do a full docker compose down && docker compose up -d.
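The indentation mistake usually looks like this (illustrative fragments, shown as two separate YAML documents):

```yaml
# WRONG — environment: is a sibling of the service, so Compose ignores it:
services:
  n8n:
    image: n8nio/n8n:latest
  environment:
    - DB_TYPE=postgresdb
---
# RIGHT — environment: is nested under the service:
services:
  n8n:
    image: n8nio/n8n:latest
    environment:
      - DB_TYPE=postgresdb
```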
What is the n8n container name environment variable?
In Docker Compose, the container name is controlled by the container_name: directive, not an n8n environment variable. However, when n8n containers need to reference each other (for example, a worker referencing the main process), they use the Docker Compose service name as the hostname. The relevant n8n variable is N8N_HOST, which tells n8n what hostname to report externally.
Conclusion
Fine‑grained control of n8n’s environment variables is the quickest lever to extract performance gains.
- Choose the right execution model – main for low‑overhead, queue for isolation and parallelism.
- Size WORKER_CONCURRENCY to the number of physical CPU cores, then benchmark.
- Raise MAX_BINARY_DATA_SIZE only as far as your storage budget and DB limits allow.
- Trim logging (N8N_LOG_LEVEL) and disable profiling in production to reduce I/O.
Validate every change with the built‑in performance tester, watch CPU/memory metrics, and you’ll keep latency low while avoiding resource exhaustion. This disciplined, variable‑first approach delivers real‑world, production‑grade speed for any n8n deployment.



