
Who this is for: n8n administrators and DevOps engineers running queue‑mode in production who need actionable logs to troubleshoot stuck or failed jobs. We cover this in detail in the n8n Queue Mode Errors Guide.
Quick Diagnosis & One‑Line Fix
Problem: n8n shows “Queue mode logging not enabled” and execution logs are missing for queued jobs.
Fix: Enable the logging flag and restart the service.
```bash
# Enable queue-mode logging
export N8N_QUEUE_MODE_LOGGING=true   # or add to .env / Docker env

# Restart n8n
docker restart n8n                   # or: systemctl restart n8n
```
After the restart, logs appear under ~/.n8n/queue.log (Docker → /root/.n8n/queue.log).
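As a first sanity check, you can confirm the log file exists and is non-empty before digging into its contents. This is a minimal sketch that assumes the default log path from this guide; the demo runs against a sample file so you can see the expected output:

```shell
#!/bin/sh
# Sanity-check helper: report whether a queue log exists and how many lines it has.
check_queue_log() {
  log="$1"
  if [ -s "$log" ]; then
    echo "ok: $(wc -l < "$log" | tr -d ' ') lines in $log"
  else
    echo "missing or empty: $log"
  fi
}

# Demo against a sample file; in production point it at ~/.n8n/queue.log.
printf '2024-10-12T14:03:27.123Z INFO Queue: Enqueued workflow "123"\n' > /tmp/queue-demo.log
check_queue_log /tmp/queue-demo.log   # prints: ok: 1 lines in /tmp/queue-demo.log
```

If the function reports "missing or empty" against your real log path, the flag is not reaching the n8n process — see §4 for common causes.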
1. Why Queue‑Mode Logging Matters
If your instance is affected by queue‑mode memory leaks, resolve them before continuing with this setup.
| Situation | Without Logging | With Logging |
|---|---|---|
| Job stays queued > 5 min | Only “queued” status in UI | Timestamped enqueue, dequeue, error events |
| Worker crashes unexpectedly | No stack trace, generic “worker stopped” | Full Node.js stack trace + n8n version |
| Scaling with Redis / BullMQ | No visibility of message consumption | Message IDs, retry counts, back‑off timings recorded |
EEFA note – In production, ship logs to a centralized system (e.g., Loki, Elasticsearch) and keep `N8N_LOG_LEVEL=debug` enabled only briefly. Persistent debug logs can expose workflow secrets.
2. Enabling Queue‑Mode Logging in Different Deployment Scenarios
2.1 Docker‑Compose (most common)
Step 1 – Add the flag to your compose file
```yaml
services:
  n8n:
    image: n8nio/n8n:latest
    environment:
      - N8N_QUEUE_MODE=true
      - N8N_QUEUE_MODE_LOGGING=true   # ← enable logging
```
Step 2 – Keep your existing runtime settings
```yaml
      - N8N_QUEUE_MODE_MAX_CONCURRENCY=5
      - N8N_LOG_LEVEL=info
    volumes:
      - ./n8n-data:/root/.n8n
    ports:
      - "5678:5678"
```
Step 3 – Re‑create the container
```bash
docker-compose up -d --force-recreate n8n
```
2.2 Kubernetes (Helm chart)
```bash
helm upgrade n8n n8n/n8n \
  --set env.N8N_QUEUE_MODE=true \
  --set env.N8N_QUEUE_MODE_LOGGING=true \
  --set env.N8N_LOG_LEVEL=info
```
Verify the pod is logging:
```bash
kubectl logs -f $(kubectl get pods -l app=n8n -o jsonpath="{.items[0].metadata.name}") -c n8n
```
2.3 Bare‑Metal / Systemd
Add the variables to the service file
```ini
[Service]
Environment="N8N_QUEUE_MODE=true"
Environment="N8N_QUEUE_MODE_LOGGING=true"
Environment="N8N_LOG_LEVEL=info"
```
Reload systemd and restart n8n
```bash
systemctl daemon-reload
systemctl restart n8n
```
3. Where the Logs Live & How to Read Them
| Deployment | Log File Path | Example Entry |
|---|---|---|
| Docker (default volume) | /root/.n8n/queue.log | 2024-10-12T14:03:27.123Z INFO Queue: Enqueued workflow “123” (executionId=456) |
| Kubernetes (stdout) | kubectl logs <pod> | 2024-10-12T14:03:27.123Z INFO Queue: Dequeued workflow “123” |
| Systemd | /var/log/n8n/queue.log | 2024-10-12 14:03:27,123 – INFO – Queue: Worker crashed – error: ECONNREFUSED |
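To find which of these paths applies on a given host, you can probe the defaults from the table in one pass. This sketch assumes the default locations listed above; adjust the list for custom volumes or mounts:

```shell
#!/bin/sh
# Probe the default queue.log locations from the table above; print the first hit.
found=""
for p in "$HOME/.n8n/queue.log" /root/.n8n/queue.log /var/log/n8n/queue.log; do
  if [ -f "$p" ]; then
    found="$p"
    break
  fi
done
echo "${found:-none}"   # path of the first log found, or "none"
```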
3.1 Log entry anatomy
```text
<timestamp> <level> Queue: <Action> workflow "<workflowId>" (executionId=<id>) [optional details]
```
- Action – Enqueued / Dequeued / Retried / Failed / WorkerStarted / WorkerStopped
- Optional details – `error`, `retryCount`, `backoffMs`, etc.
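Given that layout, the action and `executionId` can be pulled out of a line with plain awk. The sample entry below is reconstructed from the examples in the table in §3, so treat the exact field positions as an assumption and adjust for your n8n version:

```shell
#!/bin/sh
# Sample entry in the anatomy described above (field positions are an assumption).
line='2024-10-12T14:03:27.123Z INFO Queue: Failed workflow "123" (executionId=456) error=ECONNREFUSED'

# Extract the action (word after "Queue:") and the numeric executionId.
parsed=$(echo "$line" | awk '{
  for (i = 1; i <= NF; i++) {
    if ($i == "Queue:") action = $(i + 1)
    if ($i ~ /executionId=/) { id = $i; gsub(/[^0-9]/, "", id) }
  }
  print action, id
}')
echo "$parsed"   # prints: Failed 456
```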
3.2 Filtering for failures
```bash
grep -i "Failed" ~/.n8n/queue.log | tail -n 20
```
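Beyond tailing the newest failures, a short pipeline can summarize failure counts per workflow, which quickly shows whether one workflow dominates. A sketch against sample data (in production, run the same pipeline on your real `queue.log`; the field position assumes the entry anatomy in §3.1):

```shell
#!/bin/sh
# Summarize failure counts per workflow from sample queue.log entries.
cat > /tmp/queue-sample.log <<'EOF'
2024-10-12T14:03:27Z INFO Queue: Failed workflow "123" (executionId=456)
2024-10-12T14:04:02Z INFO Queue: Failed workflow "123" (executionId=457)
2024-10-12T14:05:11Z INFO Queue: Failed workflow "789" (executionId=458)
EOF

# Field 6 is the quoted workflow id in the entry format shown earlier.
grep -i "Failed" /tmp/queue-sample.log | awk '{print $6}' | sort | uniq -c | sort -rn
```

The output lists each workflow id with its failure count, highest first.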
3.3 Correlating with the UI Queue view
| UI Column | Log Field |
|---|---|
| Status | Action (Enqueued → Dequeued → Success/Failed) |
| Started At | Timestamp of Dequeued |
| Finished At | Timestamp of Success or Failed |
| Error Message | error payload in the log line |
4. Common Pitfalls & How to Fix Them
| Symptom | Likely Cause | Fix |
|---|---|---|
| No queue.log after enabling the flag | Variable not exported to the runtime | Run `docker exec n8n printenv \| grep N8N_QUEUE_MODE_LOGGING` to verify |
| Log file > 1 GB in 24 h | `N8N_LOG_LEVEL=debug` together with queue logging | Switch to `info` level and enable rotation (see §5) |
| “Permission denied” when reading the file | Volume mounted read‑only or wrong UID/GID | Ensure the container runs as UID 1000, or `chown -R 1000:1000 ~/.n8n` |
| Missing queue entries for some workflows | Workflow uses the Execute Workflow node with the runOnce flag, bypassing the queue | Disable runOnce or set `N8N_EXECUTE_WORKFLOW_MODE=queue` globally |
EEFA warning – Leaving `N8N_QUEUE_MODE_LOGGING=true` on a high‑traffic instance without rotation can fill the disk and crash n8n. Deploy log rotation or external aggregation immediately. If you encounter “queue retry limit exceeded” errors, resolve them before continuing with the setup.
5. Log Rotation & Retention (Production‑Ready)
Host‑side logrotate configuration (for Systemd or bind‑mounted Docker logs):
```text
/var/log/n8n/queue.log {
    daily
    rotate 14            # keep two weeks
    compress
    missingok
    notifempty
    copytruncate         # works with the running process
    create 0640 n8n n8n
    postrotate
        # Optional: signal n8n to reopen logs
        kill -USR1 $(cat /var/run/n8n.pid) 2>/dev/null || true
    endscript
}
```
Docker alternative – bind‑mount a host directory (e.g., ./logs:/root/.n8n) and let the host’s logrotate manage queue.log.
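The `copytruncate` directive matters because n8n keeps its log file handle open: a rotation that renames the file would leave the process writing to the rotated copy. The copy-then-truncate behavior can be simulated by hand (demo file names are arbitrary):

```shell
#!/bin/sh
# Simulate logrotate's copytruncate: copy the live log, then empty it in place,
# so the writing process keeps appending to the same (now empty) file.
echo "old entry" > /tmp/rot-demo.log
cp /tmp/rot-demo.log /tmp/rot-demo.log.1   # the "copy" step
: > /tmp/rot-demo.log                      # the "truncate" step
wc -c < /tmp/rot-demo.log                  # live file is now 0 bytes
grep -c "old entry" /tmp/rot-demo.log.1    # rotated copy keeps the history
```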
6. End‑to‑End Troubleshooting Checklist
1. Confirm `N8N_QUEUE_MODE=true`.
2. Verify `N8N_QUEUE_MODE_LOGGING=true` in the process environment.
3. Restart n8n cleanly (avoid hot‑reload).
4. Locate the appropriate `queue.log` for your deployment.
5. Tail the log while reproducing the issue: `tail -f queue.log`.
6. Search for `Failed` or `error` entries; note the `executionId`.
7. Cross‑reference the `executionId` in the UI → Executions → Details.
8. If logs are absent, inspect file permissions and Docker volume mounts.
9. Ensure log rotation is active to prevent disk‑full crashes.
10. Document recurring error patterns in your incident runbook.
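The first two checklist steps can be automated with a small preflight function. This is a sketch using the variable names from this guide; the demo exports the values so the output is deterministic, whereas in practice you would run it in the real n8n environment:

```shell
#!/bin/sh
# Preflight for checklist steps 1-2: verify both flags are set to "true".
preflight() {
  rc=0
  for var in N8N_QUEUE_MODE N8N_QUEUE_MODE_LOGGING; do
    eval "val=\${$var:-}"
    if [ "$val" = "true" ]; then
      echo "ok: $var=true"
    else
      echo "FAIL: $var='$val' (expected true)"
      rc=1
    fi
  done
  return $rc
}

export N8N_QUEUE_MODE=true N8N_QUEUE_MODE_LOGGING=true   # demo values
preflight
```

A non-zero exit code means at least one flag is missing, so the script drops into CI or a cron health check cleanly.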
7. Next Steps for Power Users
- Centralize logs – Ship `queue.log` to Loki/ELK and build a Grafana dashboard that visualizes Enqueued → Dequeued → Success/Failed latency per workflow.
- Alert on failures – Use a Prometheus exporter that watches the log file and fires alerts when the failure rate exceeds a configurable threshold.
- Enable structured JSON logging (experimental):
```bash
export N8N_QUEUE_MODE_LOGGING_JSON=true   # add to env
```
Parse the JSON output with `jq` or your SIEM for richer analytics.
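With structured JSON logging on, each line becomes a JSON object, so failed entries can be filtered by key even without extra tooling. The field names below (`action`, `executionId`) are illustrative assumptions, not a documented schema:

```shell
#!/bin/sh
# Filter failed entries from JSON-formatted queue logs.
# Field names ("action", "executionId") are assumed for illustration.
cat > /tmp/queue-json.log <<'EOF'
{"ts":"2024-10-12T14:03:27Z","action":"Enqueued","executionId":"456"}
{"ts":"2024-10-12T14:04:02Z","action":"Failed","executionId":"457","error":"ECONNREFUSED"}
EOF

grep '"action":"Failed"' /tmp/queue-json.log
```

Where `jq` is installed, `jq -c 'select(.action == "Failed")' /tmp/queue-json.log` does the same with real JSON parsing instead of string matching.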
Conclusion
Enabling N8N_QUEUE_MODE_LOGGING provides the missing visibility needed to diagnose queue stalls, worker crashes, and scaling anomalies. By directing logs to a predictable file, filtering for failures, and instituting rotation, you protect production stability while gaining actionable insights. Apply the checklist, keep logging levels appropriate, and integrate the logs into your centralized observability stack to ensure reliable, maintainable n8n queue‑mode operations.



