
**Who this is for:** n8n admins and DevOps engineers who see “Queue mode timeout” failures in production and need a reliable, repeatable fix. We cover this in detail in the n8n Queue Mode Errors Guide.
## Quick Diagnosis

| Symptom | Root cause | One-line fix |
|---|---|---|
| Workflows abort after a few minutes with “Queue mode timeout” | The default queue-mode timeout (`EXECUTE_TIMEOUT`) is reached | Raise the timeout via an env var or `config.yml` and restart n8n |

> **EEFA note** – Extending timeouts without adjusting concurrency can starve other workers and lead to OOM crashes. Pair a higher timeout with proper resource limits.
## What Triggers the “Queue mode timeout” in n8n?
| Trigger | Default (seconds) | Scope |
|---|---|---|
| `EXECUTE_TIMEOUT` | 300 (5 min) | All queued executions |
| `EXECUTE_TIMEOUT_MAX` | 3600 (1 h) | Hard ceiling – cannot be exceeded |
| Worker-level timeout (ms) | 600 000 (10 min) | Internal guard in `worker.ts` |
The effective timeout is the lowest of the three values. When a queued run exceeds it, the worker aborts the execution and logs a “Queue mode timeout” error.
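As a minimal sketch of how the lowest value wins, the defaults from the table above can be combined like this (values are hard-coded for illustration; n8n performs this comparison internally):

```bash
# Pick the lowest of the three limits (all compared in seconds)
EXECUTE_TIMEOUT=300          # instance default
EXECUTE_TIMEOUT_MAX=3600     # hard ceiling
WORKER_TIMEOUT_MS=600000     # worker-level guard, in milliseconds

effective=$(( EXECUTE_TIMEOUT < EXECUTE_TIMEOUT_MAX ? EXECUTE_TIMEOUT : EXECUTE_TIMEOUT_MAX ))
worker_s=$(( WORKER_TIMEOUT_MS / 1000 ))
if [ "$worker_s" -lt "$effective" ]; then
  effective=$worker_s
fi
echo "effective timeout: ${effective}s"
```

With the defaults, the instance-wide 300 s wins, which is why raising only `EXECUTE_TIMEOUT_MAX` has no visible effect.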
## Extending the Queue-Mode Timeout
### 1️⃣ Choose the configuration layer
| Layer | How to set | Scope | Typical use-case |
|---|---|---|---|
| Environment variable (`EXECUTE_TIMEOUT`) | `export EXECUTE_TIMEOUT=900` (or add to `.env`) | Instance-wide | Quick test or container-based deployments |
| `config.yml` (`executeTimeout`) | `executeTimeout: 900` | Instance-wide, version-controlled | Production with IaC |
| Workflow-level (Execute node → *Timeout* field) | Enter seconds in the node UI | Single workflow | Only a few long-running jobs need more time |
> **EEFA tip** – Workflow-level timeout cannot exceed `EXECUTE_TIMEOUT_MAX`. Raise the max first if you need more than 1 hour.
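If your deployment loads a `config.yml` as described in the table, the file-based equivalent is a one-line fragment (the key name is taken from the table above; the file's location depends on how your instance is configured):

```yaml
# config.yml – instance-wide queue-mode timeout, in seconds
executeTimeout: 900
```

Version-controlling this file keeps the timeout change reviewable alongside the rest of your IaC.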
### 2️⃣ Update the environment (Docker example)
Stop the existing container:

```bash
docker stop n8n
```

Add or modify the `.env` file:

```bash
cat >> ./n8n/.env <<EOF
# Extend queue-mode timeout to 15 minutes
EXECUTE_TIMEOUT=900
# Optional: raise the hard ceiling to 2 hours
EXECUTE_TIMEOUT_MAX=7200
EOF
```

Restart the stack:

```bash
docker compose up -d n8n
```
### 3️⃣ Verify the new limits
```bash
docker exec -it n8n bash -c \
  'node -e "console.log(process.env.EXECUTE_TIMEOUT, process.env.EXECUTE_TIMEOUT_MAX)"'
```
You should see `900 7200` (or the values you set). The worker will now honor the new timeout.
## Monitoring the New Timeout in Action
| Tool | What to watch | Alert threshold |
|---|---|---|
| n8n UI → Execution List | “Timed out” status | — |
| Prometheus exporter (`n8n_queue_timeout_seconds`) | Histogram of timeout durations | More than 80 % of runs exceed 75 % of the new limit |
| Log aggregation (Logtail / Loki) | “Queue mode timeout” entries | Spike above 5/min |
> **EEFA warning** – A sudden rise in timeout alerts often signals a downstream API slowdown. Investigate external dependencies before raising limits further.
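As a sketch of what such a log-based alert evaluates, the check below counts “Queue mode timeout” entries in a sample log slice (the log format, timestamps, and file path are illustrative assumptions, not n8n's actual log layout):

```bash
# Build a small sample log slice, then count timeout entries in it
log=$(mktemp)
cat > "$log" <<'EOF'
2024-05-01T10:00:01 info  Execution 101 started
2024-05-01T10:00:09 error Queue mode timeout
2024-05-01T10:00:12 error Queue mode timeout
2024-05-01T10:00:20 info  Execution 102 finished
EOF

count=$(grep -c 'Queue mode timeout' "$log")
echo "timeout entries this window: $count"   # alert when this exceeds ~5/min
rm -f "$log"
```

A real Loki or Logtail alert applies the same count-over-window logic server-side; the threshold above mirrors the table's 5/min guideline.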
## Common Pitfalls & How to Avoid Them
| Pitfall | Why it happens | Fix |
|---|---|---|
| Timeout still fires after change | Old container still running or env not reloaded | Re-deploy the container or run `pm2 restart n8n` if using PM2 |
| Timeout exceeds `EXECUTE_TIMEOUT_MAX` | `EXECUTE_TIMEOUT` set higher than the hard ceiling | Raise `EXECUTE_TIMEOUT_MAX` first, then adjust `EXECUTE_TIMEOUT` |
| Only some workflows respect the new limit | Workflow-level timeout overrides the instance value | Remove per-workflow timeout or set it higher |
| Resource exhaustion | Very long timeouts keep workers busy, starving others | Lower `WORKER_CONCURRENCY` or enable rate limiting on heavy nodes |
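The second pitfall can be caught before rollout with a small pre-flight check (the variable names follow this guide; the proposed value is illustrative):

```bash
# Refuse a proposed EXECUTE_TIMEOUT that exceeds the hard ceiling
proposed=900
ceiling="${EXECUTE_TIMEOUT_MAX:-3600}"   # fall back to the documented default

if [ "$proposed" -gt "$ceiling" ]; then
  echo "Raise EXECUTE_TIMEOUT_MAX (currently ${ceiling}s) before setting EXECUTE_TIMEOUT=${proposed}" >&2
  exit 1
fi
echo "OK: ${proposed}s is within the ${ceiling}s ceiling"
```

Dropping a check like this into your deploy pipeline turns a silent misconfiguration into a failed build.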
## Production-Grade Checklist Before Raising Timeouts
- Benchmark current average execution time (`n8n_execution_duration_seconds`).
- Confirm downstream services can sustain longer calls (check API rate limits).
- Adjust `WORKER_CONCURRENCY` to avoid queue buildup (`WORKER_CONCURRENCY=2` for heavy jobs).
- Temporarily enable detailed logs (`EXECUTE_LOG_LEVEL=debug`) to capture hidden errors.
- Set up alerts on `n8n_queue_timeout_seconds` and on CPU/memory usage.
- Document the new timeout in your runbook, including rollback steps.
## Frequently Asked Questions
**Q1. Can I set an unlimited timeout?**

No. Every run is bounded by the hard ceiling `EXECUTE_TIMEOUT_MAX` (3600 s out of the box, per the table above). Raising it beyond roughly 24 hours is discouraged because it defeats queue mode's purpose of keeping jobs short and recoverable.
**Q2. Does the timeout apply to “Execute Workflow” nodes that call external APIs?**
Yes. The timeout covers the entire node execution, including all internal HTTP requests. A hanging API call will be terminated once the timeout expires.
**Q3. How does this differ from the “max execution limit” error?**
The max execution limit (EXECUTE_MAX_EXECUTIONS) caps the number of executions a single workflow can trigger, whereas the timeout caps duration. Both can appear together if a workflow spawns many long‑running sub‑workflows.
## Conclusion
Increasing `EXECUTE_TIMEOUT` (and optionally `EXECUTE_TIMEOUT_MAX`) resolves the “Queue mode timeout” error, but it must be done with production safeguards: monitor the change, adjust worker concurrency, and keep an eye on downstream services. By following the checklist and alerts above, you can extend workflow runtimes safely while preserving overall system stability.



