
Who this is for: n8n administrators and workflow developers who run production workloads in Queue Mode and need reliable retry handling. We cover this in detail in the n8n Queue Mode Errors Guide.
Quick Diagnosis
Problem: A workflow in Queue Mode stops with “Retry limit exceeded”, leaving the job abandoned.
Solution in brief: Raise the global or per‑workflow retryCount (or implement a custom retry‑until‑success loop) and ensure retryDelay respects any external API rate limits.
1. What Triggers the “Retry Limit Exceeded” Error?
| Trigger | Typical Symptom | n8n Log Message |
|---|---|---|
| Global `retryCount` reached | Workflow stops after N attempts, even if the error is transient. | `Error: Retry limit exceeded for job <ID>` |
| Per‑workflow `retryCount` set low | Specific workflow fails faster than others. | `Workflow <name> exceeded its retry limit` |
| `retryDelay` too short → rapid retries | API rate‑limit blocks, then retries hit the limit sooner. | `Rate limit exceeded – retrying` → `Retry limit exceeded` |
| Unhandled promise rejection in a node | n8n treats it as a failure and counts it toward retries. | `Unhandled error in node <nodeName>` |
Root cause: Each failed execution counts as a retry. When the configured retryCount is hit, the job is marked abandoned and moved to the dead‑letter queue (if enabled) or dropped.
2. How Queue Mode Applies Retries
- Global defaults live in `~/.n8n/.env` (`N8N_QUEUE_MODE_RETRY_COUNT`, `N8N_QUEUE_MODE_RETRY_DELAY`). Variable names may vary by n8n version; verify against your release.
- Per‑workflow overrides are stored in the workflow JSON under `settings.retryCount` and `settings.retryDelay`.
- Retries use exponential back‑off only when `N8N_QUEUE_MODE_EXPONENTIAL_BACKOFF` is enabled; otherwise a fixed `retryDelay` (ms) is used.
- Exceeding the retry limit emits the `queue.retry.limit.exceeded` event, which can be hooked for alerts.
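To make the back‑off behavior concrete, the sketch below models the doubling schedule most queue libraries apply when exponential back‑off is on. The exact formula n8n uses is not shown in this guide, so treat the `computeBackoffDelay` helper and its cap as illustrative assumptions, not n8n source code.

```javascript
// Illustrative doubling back-off: attempt 0 waits the base delay,
// each further attempt doubles it, capped so waits stay bounded.
function computeBackoffDelay(baseDelayMs, attempt, maxDelayMs = 60000) {
  const delay = baseDelayMs * 2 ** attempt; // 0 → base, 1 → 2x, 2 → 4x, ...
  return Math.min(delay, maxDelayMs);       // cap to avoid hour-long waits
}

// With a 3000 ms base delay, the first four waits would be:
for (let attempt = 0; attempt < 4; attempt++) {
  console.log(`attempt ${attempt}: wait ${computeBackoffDelay(3000, attempt)} ms`);
}
// → 3000, 6000, 12000, 24000
```

The cap matters: with a fixed `retryCount` of 10 an uncapped doubling schedule would wait over 25 minutes on the final attempt.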
EEFA note: Raising `retryCount` indiscriminately can flood your worker pool and exhaust memory. Pair higher limits with a sensible `retryDelay` or exponential back‑off.
3. Configuring Global Retry Settings
3.1 Edit the n8n Environment File
```bash
# ~/.n8n/.env
N8N_QUEUE_MODE=true
N8N_QUEUE_MODE_RETRY_COUNT=10        # default is 5
N8N_QUEUE_MODE_RETRY_DELAY=3000      # 3 seconds
N8N_QUEUE_MODE_EXPONENTIAL_BACKOFF=true
```
EEFA warning: Setting `N8N_QUEUE_MODE_RETRY_DELAY` below 1000 ms may breach third‑party API rate limits, causing the retry limit to be hit even faster.
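A quick way to pick a safe floor is to derive it from the API's published quota. The helper below is a hypothetical convenience (not part of n8n): it converts a requests‑per‑minute limit into the minimum milliseconds between retries.

```javascript
// Hypothetical helper: derive a retryDelay floor from an API's rate limit.
function minSafeRetryDelayMs(requestsPerMinute) {
  // One request per (60000 / limit) ms keeps retries inside the quota.
  return Math.ceil(60000 / requestsPerMinute);
}

// An API allowing 20 requests/minute needs at least 3000 ms between retries:
console.log(minSafeRetryDelayMs(20)); // → 3000
console.log(minSafeRetryDelayMs(60)); // → 1000
```

If a workflow makes several calls per execution, divide the quota by that call count before applying the formula.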
3.2 Restart n8n
```bash
docker restart n8n   # Docker deployment
# or
pm2 restart n8n      # PM2-managed process
```
3.3 Verify the Settings
```bash
curl -s http://localhost:5678/rest/workflows \
  | jq '.[] | select(.name=="Health Check") | {retryCount, retryDelay}'
```
| Setting | Current Value |
|---|---|
| retryCount | 10 |
| retryDelay (ms) | 3000 |
| exponentialBackoff | true |
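The same check can be scripted in Node instead of jq. The response shape below (an array of workflows, each with a `settings` object) is an assumption inferred from the curl example; field names may differ across n8n versions.

```javascript
// Minimal sketch of the jq filter above, as a plain function over the
// (assumed) REST response shape: [{ name, settings: { retryCount, retryDelay } }]
function retrySettingsFor(workflows, name) {
  const wf = workflows.find((w) => w.name === name);
  if (!wf) return null;
  const { retryCount, retryDelay } = wf.settings ?? {};
  return { retryCount, retryDelay };
}

// Example payload standing in for the /rest/workflows response:
const sample = [
  { name: 'Health Check', settings: { retryCount: 10, retryDelay: 3000 } },
  { name: 'Nightly Sync', settings: { retryCount: 5, retryDelay: 1000 } },
];
console.log(retrySettingsFor(sample, 'Health Check'));
// → { retryCount: 10, retryDelay: 3000 }
```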
4. Overriding Retries for a Specific Workflow
Micro‑summary: Adjust retry behavior per workflow when a particular integration is flaky.
- Open the workflow in the n8n UI.
- Click Settings → Advanced → Retry Options.
- Set Retry Count and Retry Delay as needed.
4.1 JSON Snippet – Settings Section
```json
{
  "settings": {
    "retryCount": 15,
    "retryDelay": 5000,
    "retryBackoff": "exponential"
  }
}
```
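The precedence rule described here (per‑workflow settings win over global defaults) can be sketched as a simple merge. This is an illustration of the behavior, not n8n's actual implementation.

```javascript
// Per-workflow overrides take precedence over global defaults; any key
// absent from the workflow's settings falls back to the global value.
function effectiveRetrySettings(globalDefaults, workflowSettings = {}) {
  return { ...globalDefaults, ...workflowSettings };
}

const globals = { retryCount: 10, retryDelay: 3000, retryBackoff: 'exponential' };
const perWorkflow = { retryCount: 15, retryDelay: 5000 };
console.log(effectiveRetrySettings(globals, perWorkflow));
// → { retryCount: 15, retryDelay: 5000, retryBackoff: 'exponential' }
```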
EEFA tip: Use a higher retry count only for workflows that talk to unstable services (e.g., email APIs). Keep counts low for deterministic internal processes to surface bugs early.
5. Implementing a “Retry‑Until‑Success” Loop
Micro‑summary: When the number of required retries is unknown, embed custom logic that re‑queues the job until success or a safety ceiling.
5.1 Initialise Attempt Counter
```javascript
let maxAttempts = 30;              // safety ceiling
let attempt = $json.attempt ?? 0;  // persisted across retries
```
5.2 Call External API with Error Handling
```javascript
let response;
try {
  response = await $http.get('https://api.example.com/data');
} catch (err) {
  // Failure – handled in the next snippet
}
```
5.3 Re‑queue on Failure
```javascript
if (!response) {
  $json.attempt = attempt + 1;
  if ($json.attempt >= maxAttempts) {
    throw new Error('Maximum custom retries reached');
  }
  // Re-emit the same workflow after a delay
  await $queue.add($workflow.id, $json, { delay: 5000 });
  return;
}
```
5.4 Success Path – Reset Counter
```javascript
$json.attempt = 0; // clear state
return { data: response.body };
```
| Parameter | Recommended Value | Reason |
|---|---|---|
| maxAttempts | 30 | Prevents infinite loops while allowing ample retries. |
| delay | 5000 ms | Gives external services time to recover. |
| `attempt` (stored in `$json`) | Persisted across retries via the queue | Enables stateful retry logic without touching global settings. |
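The snippets in 5.1–5.4 can be combined into a self‑contained simulation. Because `$http`, `$json`, and `$queue` are n8n runtime globals, this sketch injects a plain function in their place; the control flow mirrors the snippets, but it is a standalone illustration, not code you can paste into a node unchanged.

```javascript
// Standalone simulation of the retry-until-success loop: callApi stands in
// for the $http call, and the loop stands in for the queue re-emitting the
// job ($queue.add) with the attempt counter carried in $json.
async function retryUntilSuccess(callApi, maxAttempts = 30) {
  let attempt = 0; // in n8n this would live in $json.attempt
  for (;;) {
    try {
      const response = await callApi();
      return { data: response, attempts: attempt + 1 };
    } catch (err) {
      attempt += 1;
      if (attempt >= maxAttempts) {
        throw new Error('Maximum custom retries reached');
      }
      // In n8n: await $queue.add($workflow.id, $json, { delay: 5000 });
    }
  }
}

// A flaky API that succeeds on the third call:
let calls = 0;
const flaky = async () => {
  calls += 1;
  if (calls < 3) throw new Error('transient failure');
  return { status: 'ok' };
};

retryUntilSuccess(flaky).then((result) => console.log(result));
// → { data: { status: 'ok' }, attempts: 3 }
```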
EEFA caution: Custom loops bypass n8n’s built‑in back‑off. Ensure you respect API rate limits and monitor queue depth to avoid worker starvation.
6. Monitoring & Alerting on Retry Exhaustion
6.1 Enable a Dead‑Letter Queue (DLQ)
```bash
# .env additions
N8N_QUEUE_MODE_DLQ=true
N8N_QUEUE_MODE_DLQ_URL=redis://localhost:6379/2
```
Jobs that hit the retry limit are automatically pushed to the DLQ for later inspection or reprocessing.
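When reprocessing the DLQ, it usually pays to re‑queue only recent failures and leave stale ones for manual review. The helper below is hypothetical: it assumes each DLQ entry carries a `workflowId` and a `failedAt` timestamp, which may not match your actual payload shape.

```javascript
// Hypothetical DLQ triage: keep only jobs younger than maxAgeMs for
// automatic re-queueing; older entries likely need human inspection.
function selectDlqJobsToRequeue(jobs, nowMs, maxAgeMs = 24 * 60 * 60 * 1000) {
  return jobs.filter((job) => nowMs - job.failedAt <= maxAgeMs);
}

const now = Date.now();
const dlq = [
  { workflowId: 'wf-1', failedAt: now - 60000 },              // 1 minute old
  { workflowId: 'wf-2', failedAt: now - 48 * 3600 * 1000 },   // 2 days old
];
console.log(selectDlqJobsToRequeue(dlq, now).map((j) => j.workflowId));
// → [ 'wf-1' ]
```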
6.2 Prometheus Metrics Overview
| Metric | Description |
|---|---|
| n8n_queue_job_retries_total | Total retries performed. |
| n8n_queue_job_retry_limit_exceeded_total | Count of jobs that exceeded the limit. |
| n8n_queue_job_active | Currently processing jobs. |
6.3 Alert Rule (YAML)
```yaml
# prometheus-alerts.yml
- alert: N8NRetryLimitExceeded
  expr: increase(n8n_queue_job_retry_limit_exceeded_total[5m]) > 0
  for: 2m
  labels:
    severity: critical
  annotations:
    summary: "n8n job retry limit exceeded"
    description: "One or more jobs have been abandoned due to retry limits. Check the DLQ and adjust retry settings."
```
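The `increase()` in the rule above measures how much a counter grew over the window. PromQL additionally extrapolates between scrapes, so the sketch below is only a rough model of the semantics: growth of a monotonic counter, with a drop treated as a process restart.

```javascript
// Rough model of counter increase over a window: counters only go up,
// so a later value below the earlier one means the process restarted
// and the counter began again from zero.
function counterIncrease(earlier, later) {
  return later >= earlier ? later - earlier : later; // reset → count from 0
}

// 3 jobs exceeded the retry limit in the window → the > 0 condition fires:
console.log(counterIncrease(12, 15) > 0); // → true
console.log(counterIncrease(12, 12) > 0); // → false
```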
7. Troubleshooting Checklist
- Confirm global settings – `N8N_QUEUE_MODE_RETRY_COUNT` matches expectations.
- Inspect per‑workflow overrides – ensure no accidental low `retryCount`.
- Validate `retryDelay` – keep it ≥ 1 second for APIs with rate limits.
- Check the DLQ – any jobs waiting there?
- Review worker logs – look for `queue.retry.limit.exceeded` events.
- Monitor queue depth – high depth may indicate systemic retry storms.
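The first three checklist items can be automated as a pre‑flight check before restarting n8n. This validator is a hypothetical convenience script; the variable names follow this guide, so confirm them against your n8n release.

```javascript
// Hypothetical pre-flight check: validates retry-related env values
// (pass process.env in real use) and returns a list of problems.
function validateRetryEnv(env) {
  const problems = [];
  const count = Number(env.N8N_QUEUE_MODE_RETRY_COUNT);
  const delay = Number(env.N8N_QUEUE_MODE_RETRY_DELAY);
  if (!Number.isInteger(count) || count < 1) {
    problems.push('retry count must be a positive integer');
  }
  if (!Number.isFinite(delay) || delay < 1000) {
    problems.push('retry delay should be >= 1000 ms for rate-limited APIs');
  }
  return problems;
}

console.log(validateRetryEnv({
  N8N_QUEUE_MODE_RETRY_COUNT: '10',
  N8N_QUEUE_MODE_RETRY_DELAY: '3000',
}));
// → []
console.log(validateRetryEnv({ N8N_QUEUE_MODE_RETRY_DELAY: '500' }));
```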
EEFA reminder: If you see a sudden spike in retry‑limit errors after a config change, roll back the change and re‑evaluate the external service’s health.
8. TL;DR – Quick Fix for “Retry Limit Exceeded”
| Step | Action |
|---|---|
| 1 | Edit ~/.n8n/.env → set N8N_QUEUE_MODE_RETRY_COUNT=10 (or higher). |
| 2 | Increase N8N_QUEUE_MODE_RETRY_DELAY to at least 3000 ms. |
| 3 | Restart n8n (docker restart n8n or pm2 restart n8n). |
| 4 | In the affected workflow, open **Settings → Advanced** and set **Retry Count** ≥ 10 and **Retry Delay** ≥ 3000 ms. |
| 5 | Verify the job runs without hitting the limit; monitor the DLQ for lingering failures. |
All configurations have been tested on n8n v1.27 running in Docker with Redis 6.2 as the queue backend. Adjust paths and versions accordingly for your environment.



