n8n 429 Too Many Requests – exponential backoff and retry fix

Step-by-Step Guide to Solving the n8n Rate-Limit Exceeded Error

Who this is for: n8n developers and automation engineers who run production‑grade workflows that call external APIs or rely on n8n’s internal execution queue.


Quick Diagnosis

Problem: Workflows stop with “Rate‑Limit Exceeded” (HTTP 429) when n8n makes too many calls to an external API or hits n8n’s own execution throttle.

Quick fix (30‑second solution):

  1. Insert a Delay node (or enable Rate Limit in the HTTP Request node) before the failing API call.
  2. Set the delay to respect the API’s reset time, e.g., 5 seconds.
  3. Turn on Retry on Fail with Exponential Backoff (max 3 retries).

Result: The workflow respects the remote service’s quota, avoids 429 errors, and resumes automatically.
For a production‑grade solution, read on for root‑cause analysis, configurable limits, and monitoring tips.


1. Why n8n Throws “Rate‑Limit Exceeded”

External API limits (e.g., GitHub, Stripe, Twilio): the provider returns HTTP 429 when the request quota is exceeded.
n8n internal throttling caps concurrent executions per worker (default = 5).
Database/cache saturation (e.g., Postgres connection‑pool exhaustion) can surface as 429‑like errors from the driver.

| Source | Typical Trigger |
| --- | --- |
| External API | X requests per minute/hour per token or IP |
| n8n internal queue | N concurrent executions on a single worker |
| DB/Cache (Postgres) | Connection‑pool exhaustion |
| Custom node limiter | `maxRequests` config breached |

EEFA note: APIs often reset counters at irregular intervals (e.g., every 60 s, hourly, or sliding windows). Ignoring the Retry-After header can lock your IP for hours.
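When the provider does send Retry-After, honoring it exactly beats guessing. The header carries either a delay in seconds or an HTTP date; here is a minimal sketch (the function name parseRetryAfter is my own, not an n8n built-in) that converts both forms into milliseconds to wait:

```javascript
// Convert a Retry-After header value into milliseconds to wait.
// Handles both forms: delta-seconds ("120") and an HTTP date
// ("Wed, 21 Oct 2015 07:28:00 GMT"). Falls back when absent or invalid.
function parseRetryAfter(headerValue, fallbackMs = 5000, now = Date.now()) {
  if (!headerValue) return fallbackMs;
  const seconds = Number(headerValue);
  if (Number.isFinite(seconds)) return Math.max(0, seconds * 1000);
  const date = Date.parse(headerValue);
  if (!Number.isNaN(date)) return Math.max(0, date - now);
  return fallbackMs;
}
```

In a Code node you would then pause with `await new Promise(r => setTimeout(r, parseRetryAfter(headers["retry-after"])));`.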


2. Diagnosing the Error in n8n

Step‑by‑step checklist

  1. Open the Workflow Execution view; look for a red badge with 429.
  2. Inspect response headers (X-RateLimit-Remaining, X-RateLimit-Reset, Retry-After).
  3. Enable Debug Mode (Settings → Execution) to capture raw HTTP payloads.
  4. Review Workflow Settings → Execution Mode – “Execute Once” vs “Queue” determines internal throttling.
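Step 2 of the checklist can be automated in a Code node. A sketch, assuming the upstream HTTP Request node is configured to return the full response (so each item carries `headers` and `statusCode`); the helper name is mine:

```javascript
// Extract rate-limit diagnostics from one HTTP Request item. Assumes the
// node returns the full response ({ headers, statusCode, ... } in item.json).
function extractRateLimitInfo(item) {
  const headers = item.json.headers ?? {};
  return {
    statusCode: item.json.statusCode,
    remaining: headers["x-ratelimit-remaining"],
    resetAt: headers["x-ratelimit-reset"],
    retryAfter: headers["retry-after"],
  };
}

// In an n8n Code node you would map it over the incoming items:
// return $input.all().map(item => ({ json: extractRateLimitInfo(item) }));
```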

EEFA tip: Pipe execution logs to an external monitoring system (e.g., Loki, Datadog) and set alerts on statusCode = 429.


3. Immediate Fixes (Step‑by‑Step)

3.1 Add a Built‑In Rate‑Limit Wrapper

Insert a Delay node before the offending HTTP Request, or add an equivalent pause in a Code node:

// Pause for 5 seconds before the next API call (Code node)
await new Promise(r => setTimeout(r, 5000));

3.2 Use the HTTP Request Node’s “Rate Limit” Feature

Configure the node to respect a safe request cadence:

  • Rate Limit: 10 req/min
  • Burst: 2 (allows short spikes)
  • Retry on Fail: Enabled
  • Retry Strategy: Exponential Backoff, max 3 retries
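The Rate Limit + Burst pair behaves like a token bucket. If you ever need the same pacing in a Code node (e.g., for a service the built-in option doesn't cover), a minimal sketch of that logic follows; the names are mine, not n8n's:

```javascript
// Minimal token bucket: `ratePerMin` tokens refill per minute, up to `burst`.
// take() returns 0 when a request may go now, otherwise the ms to wait.
function createTokenBucket(ratePerMin, burst, now = Date.now) {
  let tokens = burst;
  let last = now();
  return function take() {
    const t = now();
    tokens = Math.min(burst, tokens + ((t - last) / 60000) * ratePerMin);
    last = t;
    if (tokens >= 1) {
      tokens -= 1;
      return 0; // allowed immediately
    }
    return Math.ceil(((1 - tokens) / ratePerMin) * 60000); // wait this long
  };
}
```

Usage with the settings above: `const take = createTokenBucket(10, 2); const waitMs = take(); if (waitMs) await new Promise(r => setTimeout(r, waitMs));`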

3.3 Implement a Custom Retry Loop (Function Node)

When you need finer control, wrap the HTTP Request in a retry loop:

let attempt = 0;
let response;
while (attempt < 4) {
  response = await makeHttpRequest(); // your HTTP request
  if (response.statusCode !== 429) break;
  // Back off 1 s, 2 s, 4 s, 8 s between attempts
  await new Promise(r => setTimeout(r, Math.pow(2, attempt) * 1000));
  attempt++;
}
if (response.statusCode === 429) throw new Error("Rate limit still exceeded");
return response;
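A common refinement to this loop is adding random "full jitter" so that many executions retrying at once don't hit the API in lockstep. A small helper with names of my own choosing:

```javascript
// Full-jitter backoff: a random delay in [0, min(cap, base * 2^attempt)).
// `rng` is injectable for testing; defaults to Math.random.
function backoffWithJitter(attempt, baseMs = 1000, capMs = 30000, rng = Math.random) {
  const ceiling = Math.min(capMs, baseMs * Math.pow(2, attempt));
  return Math.floor(rng() * ceiling);
}
```

To use it, replace `Math.pow(2, attempt) * 1000` in the loop with `backoffWithJitter(attempt)`.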

4. Setting Safe Limits for Your n8n Instance

4.1 Determine External API Quotas

A safe limit is roughly 80% of the published quota, leaving headroom for retries and other clients:

| API | Free‑Tier Limit | Recommended n8n Limit |
| --- | --- | --- |
| GitHub | 5,000 req/hr | 70 req/min |
| Stripe | 100 req/s | 80 req/s |
| Twilio | 1 req/s | 0.8 req/s |
| Slack | 20 req/s | 15 req/s |
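The 80% rule of thumb is easy to compute for any provider. A small helper (hypothetical names; the table above rounds more generously, this computes the strict 80% figure):

```javascript
// Convert a provider quota into a safe per-minute limit for n8n.
// `quota` is requests per `windowSeconds`; `headroom` keeps a fraction spare.
function safePerMinuteLimit(quota, windowSeconds, headroom = 0.8) {
  const perSecond = quota / windowSeconds;
  return Math.floor(perSecond * 60 * headroom);
}
```

For example, `safePerMinuteLimit(5000, 3600)` converts GitHub's 5,000 req/hr quota into a per-minute cap.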

5. Proactive Monitoring & Alerting

| Tool | Metric | Alert Threshold |
| --- | --- | --- |
| Grafana + Prometheus | n8n_workflow_executions_total{status="429"} | > 5 in 5 min |
| Datadog | n8n.http_requests.rate_limit_exceeded | > 0 |
| Sentry | Error count "Rate limit exceeded" | Spike > 200% over baseline |

Implementation tip: Add a Webhook node that fires on error status 429 to push a Slack alert.

POST https://hooks.slack.com/services/XXX/YYY/ZZZ
{
  "text": "⚠️ n8n workflow 'Workflow Name' hit rate limit at TIMESTAMP."
}
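In a Code node feeding that webhook request, the payload can be built from the execution context. A sketch, assuming the `$workflow` and `$now` variables n8n exposes in Code nodes; the builder name is mine:

```javascript
// Build the Slack alert body for a 429 event. In an n8n Code node the
// arguments would come from $workflow.name and $now.
function buildRateLimitAlert(workflowName, timestamp) {
  return {
    text: `⚠️ n8n workflow '${workflowName}' hit rate limit at ${timestamp}.`,
  };
}

// Code node usage (n8n provides $workflow and $now):
// return [{ json: buildRateLimitAlert($workflow.name, $now.toISO()) }];
```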

6. Real‑World Troubleshooting Checklist

  • Inspect response headers for Retry-After or X-RateLimit-Reset.
  • Insert a Delay node or configure node-level rate limiting.
  • Enable retries with exponential backoff (max 3).
  • Scale worker concurrency (WORKER_MAX_CONCURRENCY) only after checking DB/Redis capacity.
  • Implement a queue (Redis/BullMQ) for bursty traffic.
  • Set monitoring alerts for 429 spikes.
  • Verify API quota in the provider’s dashboard; adjust n8n limits accordingly.
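The Redis/BullMQ queue item above relies on BullMQ's limiter, which allows at most `max` job starts per `duration` ms window. Independently of any library, that dispatch schedule can be sketched as a pure function (names are mine) to see how a burst spreads out:

```javascript
// Given a burst of `jobs` arriving at t=0, compute the start offset (ms)
// of each job under a limiter allowing `max` starts per `duration` ms --
// the same shape as BullMQ's { limiter: { max, duration } } worker option.
function limiterSchedule(jobs, max, duration) {
  const offsets = [];
  for (let i = 0; i < jobs; i++) {
    offsets.push(Math.floor(i / max) * duration);
  }
  return offsets;
}
```

For example, `limiterSchedule(100, 80, 1000)` starts the first 80 jobs immediately and defers the remaining 20 by one second.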

7. Example: GitHub Issues Sync with Safe Rate Limiting

7.1 Delay Node Example

// Delay 5 seconds between API calls
await new Promise(r => setTimeout(r, 5000));

7.2 HTTP Request Node Example

GET https://api.github.com/repos/OWNER/REPO/issues
Headers:
Authorization: token YOUR_GITHUB_TOKEN
Retry on Fail: true
Retry Strategy: Exponential Backoff, max 3 retries
Rate Limit: 70 req/min
Burst: 5

Result: Workflow respects GitHub’s 5,000 req/hr limit, backs off automatically on 429, and logs a clear error if the limit persists.


8. EEFA (Expert-Level, Edge-Case, Production-Ready) Insights

| Scenario | Why a naive fix fails | Production-grade solution |
| --- | --- | --- |
| Burst of 100 webhook events → Stripe | Fixed 5 s delay still exceeds 100 req/s burst limit | Use idempotency keys + BullMQ limiter (`max: 80, duration: 1000`) |
| Multiple workflows share one API token | Individual node limits don’t aggregate | Centralize calls in a single API Proxy workflow with a global rate-limit queue |
| Low-memory VM → high concurrency | OOM triggers DB timeouts that appear as 429 | Set EXECUTIONS_PROCESS=2, enable swap, monitor memory_usage_bytes |
| Dynamic quota (Twitter v2) changes per endpoint | Static limits cause over- or under-throttling | Pull quota metadata via Twitter’s endpoint, cache in Redis, adjust rateLimit via an Expression node |

All configurations have been tested on n8n v1.30+ (Docker & self-hosted). Adjust version numbers if you run a newer release.
