
Step-by-Step Guide to Solving n8n Batch Request Failures
Who this is for: n8n developers who run bulk operations (e.g., Google Sheets, Salesforce, generic REST) and need a reliable way to handle partial or complete batch failures. We cover this in detail in the n8n API Integration Errors Guide.
Quick Diagnosis
- Turn on **Continue On Fail** on the node that sends the batch.
- Insert a SplitInBatches node before the request and set the batch size to the service‑specific limit (≤ 500 items for most connectors).
- Open the Execution view and inspect the `error` object – it contains `statusCode`, `message`, and a `details` array with the index of each failed item.
- Use a Function node to rebuild a new batch from `error.details` and retry only the problematic records.
Result: the workflow resumes, retries only the bad items, and finishes without a hard stop.
1. Why n8n batch requests fail
If you encounter any n8n conditional branch errors, resolve them before continuing with the setup.
| Failure type | Typical cause | Service‑specific limit* | n8n symptom |
|---|---|---|---|
| Partial failure | One or more items violate validation rules (e.g., duplicate email, missing required field). | Varies (Google Sheets = 500 rows, Salesforce = 200 records) | Execution stops at the node; error.details lists failed indexes. |
| Complete failure | Exceeded rate‑limit, auth token expired, payload size > max bytes. | Varies (AWS = 10 MB, generic REST = 5 MB) | Node returns statusCode = 429/401/413 and no details. |
| Network glitch | Temporary DNS timeout or TLS handshake error. | N/A | Generic “Request failed” error; a retry may succeed. |
*Limits are documented per connector; see the sibling pages Google Sheets batch limits and Salesforce Bulk API limits for exact numbers.
EEFA note
- Rate‑limit bursts can cause a full failure even when individual items are valid. In production, always implement exponential back‑off (e.g., 500 ms → 2 s → 5 s).
- Idempotency: When retrying failed items, ensure the target API supports idempotent writes (e.g., upsert with external ID) to avoid duplicate records.
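The back-off schedule above can be computed with a small helper. This is a sketch: the function name and the base/factor/cap values are illustrative, chosen to reproduce the 500 ms → 2 s → 5 s sequence mentioned above, not n8n defaults.

```javascript
// Sketch: exponential back-off schedule for batch retries.
// baseMs, factor, and capMs are illustrative values, not n8n defaults.
function backoffDelays(maxRetries, baseMs = 500, factor = 4, capMs = 5000) {
  const delays = [];
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    // Grow the delay geometrically, but never exceed the cap
    delays.push(Math.min(baseMs * factor ** attempt, capMs));
  }
  return delays;
}
```

Feeding each delay into a **Wait** node between retries gives the burst protection described above.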
2. Diagnosing the failure in n8n
If you encounter any n8n webhook response errors, resolve them before continuing with the setup.
Step‑by‑step
- Open the Execution view → click the failed node → expand the Error tab.
- Locate the `statusCode` and the optional `details` array.
Example error payload
```json
{
  "statusCode": 400,
  "message": "Batch request partially failed",
  "details": [
    { "index": 3, "error": "Email format invalid" },
    { "index": 7, "error": "Missing required field: phone" }
  ]
}
```
Interpret the payload
| Condition | Meaning |
|---|---|
| `details` present | Partial failure – only some items broke. |
| Only `statusCode` | Complete failure – rate‑limit, auth, payload size, etc. |
TL;DR diagnostic checklist
| Step | Check |
|---|---|
| ✅ | error.details present? → Partial failure. |
| ✅ | statusCode = 429? → Rate‑limit; add retry logic. |
| ✅ | statusCode = 401/403? → Token expired; see *n8n authentication failure error*. |
| ✅ | Payload size > service limit? → Split batches (see next section). |
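The checklist above can be condensed into one classifier. This is a sketch, not n8n API code: the function name is hypothetical, but the field names (`statusCode`, `details`) match the example error payload shown earlier.

```javascript
// Sketch: classify an n8n batch error payload per the diagnostic checklist.
// Field names match the example error payload in this guide.
function classifyBatchError(error) {
  if (Array.isArray(error.details) && error.details.length > 0) {
    return "partial";                 // only some items failed – re-batch error.details
  }
  if (error.statusCode === 429) return "rate-limit";        // back off and retry
  if (error.statusCode === 401 || error.statusCode === 403) return "auth"; // refresh token
  if (error.statusCode === 413) return "payload-too-large"; // split batches
  return "unknown";
}
```

Run this in a Function node right after the failed request, then branch with a **Switch** node on the returned string.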
3. Splitting batches correctly
3.1 Using the **SplitInBatches** node
- Add a SplitInBatches node after the data source (e.g., *Google Sheets → Get All Rows*).
- Set **Batch Size** to the connector’s maximum (e.g., `500` for Google Sheets).
- Connect the node’s output to the batch‑capable request node (HTTP Request, Salesforce Bulk, etc.).
Result – Each downstream request receives at most the allowed number of items, preventing payload‑size errors.
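Conceptually, SplitInBatches does nothing more than chunk the incoming item array. The sketch below reproduces that behavior in plain JavaScript (the function name is hypothetical; 500 is the Google Sheets row limit cited above):

```javascript
// Sketch: what SplitInBatches does conceptually – chunk an item array
// into service-compliant batches.
function splitInBatches(items, batchSize) {
  const batches = [];
  for (let i = 0; i < items.length; i += batchSize) {
    batches.push(items.slice(i, i + batchSize)); // last batch may be smaller
  }
  return batches;
}
```

For 1 200 rows and a limit of 500, this yields batches of 500, 500, and 200 – each one safe to send downstream.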
3.2 Re‑batching only the failed items
After a partial failure, a **Function** node can extract the problematic indexes and rebuild a new batch.
Extract failed indexes

```javascript
// Pull the array of failed indexes from the error object
return $json.error?.details?.map(d => d.index) || [];
```
Map indexes back to original items

```javascript
// Re‑create items for the failed indexes
const failed = $node["Extract Failed Indexes"].json; // array of failed indexes
const allItems = $items("SplitInBatches");           // original batch items
return failed.map(i => allItems[i]);
Feed this output into a second request node (with **Continue On Fail** enabled) to retry only the bad records.
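The two Function nodes above can be verified outside n8n by replacing the `$json`/`$items` helpers with mock data. The sketch below uses the example error payload from section 2 (the item contents are invented for illustration):

```javascript
// Sketch: stand-alone simulation of the two re-batching Function nodes,
// with mock data in place of n8n's $json / $items helpers.
const allItems = [
  { json: { email: "a@example.com" } },
  { json: { email: "b@example.com" } },
  { json: { email: "not-an-email" } },   // this item will fail validation
  { json: { email: "c@example.com" } },
];
const error = { statusCode: 400, details: [{ index: 2, error: "Email format invalid" }] };

// Node 1: extract failed indexes
const failedIndexes = error.details?.map(d => d.index) || [];
// Node 2: map indexes back to the original items
const retryBatch = failedIndexes.map(i => allItems[i]);
```

Only the single bad item ends up in `retryBatch`; the three successful records are never re-sent.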
4. Implementing a robust retry strategy
4.1 Built‑in “Retry on Fail” settings
| Setting | Recommended value for batch ops |
|---|---|
| Maximum Retries | 5 |
| Delay Between Retries | exponential (500 ms → 2 s → 5 s) |
| Continue On Fail | true (so the workflow reaches the retry branch) |
4.2 Custom retry workflow (textual description)
- Start → SplitInBatches → Batch Request.
- On Success, flow to End.
- On Fail, route to an Error Handler that checks the retry count.
- If retries < 5, wait using an exponential back‑off, then re‑invoke the batch request.
- If retries ≥ 5, send a Slack alert (or other notification) and terminate.
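The branching logic above can be expressed as a single decision function. This is a sketch under the assumptions stated in the table (5 max retries, 500 ms → 2 s → 5 s back-off); the function name is hypothetical:

```javascript
// Sketch: the error-handler branch described above – decide whether to
// retry with back-off or give up and alert. Values are illustrative.
function nextAction(retryCount, maxRetries = 5) {
  if (retryCount < maxRetries) {
    // Exponential back-off, capped at 5 s
    const waitMs = Math.min(500 * 4 ** retryCount, 5000);
    return { action: "retry", waitMs };
  }
  return { action: "alert" }; // e.g., Slack notification, then terminate
}
```

In the workflow, `retry` routes to a **Wait** node configured with `waitMs`, while `alert` routes to the notification branch.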
EEFA note – Never use unlimited retries – this can hammer the downstream API and trigger IP bans. Log each retry (add a timestamp with a **Set** node) so you can audit how many attempts were needed per batch.
5. Real‑world example: Updating 10 000 Salesforce contacts via Bulk API
Below are three focused snippets that together compose the full workflow.
5.1 Retrieve records

```json
{
  "name": "Get Records",
  "type": "n8n-nodes-base.salesforce",
  "operation": "search",
  "parameters": { "query": "SELECT Id, Email FROM Contact LIMIT 10000" }
}
```
5.2 Split into compliant batches

```json
{
  "name": "Split",
  "type": "n8n-nodes-base.splitInBatches",
  "batchSize": 200
}
```
5.3 Bulk update with retry support

```json
{
  "name": "Bulk Update",
  "type": "n8n-nodes-base.salesforce",
  "operation": "bulkUpdate",
  "continueOnFail": true,
  "parameters": { "object": "Contact", "updateKey": "Id" }
}
```
5.4 Handle partial failures

```javascript
// Re‑batch only failed items from the previous bulk update
const failed = $json.error?.details?.map(d => d.index) || [];
return failed.map(i => $items('Split')[i]);
```
Key take‑aways
- Batch size = 200 respects Salesforce Bulk API limits.
- Continue On Fail prevents the entire workflow from stopping on a single bad record.
- The Function node rebuilds a new batch from `error.details`, allowing a focused retry without duplicating successful records.
6. Common pitfalls & how to avoid them
| Pitfall | Symptom | Fix |
|---|---|---|
| Batch size > service limit | Immediate 413 “Payload Too Large”. | Use SplitInBatches with the correct limit. |
| Missing `continueOnFail` | Workflow halts on first bad record. | Enable **Continue On Fail** for the batch node. |
| Retry loop without back‑off | API returns 429 repeatedly → IP ban. | Insert a **Wait** node with exponential delay. |
| Re‑sending the whole batch | Duplicate records, wasted quota. | Re‑batch only the indexes listed in error.details. |
| Assuming all errors are recoverable | Some errors are permanent (e.g., validation). | After 3 retries, move failed items to a “Dead‑Letter” log (e.g., Google Sheets). |
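The last pitfall – separating permanent failures from transient ones – can be handled with a small partition step before the dead-letter log. This is a sketch: the function name is hypothetical, and the `isPermanent` pattern must be adapted to the error strings your target API actually returns.

```javascript
// Sketch: after the retry budget is exhausted, split failed details into
// retryable vs permanent ("dead-letter") buckets.
// The isPermanent check is an assumption – adapt it to your API's error strings.
function partitionFailures(details) {
  const isPermanent = d => /invalid|missing required/i.test(d.error);
  return {
    deadLetter: details.filter(isPermanent),        // log these, do not retry
    retryable:  details.filter(d => !isPermanent(d)), // transient – safe to retry
  };
}
```

Route `deadLetter` to a Google Sheets append (or similar log) and feed only `retryable` back into the retry branch.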
7. EEFA – Production‑grade checklist
- Validate batch size against the connector’s documentation (see sibling pages).
- Enable idempotent upserts where possible (external ID, `INSERT_OR_UPDATE`).
- Log every batch attempt (batch ID, size, success count, error count).
- Monitor rate‑limit headers (`X‑RateLimit‑Remaining`) and trigger a back‑off when remaining calls drop below 10.
- Alert on persistent failures (e.g., > 3 retries) via Slack, email, or PagerDuty.
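The rate-limit check from the list above fits in one guard function. A sketch, assuming the common `X-RateLimit-Remaining` header (the exact header name is service-specific – verify it against your API's documentation):

```javascript
// Sketch: back-off trigger from rate-limit headers, per the checklist above.
// The X-RateLimit-Remaining header name is common but service-specific.
function shouldBackOff(headers, threshold = 10) {
  // Header keys are assumed lower-cased, as most HTTP clients normalize them
  const remaining = Number(headers["x-ratelimit-remaining"]);
  // Missing or non-numeric header → no signal, don't back off
  return Number.isFinite(remaining) && remaining < threshold;
}
```

Call this in a Function node on the response of each batch request and route to a **Wait** node when it returns `true`.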
Conclusion
Batch request failures in n8n are almost always traceable to three root causes: exceeding service limits, hitting rate limits, or encountering item‑level validation errors. By splitting batches to respect connector limits, enabling Continue On Fail, and re‑batching only the failed items with a concise retry loop, you can turn a hard stop into a resilient, production‑grade workflow. Follow the EEFA checklist to monitor limits, log attempts, and alert on persistent issues, and your bulk integrations will run reliably at scale.



