
Step‑by‑Step Guide to Solving the n8n Split In Batches Node “Limit Exceeded” Error
Who this is for: n8n users who hit the “Limit Exceeded” error on the Split In Batches node, whether on self‑hosted instances or n8n.cloud. We cover this in detail in the n8n Node Specific Errors Guide.
Quick Diagnosis
The error occurs when the node tries to create more batches than the workflow’s Maximum executions (default 10 000).
Fix in 3 steps
- Adjust the Batch Size – ensure ceil(items.length / batchSize) ≤ 10 000; for large inputs this means raising the batch size, so fewer (larger) batches are created.
- Enable “Continue On Fail” *or* offload each batch to a child workflow with an Execute Workflow node.
- Raise the global execution limit (self‑hosted only) after confirming the host can handle the extra load.
Run the workflow again – the error should disappear.
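The failing condition can be sketched with the example numbers used later in this guide (variable names are illustrative, not n8n API):

```javascript
// Sketch: the node errors out when the batch count would exceed the limit.
const itemCount = 1200000;   // size of the input array
const batchSize = 10;        // too small for an input this large
const maxExecutions = 10000; // default workflow execution limit

const batches = Math.ceil(itemCount / batchSize);
console.log(batches);                 // 120000
console.log(batches > maxExecutions); // true → "Limit Exceeded"
```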
1. What Triggers the “Limit Exceeded” Error?
| Trigger | Why it Happens | Typical Symptom |
|---|---|---|
| Batch size too small vs. input array | ceil(items.length / batchSize) exceeds the workflow’s Maximum executions (default 10 000). | Error: Limit exceeded – number of executions would exceed the configured limit (10000) |
| Very large input array (≥ 1 M items) | Even a moderate batch size can still generate > 10 000 batches. | Same error; logs show “total items: 1 200 000”. |
| Split In Batches inside a loop | Each loop iteration adds its own batch count, quickly multiplying the total. | “Recursive execution limit reached” after a few iterations. |
| Self‑hosted instance with a lowered global limit | Admin set EXECUTIONS_MAX (e.g., 5000) in .env. | Error message mentions the custom limit value. |
EEFA note: Raising the global limit without provisioning more CPU/RAM can cause out‑of‑memory crashes. Adjust limits only on servers you control and after load‑testing.
2. Step‑by‑Step Troubleshooting
2.1. Inspect the Input Size
Add a Set node before the Split In Batches node to count items.
{
  "parameters": {
    "values": {
      "number": [
        {
          "name": "itemCount",
          "value": "={{ $json[\"items\"]?.length || 0 }}"
        }
      ]
    }
  },
  "name": "Count Items",
  "type": "n8n-nodes-base.set"
}
Run the workflow and note the itemCount value in the execution log.
2.2. Adjust the Batch Size
The node creates ceil(itemCount / batchSize) batches, so a larger batch size means fewer batches. Calculate the minimum safe batch size:
minBatchSize = ceil( itemCount / maxExecutions )
Example – itemCount = 1 200 000, default maxExecutions = 10 000
requiredBatches = ceil(1 200 000 / batchSize) ≤ 10 000 ⇒ batchSize ≥ ceil(1 200 000 / 10 000) = 120
Set Batch Size to 120 (or higher) in the Split In Batches node and re‑run.
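As a sketch, the same calculation in JavaScript (the helper name safeBatchSize is ours, not part of the n8n API):

```javascript
// Sketch: smallest batch size that keeps ceil(itemCount / batchSize)
// at or below the execution limit. Illustrative helper, not n8n API.
function safeBatchSize(itemCount, maxExecutions = 10000) {
  return Math.max(1, Math.ceil(itemCount / maxExecutions));
}

console.log(safeBatchSize(1200000)); // 120
// The resulting batch count stays within the limit:
console.log(Math.ceil(1200000 / safeBatchSize(1200000))); // 10000
```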
2.3. Use “Continue On Fail” or a Child Workflow
When you can’t raise the batch size enough (e.g., the downstream API accepts ≤ 500 items per request):
- Enable “Continue On Fail” on the Split In Batches node – the workflow continues even if a batch fails.
- Add an Execute Workflow node to process each batch in a separate child workflow.
{
  "parameters": {
    "workflowId": "123", // ID of the child workflow
    "runOncePerItem": true,
    "batchSize": 1
  },
  "name": "Process Batch",
  "type": "n8n-nodes-base.executeWorkflow"
}
The child workflow receives a single batch, performs the heavy work, and ends, effectively resetting the execution counter for each batch.
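If you prepare the batches yourself in a Code node before the Execute Workflow node, the splitting boils down to a simple chunking function – a sketch with illustrative names, not a built‑in n8n helper:

```javascript
// Sketch: split an array into fixed-size chunks, one chunk per child-workflow run.
function chunkItems(items, batchSize) {
  const chunks = [];
  for (let i = 0; i < items.length; i += batchSize) {
    chunks.push(items.slice(i, i + batchSize));
  }
  return chunks;
}

console.log(chunkItems([1, 2, 3, 4, 5], 2)); // [ [1,2], [3,4], [5] ]
// In an n8n Code node, return one item per chunk, e.g.:
// return chunkItems(allRows, 120).map(batch => ({ json: { batch } }));
```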
2.4. Adjust the Global Execution Limit (Self‑Hosted Only)
If you control the server and have verified sufficient resources:
- Edit the .env (or Docker env) file.
- Set a higher limit, e.g.:
EXECUTIONS_MAX=20000
- Restart the n8n container/service.
EEFA warning: Increasing EXECUTIONS_MAX does not increase memory or CPU. Monitor the node process (RSS) – spikes > 2 GB indicate you’re approaching the host limit.
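On a self‑hosted instance you can spot‑check RSS with Node’s built‑in process.memoryUsage() (run it inside the container; availability inside a Code node depends on your instance’s sandbox settings):

```javascript
// Report the current Node.js process memory usage in MB.
// process.memoryUsage() is a standard Node.js API.
const toMB = (bytes) => (bytes / 1024 / 1024).toFixed(1);
const { rss, heapUsed } = process.memoryUsage();
console.log(`RSS: ${toMB(rss)} MB, heap used: ${toMB(heapUsed)} MB`);
```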
3. Example Workflow – From Large CSV to Batched API Calls
Below is a minimal, production‑ready workflow that reads a CSV, splits it into safe batches, and calls a rate‑limited API.
3.1. Read the CSV file
{
  "parameters": {
    "filePath": "/data/large-dataset.csv",
    "options": {}
  },
  "name": "Read CSV",
  "type": "n8n-nodes-base.readBinaryFile"
}
3.2. Parse the CSV into JSON
{
  "parameters": {
    "delimiter": ",",
    "headerRow": true
  },
  "name": "Parse CSV",
  "type": "n8n-nodes-base.csvParse"
}
3.3. Split into safe batches (120 items each)
{
  "parameters": {
    "batchSize": 120 // calculated minimum safe size
  },
  "name": "Split In Batches",
  "type": "n8n-nodes-base.splitInBatches"
}
3.4. Send each batch to the API
{
  "parameters": {
    "url": "https://api.example.com/ingest",
    "method": "POST",
    "jsonParameters": true,
    "options": {
      "bodyContent": "={{ $json[\"batch\"] }}"
    }
  },
  "name": "HTTP Request",
  "type": "n8n-nodes-base.httpRequest",
  "retryOnFail": true,
  "maxTries": 3
}
3.5. Connections (concise view)
{
"Read CSV": { "main": [[{ "node": "Parse CSV", "type": "main", "index": 0 }]] },
"Parse CSV": { "main": [[{ "node": "Split In Batches", "type": "main", "index": 0 }]] },
"Split In Batches": { "main": [[{ "node": "HTTP Request", "type": "main", "index": 0 }]] }
}
Key takeaways
- Batch Size = 120 respects the default 10 000‑execution limit (1 200 000 / 120 = exactly 10 000 batches; a larger batch size leaves more headroom).
- Retry on Fail prevents a single bad batch from aborting the whole run.
- The workflow stays within the execution ceiling while processing 1 200 000 rows.
4. Checklist – Prevent “Limit Exceeded” Before It Happens
- [ ] Count items before splitting (Set node).
- [ ] Calculate the minimum safe batch size: ceil(itemCount / maxExecutions).
- [ ] Set Batch Size to the calculated safe value or higher.
- [ ] Enable “Continue On Fail” if occasional batch errors are acceptable.
- [ ] Consider a child workflow for massive datasets (> 5 M items).
- [ ] Validate server limits (EXECUTIONS_MAX) before raising them.
- [ ] Monitor memory during the first few runs (n8n UI → Execution → Resource usage).
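The first three checklist items can be folded into one pre‑flight check – a sketch with illustrative names, not n8n API:

```javascript
// Sketch: verify a planned batch size stays within the execution limit
// before running the split.
function preflight(itemCount, batchSize, maxExecutions = 10000) {
  const batches = Math.ceil(itemCount / batchSize);
  return { batches, ok: batches <= maxExecutions };
}

console.log(preflight(1200000, 120)); // { batches: 10000, ok: true }
console.log(preflight(1200000, 100)); // { batches: 12000, ok: false }
```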
5. Frequently Asked Questions
| Question | Answer |
|---|---|
| Can I set Batch Size to “0” to process all items at once? | No. A batch size of 0 is invalid – the node requires at least one item per batch. Instead, pick a size large enough that ceil(items / batchSize) stays within the execution limit. |
| Why does the error sometimes mention “Recursive execution limit reached”? | The node is inside a Loop (e.g., an “Iterate” node). Each loop iteration adds its own batch count. Break the loop into a separate workflow or increase the Maximum Loop Executions setting. |
| Is there a way to auto‑scale the limit on n8n.cloud? | n8n.cloud caps executions per workflow at 10 000 and does not expose a user‑adjustable setting. Use child workflows or reduce batch size. |
| My workflow runs fine locally but fails after deployment. | Production environments often have a lower EXECUTIONS_MAX set in the Docker env. Verify the limit via **Settings → Workflow Settings** or ask your admin. |
6. EEFA (Expert‑Level Fixes & Alerts)
- Memory‑Leak Guard – For > 2 M items, enable “Garbage Collection” in the node settings (available in n8n > 0.210). This forces V8 to free memory after each batch.
- Rate‑Limit Coordination – Pair Split In Batches with a Throttle node (maxRequestsPerSecond) when the downstream API also enforces request‑per‑second limits.
- Dynamic Batch Sizing – Compute batch size on‑the‑fly based on the current item count (note the ceil – a floor here could still overshoot the execution limit):
const maxExec = 10000;
const items = $json["items"]?.length || 0;
const safeSize = Math.max(1, Math.ceil(items / maxExec));
return [{ json: { batchSize: safeSize } }];
Feed {{ $json.batchSize }} into the Split In Batches node via an Expression.
Conclusion
The “Limit Exceeded” error is a direct result of the Split In Batches node generating more executions than the workflow is allowed to run. By counting items, calculating a safe batch size, and optionally offloading work to child workflows or raising the global limit (self‑hosted only), you keep the execution count within bounds while still processing massive datasets efficiently. Apply the checklist before each large split, monitor memory, and you’ll avoid crashes in production environments.



