n8n Merge Node Conflict Error

A step‑by‑step guide to resolving the n8n Merge node conflict error

 


 

Who this is for: n8n developers who use the Merge node to combine two or more data streams and need a reliable way to handle duplicate keys. We cover this in detail in the n8n Node Specific Errors Guide.


Quick Diagnosis

The “Merge node conflict error” appears when two input items share the same key field while the node is in “Merge By” mode and the Conflict handling option is set to “Throw error on conflict”.

Quick fix

  1. Open the Merge node → Mode = Merge By.
  2. Choose a Key Field that is guaranteed to be unique (e.g., id, uuid).
  3. If uniqueness cannot be guaranteed, change Conflict handling to “Overwrite” or “Skip”.
  4. (Optional) Add a Set node before the Merge to generate a unique composite key: {{$json["id"]}}_{{$index}}.

Re‑run the workflow; the error should no longer appear.
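Quick‑fix step 4 can be sketched as a Set node definition. The node name and the output field name (uniqueKey) are illustrative; adapt them to your workflow:

```json
{
  "name": "Build Unique Key",
  "type": "n8n-nodes-base.set",
  "parameters": {
    "values": {
      "string": [
        {
          "name": "uniqueKey",
          "value": "={{$json[\"id\"]}}_{{$index}}"
        }
      ]
    }
  }
}
```

Point the Merge node's Key Field at uniqueKey afterwards.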


1️⃣ Why the Merge node throws a conflict error


 

| Situation | Node behaviour | Result |
|---|---|---|
| Two items share the same Key Field while Mode = Merge By | The node tries to merge the items into a single record | Default Conflict handling = Throw error → workflow stops |
| Mode = Append with duplicate items | Items are simply concatenated | No conflict check; duplicates remain |
| Mode = Keep Key with non‑unique keys | First occurrence is kept, later ones ignored | No error, but data may be silently dropped |

Note: The error is intentional – it protects you from unintentionally overwriting data.


2️⃣ Merge node modes & conflict handling

| Mode | What it does |
|---|---|
| Append | Concatenates the two input arrays (no conflict check). |
| Merge By | Merges items that share the same Key Field. |
| Keep Key | Keeps the first occurrence of each key, discarding later ones. |

| Conflict handling (Merge By only) | Behaviour |
|---|---|
| Throw error (default) | Stops execution on duplicate keys. |
| Overwrite | Newer item replaces the older one. |
| Skip | Keeps the first item, discards duplicates. |

Best practice: Use Overwrite only when you have a deterministic rule (e.g., keep the record with the latest timestamp). Otherwise prefer Skip and log the conflict.
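A "latest timestamp wins" rule can be sketched in a Code/Function node. The field names (uid, updatedAt, plan) and the sample records are illustrative; in a real node, `items` comes from the node input:

```javascript
// Deterministic "latest timestamp wins" dedupe, as it might run in an
// n8n Code node. Sample items stand in for the node's real input.
const items = [
  { json: { uid: "a1", updatedAt: "2024-01-01T00:00:00Z", plan: "free" } },
  { json: { uid: "a1", updatedAt: "2024-03-01T00:00:00Z", plan: "pro" } },
  { json: { uid: "b2", updatedAt: "2024-02-01T00:00:00Z", plan: "free" } },
];

const byKey = new Map();
for (const item of items) {
  const existing = byKey.get(item.json.uid);
  // Keep whichever record carries the newer timestamp.
  if (!existing || new Date(item.json.updatedAt) > new Date(existing.json.updatedAt)) {
    byKey.set(item.json.uid, item);
  }
}
const merged = [...byKey.values()];
console.log(merged.map((i) => `${i.json.uid}:${i.json.plan}`).join(","));
// → a1:pro,b2:free
```

Because the rule is deterministic, re-running the workflow on the same data always produces the same merged result.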


3️⃣ Pre‑run checklist (micro‑summary)

Before you execute the workflow, verify the following items to avoid conflicts.

  • Key field is selected and truly unique across all inputs.
  • Conflict handling matches your business rule (Overwrite / Skip).
  • No null/undefined values in the key field (add a fallback if needed).
  • Input batch size respects server limits – use SplitInBatches for large sets.
  • Logging: capture skipped items for audit.
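The "no null keys" item on the checklist can be enforced with a small Code-node sketch. The fallback chain (id → email → row index) is an assumption; pick fields that suit your data:

```javascript
// Fallback key generation for items whose "id" may be null/undefined.
// Sample data stands in for real node input; field names are illustrative.
const items = [
  { json: { id: "u-1", email: "a@example.com" } },
  { json: { id: null, email: "b@example.com" } },
];

const out = items.map((item, index) => ({
  json: {
    ...item.json,
    // Prefer the real id; otherwise fall back to email, then to the item index.
    mergeKey: item.json.id ?? item.json.email ?? `row_${index}`,
  },
}));
console.log(out.map((i) => i.json.mergeKey).join(","));
// → u-1,b@example.com
```

Merge on mergeKey instead of the raw id so null values can never collide.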

4️⃣ Step‑by‑step resolution guide

4.1 Identify the conflicting key

Add a Set node that copies the candidate key to a visible field (conflictKey).

```json
{
  "name": "Expose Key",
  "type": "n8n-nodes-base.set",
  "parameters": {
    "values": {
      "string": [
        {
          "name": "conflictKey",
          "value": "={{$json[\"id\"]}}"
        }
      ]
    }
  }
}
```

Run the workflow, open Execution → View Execution, and filter on conflictKey. Duplicate values are the culprits.
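If the execution is too large to filter by eye, the duplicate hunt can also be done in a Code-node sketch. The sample items are illustrative; in a real node you would read the combined input instead:

```javascript
// Find duplicate conflictKey values across the combined input.
// Sample items stand in for the real execution data.
const items = [
  { json: { conflictKey: "42" } },
  { json: { conflictKey: "43" } },
  { json: { conflictKey: "42" } },
];

const counts = {};
for (const item of items) {
  const key = item.json.conflictKey;
  counts[key] = (counts[key] ?? 0) + 1;
}
// Keep only keys that appear more than once – these trigger the conflict error.
const duplicates = Object.entries(counts)
  .filter(([, n]) => n > 1)
  .map(([key]) => key);
console.log(duplicates);
// → [ '42' ]
```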

4.2 Ensure a truly unique key

Option A – Use an existing unique identifier (e.g., uuid).

Option B – Create a composite key with a Set node:

```
{{ $json["email"] + "_" + $json["createdAt"] }}
```

Reference this new field (compositeKey) in the Merge node’s Key Field.
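The same composite key can be built in a Code node instead of an expression. The field names (email, createdAt) follow the example above; adjust them to your data:

```javascript
// Code-node equivalent of the composite-key expression.
// Sample items stand in for the node's real input.
const items = [
  { json: { email: "a@example.com", createdAt: "2024-01-01" } },
  { json: { email: "a@example.com", createdAt: "2024-01-02" } },
];

const out = items.map((item) => ({
  json: {
    ...item.json,
    compositeKey: `${item.json.email}_${item.json.createdAt}`,
  },
}));
console.log(out[0].json.compositeKey);
// → a@example.com_2024-01-01
```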

4.3 Adjust conflict handling

Open the Merge node and select the appropriate option:

```json
{
  "mode": "mergeBy",
  "keyField": "uid",
  "conflictHandling": "skip"
}
```

(Use "overwrite" instead of "skip" to replace duplicates.)
  • Skip – keeps the first occurrence, discards later duplicates.
  • Overwrite – newer item replaces the older one (use with a deterministic rule).
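The difference between the two options can be illustrated with a small sketch that mimics the Merge node's behaviour (this is an illustration, not n8n source code):

```javascript
// "skip" vs "overwrite" conflict handling on a single duplicated key.
const stream = [
  { json: { uid: "a1", status: "old" } },
  { json: { uid: "a1", status: "new" } },
];

function mergeBy(items, keyField, conflictHandling) {
  const seen = new Map();
  for (const item of items) {
    const key = item.json[keyField];
    if (!seen.has(key)) {
      seen.set(key, item);   // first occurrence is always kept
    } else if (conflictHandling === "overwrite") {
      seen.set(key, item);   // newer item replaces the older one
    }                        // "skip": later duplicates are dropped
  }
  return [...seen.values()];
}

console.log(mergeBy(stream, "uid", "skip")[0].json.status);      // → old
console.log(mergeBy(stream, "uid", "overwrite")[0].json.status); // → new
```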

4.4 (Optional) Split large batches before merging

When processing thousands of records, split the inputs to stay within memory limits:

```json
{
  "name": "Split Large Batch",
  "type": "n8n-nodes-base.splitInBatches",
  "parameters": { "batchSize": 500 }
}
```

Merge each batch individually, then concatenate the results with a Merge (Append) node.
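Conceptually, the batching step just chunks the input array, roughly like this sketch (the batch size of 500 mirrors the config above and is an assumption; tune it to your server limits):

```javascript
// Chunk a large input into fixed-size batches, mirroring what
// SplitInBatches does with its batchSize parameter.
function toBatches(items, batchSize) {
  const batches = [];
  for (let i = 0; i < items.length; i += batchSize) {
    batches.push(items.slice(i, i + batchSize));
  }
  return batches;
}

const items = Array.from({ length: 1200 }, (_, i) => ({ json: { id: i } }));
const batches = toBatches(items, 500);
console.log(batches.map((b) => b.length));
// → [ 500, 500, 200 ]
```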

4.5 Minimal working example (broken into bite‑size snippets)

4.5.1 Fetch two data streams

```json
{
  "name": "HTTP Get Users",
  "type": "n8n-nodes-base.httpRequest",
  "parameters": { "url": "https://api.example.com/users" }
}
```

```json
{
  "name": "HTTP Get Updates",
  "type": "n8n-nodes-base.httpRequest",
  "parameters": { "url": "https://api.example.com/updates" }
}
```

4.5.2 Normalise the key field for each stream

```json
{
  "name": "Set UID Users",
  "type": "n8n-nodes-base.set",
  "parameters": {
    "values": {
      "string": [{ "name": "uid", "value": "={{$json[\"id\"]}}" }]
    }
  }
}
```

```json
{
  "name": "Set UID Updates",
  "type": "n8n-nodes-base.set",
  "parameters": {
    "values": {
      "string": [{ "name": "uid", "value": "={{$json[\"userId\"]}}" }]
    }
  }
}
```

4.5.3 Merge on the guaranteed‑unique uid

```json
{
  "name": "Merge Users & Updates",
  "type": "n8n-nodes-base.merge",
  "parameters": {
    "mode": "mergeBy",
    "keyField": "uid",
    "conflictHandling": "overwrite"
  }
}
```

4.5.4 Connect the nodes (simplified)

```json
{
  "connections": {
    "HTTP Get Users": { "main": [[{ "node": "Set UID Users", "type": "main", "index": 0 }]] },
    "HTTP Get Updates": { "main": [[{ "node": "Set UID Updates", "type": "main", "index": 0 }]] },
    "Set UID Users": { "main": [[{ "node": "Merge Users & Updates", "type": "main", "index": 0 }]] },
    "Set UID Updates": { "main": [[{ "node": "Merge Users & Updates", "type": "main", "index": 1 }]] }
  }
}
```

Running this workflow merges the two streams on a guaranteed‑unique uid and overwrites any accidental duplicates.
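The whole example can be simulated in plain JavaScript to see what the merged output looks like. The sample records stand in for the two HTTP responses; the logic mimics "Merge By uid with overwrite":

```javascript
// End-to-end simulation of the example workflow: normalise each stream's
// key to "uid", then merge with overwrite (later stream wins on duplicates).
const users   = [{ id: "u1", name: "Ada" }, { id: "u2", name: "Lin" }];
const updates = [{ userId: "u1", name: "Ada L." }];

const normalised = [
  ...users.map((u) => ({ ...u, uid: u.id })),        // Set UID Users
  ...updates.map((u) => ({ ...u, uid: u.userId })),  // Set UID Updates
];

const merged = new Map();
for (const record of normalised) merged.set(record.uid, record);

console.log([...merged.values()].map((r) => `${r.uid}:${r.name}`).join(","));
// → u1:Ada L.,u2:Lin
```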


5️⃣ Common conflict scenarios (quick reference)

| # | Scenario | Typical cause | Recommended fix |
|---|---|---|---|
| 1 | Duplicate id from two API calls | Both APIs share the same primary‑key space | Create a composite key (apiName_id). |
| 2 | Missing key field in one input | Field name typo or null value | Add a Set node with a fallback: {{ $json["id"] \|\| $json["fallbackId"] }}. |
| 3 | Large batch + duplicate keys | Batches over ~10 k items cause memory pressure | Use SplitInBatches + Skip handling; log duplicates. |
| 4 | Function node rewrites keys | Custom mapping overwrites original keys | Preserve the original key in a new field (originalId) and merge on that. |

6️⃣ Production‑grade recommendations

  1. Idempotency – Persist the generated unique key (DB, KV store) so repeated runs merge deterministically.
  2. Observability – Emit conflict metrics (total items, duplicates, overwritten) to a monitoring system (Grafana, Datadog).
  3. Rollback safety – When using Overwrite, snapshot the pre‑merge dataset to a temporary variable or external storage.
  4. Security – If the key contains PII (e.g., email), hash it before using it as a merge key; prefer a strong hash such as SHA‑256 over MD5.

7️⃣ Frequently asked questions

Q: Can I merge more than two input streams?

A: Yes. Connect additional inputs to the same Merge node; they are processed sequentially with the same key/conflict settings.

Q: Does the Merge node preserve the order of items?

A: In Append mode, order is preserved. In Merge By, the order follows the first appearance of each unique key.

Q: How do I debug a conflict without stopping the whole workflow?

A: Set Conflict handling to Skip, then add a Function node after the Merge to capture skipped items:

```javascript
// Assumes the Merge node's output exposes the skipped items.
return [{ json: { skipped: $node["Merge"].json.skipped } }];
```

Log or route this output for audit.


Conclusion

The Merge node’s conflict error is a safety net that fires when duplicate keys appear in Merge By mode. By ensuring a truly unique key (or a deterministic composite key), selecting the appropriate conflict‑handling strategy, and optionally batching large inputs, you can eliminate the error and keep your n8n pipelines reliable at scale. Apply the checklist, monitor conflicts in production, and you’ll have a robust, idempotent merge process that works across self‑hosted, Docker, and n8n.cloud deployments.
