n8n Duplicate Record Errors

A step‑by‑step guide to diagnosing and resolving duplicate‑record errors in n8n workflows.
Who this is for: n8n developers and automation engineers who need reliable handling of duplicate‑record (HTTP 409) responses from any API. We cover this in detail in the n8n API Integration Errors Guide.


Quick Diagnosis

When an n8n workflow receives “Duplicate record” (HTTP 409) from an API, enable Continue On Fail on the creating node, then route the error to a small Function or Set node that flags the duplicate and decides whether to skip, upsert, or merge.

Quick fix (one‑off duplicate) – shown here as an illustrative JSON fragment; in the editor this corresponds to enabling the node's Continue On Fail setting:

{
  "error": {
    "continueOnFail": true,
    "errorMessage": "Duplicate record – ignored"
  }
}

Result: the workflow continues without aborting, and you can log the skipped ID for later audit.


1️⃣ Why the Duplicate Record Error Happens in n8n

| Root Cause | Typical HTTP Code | Example API Message |
|---|---|---|
| Unique‑key violation (e.g., email, SKU) | 409 Conflict | “Duplicate record: email already exists” |
| Idempotency token reuse | 400 Bad Request | “Duplicate request – token already used” |
| “Create if not exists” disabled in connector | 422 Unprocessable Entity | “Record already exists” |
| Race condition (parallel nodes creating the same entity) | 409 Conflict | “Duplicate record” |

Key point: n8n forwards the upstream HTTP response; the duplicate originates from the downstream service (Google Sheets, Salesforce, etc.). Identifying the exact status code and payload is the first step to a robust fix.
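As a standalone sketch, the status‑code/message pairs in the table above can be mapped to a duplicate category with a tiny classifier. The function name and message substrings are illustrative – real connectors word their error payloads differently, so inspect the actual response before relying on string matching:

```javascript
// Classify an upstream API error as a duplicate category (or null).
// The statusCode/message combinations mirror the table above and are
// illustrative only – adapt the matching to your provider's payloads.
function classifyDuplicate(statusCode, message) {
  const msg = (message || '').toLowerCase();
  if (statusCode === 409) return 'unique-key-violation-or-race';
  if (statusCode === 400 && msg.includes('token already used')) {
    return 'idempotency-token-reuse';
  }
  if (statusCode === 422 && msg.includes('already exists')) {
    return 'record-already-exists';
  }
  return null; // not a duplicate error
}

console.log(classifyDuplicate(409, 'Duplicate record: email already exists'));
// 'unique-key-violation-or-race'
```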


2️⃣ Detecting the Duplicate Error in a Workflow

Micro‑summary – Capture the error payload, flag duplicates, and use the flag for conditional routing.

  1. Enable “Continue On Fail” on the node that creates the record (e.g., Google Sheets → Append, Salesforce → Create).
  2. Add an “Error Trigger” node to catch the error payload.
  3. Inspect the response in a Function node.

Function node – flag 409 errors, pass everything else through:

// Check if the error is a 409 duplicate
if (items[0].json.error?.response?.statusCode === 409) {
  return [{ json: { isDuplicate: true, original: items[0].json } }];
}

// Not a duplicate – pass the items through unchanged
return items;

The isDuplicate flag can now drive an IF node that separates the normal and duplicate paths.
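Outside n8n, the routing the IF node performs can be expressed as a plain function – a sketch, assuming items carry the `isDuplicate` flag set above:

```javascript
// Standalone sketch of the IF node's job: split incoming items into a
// duplicate branch and a normal branch based on the isDuplicate flag.
function routeByDuplicate(items) {
  const duplicates = items.filter((item) => item.json.isDuplicate === true);
  const normal = items.filter((item) => item.json.isDuplicate !== true);
  return { duplicates, normal };
}

const routed = routeByDuplicate([
  { json: { isDuplicate: true, original: { id: 1 } } },
  { json: { id: 2 } },
]);
console.log(routed.duplicates.length, routed.normal.length); // 1 1
```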


3️⃣ Strategies to Handle Duplicates

| Strategy | When to Use | n8n Implementation |
|---|---|---|
| Skip & Log | One‑off duplicates, audit‑only | `Continue On Fail` + Set node to write the ID to a log (Google Sheet, DB, etc.) |
| Upsert (Update if Exists) | Desired state must be idempotent | Use an Update node after an IF that checks `isDuplicate` |
| Merge / Patch | Partial data updates only | Function node to merge `newData` with `existingRecord`, then PATCH |
| Queue for Review | Business logic requires manual validation | Send Email or Create Jira Ticket with duplicate details |
| Retry with Idempotency Key | Duplicate caused by a transient race | Generate a UUID per record, store it, and reuse it on retry |

Example: Upsert with Salesforce

Micro‑summary – Detect a 409, extract the existing record ID, then update it.

Function node – extract the existing record ID on a duplicate, otherwise return nothing:

if (items[0].json.error?.response?.statusCode === 409) {
  // Salesforce returns the existing ID in the error body
  // (the exact path can vary by API version – inspect the payload first)
  const existingId = items[0].json.error.response.body[0].attributes.id;
  return [{ json: { recordId: existingId } }];
}

// No duplicate – nothing to update
return [];

Pass recordId to a Salesforce Update node using the expression {{ $json.recordId }}.
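For the Merge / Patch strategy from the table, the merge step can be sketched as a plain function – a minimal sketch in which incoming values win only when they are actually present, so a PATCH never wipes existing fields with `null`/`undefined`:

```javascript
// Merge incoming fields into the fetched record: new values overwrite
// existing ones, but null/undefined incoming values are ignored so the
// PATCH never erases data that the caller did not intend to change.
function mergeRecords(existingRecord, newData) {
  const merged = { ...existingRecord };
  for (const [key, value] of Object.entries(newData)) {
    if (value !== undefined && value !== null) {
      merged[key] = value;
    }
  }
  return merged;
}

console.log(mergeRecords(
  { id: 1, email: 'a@x.com', phone: '123' },
  { phone: '456', name: 'Ada' }
));
// { id: 1, email: 'a@x.com', phone: '456', name: 'Ada' }
```

The merged object then becomes the body of the PATCH request in the upsert path.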


4️⃣ Step‑by‑Step Workflow Blueprint (Generic API)

Micro‑summary – A reusable pattern that works for any HTTP API returning a duplicate error.

  1. Trigger – e.g., Webhook receiving a new record.
  2. Set – map incoming fields to the API payload.
  3. HTTP Request – POST /resources with Continue On Fail.
  4. Error Branch – connect the Error output to a Function node that flags statusCode === 409.
  5. IF – condition {{ $json.isDuplicate === true }}. True: run the Upsert path; False: end.
  6. Upsert Path –
    1. HTTP Request (GET) – fetch the existing resource using the unique key.
    2. Function – merge incoming data with fetched data (4‑5 lines).
    3. HTTP Request (PATCH) – update the resource.
  7. Log – write the action (skip / upsert) to a log table for audit.

EEFA Note – In production, always rate‑limit the GET/PATCH calls and add exponential back‑off to avoid cascading failures when the upstream API throttles.
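The back‑off the note recommends can be sketched like this. The base delay, cap, and attempt count are illustrative values, not n8n defaults:

```javascript
// Exponential back-off schedule: 500 ms, 1 s, 2 s, 4 s, ... capped at maxDelayMs.
// Base delay, cap, and attempt count are illustrative – tune per API.
function backoffDelays(attempts, baseMs = 500, maxDelayMs = 8000) {
  return Array.from({ length: attempts }, (_, i) =>
    Math.min(baseMs * 2 ** i, maxDelayMs));
}

// Retry a call (e.g., the GET/PATCH pair) with the schedule above.
// For simplicity this retries on any error – in practice, retry only
// transient failures (429, 5xx), not duplicates.
async function withRetry(fn, attempts = 4) {
  const delays = backoffDelays(attempts);
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      if (i === attempts - 1) throw err;
      await new Promise((resolve) => setTimeout(resolve, delays[i]));
    }
  }
}

console.log(backoffDelays(4)); // [ 500, 1000, 2000, 4000 ]
```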


5️⃣ Troubleshooting Checklist

  • Status Code – Confirm it is 409 (or provider‑specific duplicate code).
  • Error Payload – Does it contain the existing record ID? If not, add a *search* step first.
  • Idempotency – Are you sending a static idempotency key? Duplicate errors may be false positives.
  • Parallelism – Are multiple workflow executions creating the same record concurrently? Consider a **Mutex** pattern (e.g., Redis lock).
  • Connector Settings – Some n8n nodes have a built‑in “Create or Update” toggle – enable it if available.
  • Logging – Verify that duplicate IDs are persisted for later analysis.
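The Mutex idea from the checklist can be sketched with a TTL‑based lock. This in‑memory `Map` mimics Redis's `SET key value NX PX ttl` pattern but only protects a single process – for concurrent workflow executions you need a genuinely shared store such as Redis:

```javascript
// Naive TTL lock, standing in for a Redis SET ... NX PX lock.
// Only works within one process – use a real distributed lock in production.
const locks = new Map();

function acquireLock(key, ttlMs = 5000) {
  const now = Date.now();
  const expiresAt = locks.get(key);
  if (expiresAt !== undefined && expiresAt > now) {
    return false; // another execution holds the lock
  }
  locks.set(key, now + ttlMs); // acquire (or take over an expired lock)
  return true;
}

function releaseLock(key) {
  locks.delete(key);
}

console.log(acquireLock('create:email:a@x.com')); // true  – lock acquired
console.log(acquireLock('create:email:a@x.com')); // false – already held
releaseLock('create:email:a@x.com');
```

Wrapping the create call in acquire/release means two parallel executions can no longer race each other into the same insert, which removes one whole class of 409s.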

6️⃣ Real‑World Production Tips (EEFA)

  1. Never skip silently – always log a duplicate skip; silent failures corrupt downstream analytics.
  2. Design for idempotency – use natural keys or UUIDs so repeated calls are safe.
  3. Transactional safety – if duplicate detection occurs inside a multi‑step transaction (e.g., create record then upload attachment), wrap the whole sequence in a **Try / Catch** pattern using an **Error Trigger** node.
  4. Compliance – for GDPR‑related records, ensure that a duplicate skip does not violate “right to be forgotten” – audit the log and purge if needed.

Conclusion

Duplicate‑record errors are a predictable part of API‑driven automation. By enabling Continue On Fail, flagging 409 responses with a tiny Function node, and routing the result through a clear Skip, Upsert, or Merge strategy, you keep n8n workflows resilient and auditable. Logging every duplicate ensures visibility, while idempotent design and proper rate‑limiting protect production stability. Apply the generic blueprint above to any API, adapt the small code snippets to your connector, and your workflows will gracefully survive duplicate rejections without silent data loss.
