Exactly-Once Execution in n8n

A step‑by‑step guide to achieving exactly‑once execution in n8n


Who this is for: Engineers building finance‑ or inventory‑critical automations in n8n that must avoid duplicate records. We cover this in detail in the Production‑Grade n8n Architecture.

In production, duplicate records typically appear when a webhook retry fires after a brief network glitch: the sender never received your 200 response, so it delivers the same payload again.


Quick Diagnosis

  1. Persist a deterministic deduplication key (e.g., sourceId:timestamp) in a durable store (PostgreSQL, Redis, or n8n Data Store).
  2. Check‑before‑process – an If/Else node queries the store; if the key exists → skip, else → process.
  3. Claim the key atomically (INSERT … ON CONFLICT or Redis SETNX) so concurrent runs cannot insert the same key.
  4. Mark success only after the downstream action succeeds (e.g., after a Stripe charge).
  5. Route failures to a dead‑letter queue to avoid silent data loss.

Implement the pattern below and you’ll get exactly‑once semantics even if n8n restarts, retries, or scales horizontally.
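In miniature, the five steps above look like this in JavaScript (an in‑memory `Map` stands in for the durable store, and `process`/`deadLetter` are hypothetical callbacks — this is a sketch of the pattern, not production code):

```javascript
// Minimal sketch of the claim-then-process pattern.
// A Map stands in for the durable store (Postgres/Redis); in production
// the claim must be atomic in the store itself, not in process memory.
const store = new Map();

function claimKey(dedupKey) {
  // Atomic "insert if absent" — mirrors INSERT … ON CONFLICT / SETNX.
  if (store.has(dedupKey)) return false;
  store.set(dedupKey, { status: 'claimed' });
  return true;
}

async function handleEvent(event, process, deadLetter) {
  const dedupKey = `${event.sourceId}:${event.timestamp}`;
  if (!claimKey(dedupKey)) return 'skipped';       // duplicate delivery
  try {
    await process(event);                          // business logic
    store.get(dedupKey).status = 'completed';      // mark success last
    return 'completed';
  } catch (err) {
    store.get(dedupKey).status = 'failed';
    await deadLetter(event, err);                  // no silent data loss
    return 'failed';
  }
}
```

A retried delivery of the same event hits the `claimKey` check and is skipped instead of processed twice.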


1. Exactly‑Once vs. At‑Least‑Once in n8n


| Model | Guarantees | n8n Default | When it Breaks |
|---|---|---|---|
| At‑Least‑Once | Every event processed ≥ 1 time; duplicates possible on retries. | All built‑in triggers (Webhook, Cron, Polling) auto‑retry on failure. | Network hiccup, timeout, or node error → retry → duplicate payload. |
| Exactly‑Once | Each logical event processed once and only once. | Requires explicit idempotency handling; n8n does not provide it out of the box. | Missing dedup key, non‑transactional side‑effects, or race conditions. |

Note: Exactly‑once is a promise you enforce with external state. n8n’s internal queue is volatile, so you need a durable store. Most teams first notice at‑least‑once behaviour on the first retry.


2. Choosing a Durable Deduplication Store

| Store | Pros | Cons | Ideal Use‑Case |
|---|---|---|---|
| PostgreSQL (or any RDBMS) | Strong ACID guarantees; `SELECT … FOR UPDATE` available. | Adds a DB round‑trip of latency. | Pipelines that already use a relational DB. |
| Redis (`SETNX` / Lua) | In‑memory speed; atomic claim. | Data loss if persistence (AOF) isn’t enabled. | High‑throughput webhook ingestion. |
| n8n Data Store (v0.210+) | No external dependency; UI‑driven. | Limited payload size; no native transactions. | Low‑volume internal automations. |

Implementation tip: Build a *composite key* from immutable source fields, e.g., order:12345:2024-09-01T12:00:00Z. Store a simple flag (processed = true) and optionally a timestamp for audit.
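For example, a Code node could derive the key from immutable source fields (the field names here are illustrative):

```javascript
// Build a deterministic dedup key from immutable source fields.
// The same logical event must always yield the same string.
function buildDedupKey(event) {
  // Use the order id plus the event's own creation time — never Date.now(),
  // which would make every retry look like a new event.
  return `order:${event.orderId}:${event.createdAt}`;
}

// Example: buildDedupKey({ orderId: 12345, createdAt: '2024-09-01T12:00:00Z' })
// → 'order:12345:2024-09-01T12:00:00Z'
```

Because the key depends only on fields the source system sends, a webhook retry reproduces the identical key and collides with the stored one.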


3. Blueprint: Exactly‑Once Sub‑Workflow


Purpose: A reusable sub‑workflow (ExactOnceTemplate) that you can embed via the *Execute Workflow* node.

3.1. Node Flow Overview

| Step | Node | Role |
|---|---|---|
| 1 | Webhook Trigger | Receives raw payload. |
| 2 | Set → `dedupKey` | Computes a deterministic key. |
| 3 | Claim Key | Atomically inserts the key (Postgres or Redis). |
| 4 | If/Else | Skips if key already exists. |
| 5 | Business Logic | Your core processing (e.g., create invoice). |
| 6 | Mark Completed | Updates the store to `completed`. |
| 7 | Dead‑Letter | Captures failures for later replay. |

3.2. PostgreSQL – Table Definition

*Create the dedup table once (run outside the workflow).*

CREATE TABLE n8n_dedup (
  dedup_key   TEXT PRIMARY KEY,
  created_at  TIMESTAMP DEFAULT NOW(),
  external_id TEXT,                              -- filled in once the downstream call succeeds
  status      TEXT NOT NULL DEFAULT 'claimed'    -- claimed | completed | failed
);

3.3. PostgreSQL – Claim the Key

*This query runs in the **Claim Key** node. It attempts to insert the key; if it already exists, nothing is inserted.*

INSERT INTO n8n_dedup (dedup_key)
VALUES ('{{ $json.dedupKey }}')
ON CONFLICT (dedup_key) DO NOTHING
RETURNING *;

Result > 0 rows → key was newly claimed → continue.
Result = 0 rows → key already present → skip processing.

3.4. Redis – Atomic Claim via SETNX

*If you prefer Redis, use an HTTP request to a proxy that runs SETNX. The response tells you whether you own the key.*

POST https://my-redis.example.com/setnx
Content-Type: application/json

{
  "key": "{{ $json.dedupKey }}",
  "value": "claimed",
  "expire": 86400   // 1‑day TTL
}

– **Response `1`** → key acquired (with a 1‑day TTL), proceed.
– **Response `0`** → key exists, skip.
*Tip:* `SETNX` followed by a separate `EXPIRE` is not atomic — a crash between the two leaves a key with no TTL. Wrap both in a Lua script, or use the single command `SET key value NX EX 86400`, which is atomic.
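To see why the claim and the TTL must happen in one step, here is a small JavaScript model of the semantics (an in‑memory stand‑in for Redis; `now` is injected so expiry can be exercised):

```javascript
// Models Redis `SET key value NX EX ttl`: claim and expiry in one atomic step.
const redisLike = new Map(); // key -> { value, expiresAt }

function claimWithTtl(key, value, ttlSeconds, now = Date.now()) {
  const entry = redisLike.get(key);
  // An expired entry counts as absent, just like a lapsed Redis TTL.
  if (entry && entry.expiresAt > now) return 0; // key held elsewhere -> skip
  redisLike.set(key, { value, expiresAt: now + ttlSeconds * 1000 });
  return 1; // key acquired -> proceed
}
```

In real Redis both the claim and the TTL are applied server‑side in a single command, which is what keeps the operation safe when several n8n workers race for the same key.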

3.5. If/Else – Decide Whether to Process

*The condition checks the length of the claim node’s output.*

{
  "value1": "={{ $items(\"Claim Key\").length }}",
  "operation": "greaterThan",
  "value2": 0
}

– **True** → key claimed → take the **Process** branch.
– **False** → duplicate → take the **Skip** branch.

3.6. Business Logic Example – Stripe Charge

*Only runs when the key is newly claimed.*

{
  "url": "https://api.stripe.com/v1/charges",
  "method": "POST",
  "headerAuth": {
    "name": "Authorization",
    "value": "Bearer {{ $env.STRIPE_SECRET_KEY }}"
  },
  "bodyParametersUi": {
    "parameter": [
      { "name": "amount", "value": "={{$json.amount * 100}}" },
      { "name": "currency", "value": "usd" },
      { "name": "source", "value": "={{$json.token}}" },
      { "name": "description", "value": "Order {{ $json.orderId }}" }
    ]
  },
  "options": {
    "jsonContentType": true
  },
  "headerParametersUi": {
    "parameter": [
      { "name": "Idempotency-Key", "value": "={{ $json.dedupKey }}" }
    ]
  }
}

The Stripe Idempotency-Key mirrors our dedup key, providing a second line of defense.

3.7. Mark Completed – Update Store

*Runs after a successful Stripe response.*

UPDATE n8n_dedup
SET status = 'completed',
    external_id = '{{ $node["Stripe Charge"].json.id }}'
WHERE dedup_key = '{{ $json.dedupKey }}';

If the update fails, the charge has already happened and nothing is rolled back automatically; route the error to your error workflow so the row can be repaired or the event replayed.

3.8. Dead‑Letter Queue – Capture Failures

*If the Stripe call throws, forward the original payload and error details to a DLQ endpoint.*

{
  "url": "https://my-dlq.example.com/record",
  "method": "POST",
  "jsonParameters": true,
  "bodyParametersUi": {
    "parameter": [
      { "name": "payload", "value": "={{$json}}" },
      { "name": "error",   "value": "={{$node[\"Stripe Charge\"].error}}" }
    ]
  }
}

In practice, failed payloads sit in a dead‑letter queue for a few hours before an engineer replays them.


4. Transactional Wrapper (Two‑Phase Commit)

When the downstream API is not idempotent, wrap the whole sequence in a DB transaction.

| Phase | Action |
|---|---|
| 1️⃣ Reserve | Insert dedup key with status `claimed`. |
| 2️⃣ Execute | Call external API (e.g., Stripe). |
| 3️⃣ Commit | Update row to `completed` and store external ID. |
| 4️⃣ Rollback (on failure) | Set status to `failed`; optional compensation logic. |

**If your downstream service can’t guarantee idempotency, you need the extra safety net of a two‑phase commit.**
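Put together, the four phases look roughly like this (the `db` and `api` objects are hypothetical stand‑ins for your Postgres nodes and the external service):

```javascript
// Two-phase-commit sketch: reserve -> execute -> commit, with rollback on error.
async function runTwoPhase(db, api, dedupKey, payload) {
  // Phase 1: reserve — returns false if the key is already completed.
  if (!(await db.reserve(dedupKey))) return 'skipped';
  try {
    // Phase 2: execute the external side-effect.
    const externalId = await api.execute(payload);
    // Phase 3: commit — record the external ID and mark completed.
    await db.commit(dedupKey, externalId);
    return 'completed';
  } catch (err) {
    // Phase 4: rollback — mark failed so a later retry can re-reserve.
    await db.markFailed(dedupKey);
    return 'failed';
  }
}
```

A failed run leaves the key in `failed`, so the next delivery can re‑reserve it; a completed run leaves it in `completed`, so duplicates are skipped.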

**Postgres example (two Execute Query nodes):**

A single query cannot pause mid‑transaction while n8n calls an external API, so split the work into a reserve query before the call and a finalize query after it.

Reserve the key (before the Stripe node):

BEGIN;
INSERT INTO n8n_dedup (dedup_key, status)
VALUES ('{{ $json.dedupKey }}', 'claimed')
ON CONFLICT (dedup_key) DO UPDATE
  SET status = 'claimed'
  WHERE n8n_dedup.status <> 'completed';
COMMIT;

Finalize (after the Stripe node succeeds):

UPDATE n8n_dedup
SET status = 'completed', external_id = '{{ $node["Stripe Charge"].json.id }}'
WHERE dedup_key = '{{ $json.dedupKey }}';

If the external call fails between the two queries, your error workflow should set the row’s status to failed (or delete it) so later retries aren’t blocked.


5. Idempotent Design Checklist

| Step | Item | Reason |
|---|---|---|
| 1 | Deterministic dedup key (source‑ID + timestamp) | Guarantees identical events map to the same key. |
| 2 | Atomic claim (`INSERT … ON CONFLICT` or `SETNX`) | Prevents two workers from processing the same event. |
| 3 | Transaction around side‑effects | Ensures “all‑or‑nothing” state. |
| 4 | Idempotent downstream APIs (support `Idempotency-Key`) | Even a stray duplicate is ignored by the API. |
| 5 | TTL on dedup entries (24‑48 h) | Cleans up storage while still protecting against retries. |
| 6 | Dead‑letter queue | Enables manual replay without data loss. |
| 7 | Monitoring & alerts (duplicate‑skip counter) | Detects unexpected high duplicate rates. |
| 8 | Race‑condition testing (parallel webhook calls) | Validates atomic claim logic. |
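Item 8 is easy to automate. The sketch below runs concurrent claims for the same key against an in‑memory claim function and asserts that exactly one wins; in a real test you would POST to your webhook URL in parallel instead:

```javascript
// Race-condition smoke test: N concurrent deliveries of the same event
// must produce exactly one successful claim.
const seen = new Set();

function atomicClaim(dedupKey) {
  // Stand-in for INSERT … ON CONFLICT / SETNX. JavaScript's single thread
  // makes this atomic here; a real store must guarantee it server-side.
  if (seen.has(dedupKey)) return false;
  seen.add(dedupKey);
  return true;
}

async function simulateParallelDeliveries(dedupKey, n) {
  const results = await Promise.all(
    Array.from({ length: n }, () =>
      Promise.resolve().then(() => atomicClaim(dedupKey)))
  );
  return results.filter(Boolean).length; // how many deliveries "won"
}
```

If more than one delivery wins, the claim step is not atomic and the dedup store needs fixing before the workflow goes to production.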

Copy‑paste this table into your internal docs for quick reference.


6. Real‑World Pitfalls & Mitigations

| Pitfall | Symptom | Fix |
|---|---|---|
| Non‑deterministic key (e.g., random UUID) | Every run looks new → duplicates. | Derive the key from immutable source fields. |
| Missing TTL | Dedup table grows indefinitely, slowing queries. | Set an expiration (`expire` in Redis, or a scheduled cleanup job in Postgres). |
| Side‑effect before claim | API call succeeds, claim fails → duplicate external action. | Claim first, then call the external service. |
| Concurrent workers without locking | Two workers insert the same key → both succeed. | Use a DB‑level atomic `INSERT … ON CONFLICT` or Redis `SETNX`. |
| Ignoring API idempotency | External service creates duplicate records. | Pass the same `Idempotency-Key` header to the API. |
| Error path leaves claim as “claimed” | Subsequent retries are blocked forever. | On error, set status to `failed` or delete the key after a back‑off. |

A quick alert on duplicate skips usually catches mis‑configurations before they affect customers.


7. Monitoring & Alerting Blueprint

7.1. Prometheus Alert for Duplicate Skips

- alert: N8N_DuplicateSkipped
  expr: sum(rate(n8n_workflow_executions_total{status="skipped",reason="duplicate"}[5m])) > 5
  for: 2m
  labels:
    severity: warning
  annotations:
    summary: "High rate of duplicate order skips in n8n"
    description: "Duplicate-skip rate has exceeded 5/s for 2 minutes. Verify dedup store health."

7.2. Grafana Panels

– **Completed vs. Skipped Rate**: rate(n8n_workflow_executions_total{status="completed"}[1m]) vs. rate(...{status="skipped"}).
– **Error Spike**: rate(n8n_workflow_executions_total{status="error"}[5m]).

7.3. Slack Notification

Configure a webhook that posts on the N8N_DuplicateSkipped alert to the **#n8n-reliability** channel.


Conclusion

By persisting a deterministic deduplication key, claiming it atomically, and wrapping every side‑effect in a transaction, you turn any n8n workflow from “at‑least‑once” into exactly‑once. The pattern works with PostgreSQL, Redis, or n8n’s built‑in Data Store, scales horizontally, and survives restarts—delivering the data‑consistency guarantees required by finance, inventory, and compliance‑critical automations. Implement it once, and your workflows will reliably process each logical event once and only once.
