n8n race conditions with parallel executions – how to prevent duplicate processing

Step‑by‑Step Guide: Solving n8n Race Conditions in Parallel Executions

Who this is for: n8n workflow engineers who run multi‑branch pipelines in production and need reliable strategies to avoid data loss, duplicate writes, or nondeterministic results. We cover this in detail in the n8n Production Failure Patterns Guide.


Fix Checklist

1. Insert a distributed mutex (Redis lock, PostgreSQL advisory lock, or file‑system lock) before any shared write.
2. Use Execute Workflow with Run Once + Execute After to serialize critical sections when a lock is overkill.
3. Turn on Run Mode → Execute Once for the offending node (available from n8n v1.0).
4. If the lock cannot be obtained after 3 retries, abort the run and raise an alert.

Quick Diagnosis: Is Your Workflow Suffering from a Race Condition?


Symptom → typical cause:

  • Duplicate rows after a batch import – parallel branches inserting the same payload.
  • Intermittent “resource busy” API errors (409/423) – concurrent POST/PUT calls to the same endpoint.
  • Missing or partially updated files in S3 – parallel Write Binary File nodes targeting the same key.
  • Random order of webhook‑triggered actions – a webhook fires again before the previous run finishes.

First‑step check: Look for parallel Execute Workflow nodes, overlapping HTTP Request calls, or branches that write to the same external identifier.


Understanding n8n’s Parallel Execution Model

Summary: n8n creates an independent execution context for each branch after a split. By default there is no limit on node‑level concurrency, and multiple workflow runs can overlap globally.

  • Node‑level concurrency – unlimited by default (each item runs in its own context), which increases contention on shared resources.
  • Workflow‑level concurrency – global (multiple runs can overlap), so two runs can hit the same resource simultaneously.
  • Execute Once (node) – Run Mode → Execute Once; guarantees a single instance of that node across all runs (single‑worker only).
  • Execute After (node) – the Execute After field serialises downstream nodes, turning parallel branches into a chain.
  • Concurrency limit (workflow) – Settings → Execution → Max Concurrent Runs; caps total parallel runs, useful for low‑throughput pipelines.
  • Distributed mutex – external (Redis, DB advisory lock, etc.); provides fine‑grained, cluster‑wide mutual exclusion.

Adding a Distributed Mutex (Redis Example)

Purpose: Ensure only one n8n worker can modify a given resource at a time, even in a horizontally‑scaled deployment.

2.1 Prerequisites

  • A reachable Redis instance – e.g. docker run -p 6379:6379 redis:7-alpine
  • n8n v1.0+ (Node.js runtime) – built in
  • The ioredis library inside the n8n container – add to your Dockerfile: RUN npm install ioredis

2.2 Acquire Lock – Function Node

// Acquire a Redis lock with a 30‑second TTL
const Redis = require('ioredis');
const redis = new Redis({ host: 'redis', port: 6379 });

const lockKey   = `n8n:lock:${$json.resourceId}`;
const lockTTL   = 30000; // ms
const lockValue = `${$executionId}-${Date.now()}`;

// NX = set only if not exists, PX = expiry in ms
const acquired = await redis.set(lockKey, lockValue, 'PX', lockTTL, 'NX');

if (!acquired) {
  throw new Error('Could not acquire lock – another run is processing this resource');
}

return [{ json: { lockKey, lockValue } }];

Note: Never rely on in‑memory variables for locking when you have more than one n8n worker. A distributed lock survives restarts and guarantees mutual exclusion across the cluster.
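
To see why, here is a minimal sketch of an in‑process lock (the names are illustrative, not an n8n API): a Map held in one worker's memory blocks only runs inside that same Node.js process, so a second worker would happily "acquire" the same key.

```javascript
// In-memory lock: each Node.js process has its OWN copy of this Map,
// so this only serializes runs within a single n8n worker.
const localLocks = new Map();

function acquireLocal(key) {
  if (localLocks.has(key)) return false; // blocked - but only in this process
  localLocks.set(key, Date.now());
  return true;
}

function releaseLocal(key) {
  localLocks.delete(key);
}
```

A second worker running the same code has its own empty Map, so both workers would enter the critical section at once; this is exactly the gap the Redis lock closes.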

2.3 Critical Section – HTTP Request Node

{
  "url": "https://api.example.com/update",
  "method": "POST",
  "jsonParameters": true,
  "options": {
    "bodyContent": "={{ $json.payload }}",
    "headers": {
      "Content-Type": "application/json"
    }
  }
}

What it does: Sends the payload that required exclusive access. The node runs only after the lock has been successfully acquired.

2.4 Release Lock – Function Node

const Redis = require('ioredis');
const redis = new Redis({ host: 'redis', port: 6379 });

// Compare-and-delete: release the lock only if this run still owns it,
// so a run whose lock already expired cannot delete another worker's lock.
const script = 'if redis.call("get", KEYS[1]) == ARGV[1] then return redis.call("del", KEYS[1]) else return 0 end';
const released = await redis.eval(script, 1, $json.lockKey, $json.lockValue);

return [{ json: { released: released === 1 } }];

Why it matters: The lock must be cleared whether the critical section succeeds or fails, so place this node on both the success and the error path of the workflow (a **finally**‑style branch). Otherwise a failed run holds the lock until its TTL expires.

2.5 Minimal Workflow Skeleton

{
  "nodes": [
    { "name": "Acquire Lock",   "type": "Function",   "position": [250,300] },
    { "name": "Critical Section","type": "HTTP Request","position": [500,300] },
    { "name": "Release Lock",   "type": "Function",   "position": [750,300] }
  ],
  "connections": {
    "Acquire Lock": { "main": [[{ "node": "Critical Section", "type": "main", "index": 0 }]] },
    "Critical Section": { "main": [[{ "node": "Release Lock", "type": "main", "index": 0 }]] }
  }
}

Checklist for a robust mutex

  • [ ] Redis reachable from **all** n8n instances.
  • [ ] Lock key includes a **resource‑specific identifier** (`resourceId`, `orderId`, …).
  • [ ] TTL **longer** than the longest expected critical section (so the lock cannot expire mid‑operation), but still finite, so a crashed worker cannot dead‑lock the resource.
  • [ ] Failure to acquire the lock triggers **exponential back‑off** retries or aborts with an alert.
  • [ ] Lock is **always released** (use a separate “Release Lock” node or a `finally` path).
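
The exponential back‑off item above can be sketched as a small helper. The base delay, cap, and retry count below are illustrative defaults, not n8n settings; `tryAcquire` stands in for the Redis SET NX call from section 2.2.

```javascript
// Exponential back-off with full jitter: random delay in [0, base * 2^(attempt-1)),
// capped at capMs, spreads out competing workers instead of letting them retry in lockstep.
function backoffDelay(attempt, baseMs = 250, capMs = 5000) {
  const exp = Math.min(capMs, baseMs * 2 ** (attempt - 1));
  return Math.floor(Math.random() * exp);
}

// Abort after maxRetries failed attempts, as in step 4 of the Fix Checklist.
async function acquireWithRetry(tryAcquire, maxRetries = 3) {
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    if (await tryAcquire()) return true;
    await new Promise((resolve) => setTimeout(resolve, backoffDelay(attempt)));
  }
  throw new Error(`Could not acquire lock after ${maxRetries} retries`);
}
```

Throwing on exhaustion is deliberate: it fails the n8n execution, which your Error Trigger can turn into the alert from step 4.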

Leveraging n8n’s Built‑In “Execute Once” & “Execute After”

When to use: The race condition involves only a single node and you run a **single‑worker** n8n instance.

3.1 Enable “Execute Once”

  1. Open the node that writes to the shared resource.
  2. In the **Settings** panel, toggle **Run Mode → Execute Once**.
  3. Optionally set **Maximum Concurrent Executions** to 1 (default when the toggle is on).

Note: This lock lives in n8n’s internal SQLite DB. In a multi‑worker environment it does not provide cluster‑wide safety; pair it with a distributed lock if you scale out.

3.2 Serialize Branches with “Execute After”

  • Branch A → Write DB: default settings – runs immediately.
  • Branch B → Write DB: Execute After → Branch A → Write DB – waits for Branch A to finish before starting.

When to use: Small pipelines where the overhead of an external lock is unnecessary and you control the order of operations.


Detecting Race Conditions in Production

Goal: Turn lock activity into observable metrics and alerts.

4.1 Structured Log Example (JSON)

{
  "timestamp": "2026-01-09T12:34:56.789Z",
  "workflowId": 42,
  "executionId": "5c1a3f9b-8e2d-4a5c-9b6e-7c2d1f9e2a1b",
  "node": "Critical Section",
  "resourceId": "order-12345",
  "event": "lock_acquired",
  "lockKey": "n8n:lock:order-12345"
}
  • Log lock_acquired and lock_released events.
  • Correlate with DB duplicate‑key errors or API 409 responses to pinpoint contention windows.
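
A small helper that a Function node could call for both events. The field names follow the JSON example above; the helper name and the extra "lock_failed" event type are illustrative additions.

```javascript
// Build one structured lock-event log line as stringified JSON.
function lockEvent(event, { workflowId, executionId, node, resourceId }) {
  return JSON.stringify({
    timestamp: new Date().toISOString(),
    workflowId,
    executionId,
    node,
    resourceId,
    event, // "lock_acquired" | "lock_released" | "lock_failed"
    lockKey: `n8n:lock:${resourceId}`,
  });
}

console.log(lockEvent('lock_acquired', {
  workflowId: 42,
  executionId: 'abc-123',
  node: 'Critical Section',
  resourceId: 'order-12345',
}));
```

Keeping the lockKey format identical to the Acquire Lock node makes it trivial to grep the full lifecycle of one resource across workers.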

4.2 Alerting Checklist

  • `n8n_lock_acquire_failures_total` (Prometheus) – alert on more than 5 failures per minute.
  • Duplicate‑key DB error rate – alert on a spike of more than 20 % over baseline.
  • Webhook retry count – alert on more than 3 retries within 1 minute.

When an alert fires, include the executionId and resourceId to speed up triage.

Real‑World Troubleshooting Scenarios

Scenario 1 – Duplicate Webhook Calls Overlap

1. Compute a deterministic ID (`hash(event.payload)`) in a Set node before the DB insert.
2. Turn on Execute Once for the DB Insert node (single‑worker case).
3. Catch `ER_DUP_ENTRY` with an Error Trigger and ignore it – the operation is idempotent.
4. For multi‑worker safety, wrap the insert in a Redis lock keyed by the deterministic ID.

Scenario 2 – Parallel File Uploads to S3 Overwrite Each Other

1. Generate a UUID for the filename: {{ $uuid() }}.
2. Add Execute After from the first Write Binary File node to the second, forcing sequential upload.
3. (Optional) Perform a `HEAD` request on the target key before upload; if the key exists, rename or abort.

Best‑Practice Checklist – Preventing Race Conditions in n8n

  • [ ] Identify every **shared external resource** (DB tables, files, APIs) before adding parallel branches.
  • [ ] Use **deterministic identifiers** (hashes, UUIDs) to make writes idempotent.
  • [ ] Apply **Execute Once** only when you have a **single n8n worker**.
  • [ ] Deploy a **distributed lock** (Redis, PostgreSQL advisory lock, DynamoDB conditional write) for multi‑worker clusters.
  • [ ] Set a realistic **max concurrent runs** limit on the workflow.
  • [ ] Emit **structured JSON logs** for lock acquisition and release.
  • [ ] Create **Prometheus/Grafana** alerts on lock contention and duplicate‑key errors.
  • [ ] Document the **lock‑key naming convention** in the workflow description for future maintainers.
  • [ ] Review workflow revisions regularly for newly introduced parallel branches that touch existing resources.

Conclusion

Race conditions in n8n arise whenever parallel branches compete for the same external resource. By cataloguing shared resources, serialising critical nodes with the built‑in options, or, when scaling out, introducing a distributed mutex, you can guarantee deterministic outcomes without sacrificing concurrency. Coupled with structured logging and alerting, these patterns give you production‑grade visibility and safety, ensuring that your n8n automations remain reliable as they grow.
