n8n execution time increasing over time – memory leak or database bloat?

A step-by-step guide to diagnosing why n8n execution time increases over time


Who this is for: n8n maintainers and DevOps engineers who see a single workflow start fast and then drift to > 30 seconds per run. We cover this in detail in the n8n Performance Degradation & Stability Issues Guide.


Quick Diagnosis

  1. Open the workflow’s Execution History → Duration chart.
  2. Enable “Clear Execution Data” after each run (or set a retention limit).
  3. Insert a Set node to truncate large arrays/objects before downstream processing.

If the upward slope stops, you’ve likely solved the problem.


1. Execution‑Time Anatomy


| Phase | Typical Time (ms) |
| --- | --- |
| Trigger – receives event (HTTP, Cron, Webhook) | 5–20 |
| Node Processing – runs internal logic | 10–200 per node |
| Data Serialization – stores intermediate JSON | 5–30 |
| Execution Persistence – writes full execution record | 10–100 |
| Cleanup – garbage collection & DB cleanup | 5–15 |

EEFA note – n8n persists every execution in the DB. An ever‑growing execution_entity table makes each INSERT/SELECT slower, eventually inflating total duration.


2. Real‑World Root Causes

| Root Cause | Symptom |
| --- | --- |
| Unbounded execution history | Duration chart climbs, DB size grows |
| Large payloads / untrimmed data | “Set” or “Function” nodes log huge objects |
| Memory leaks in custom functions | Execution time spikes after dozens of runs |
| Inefficient looping in “Function” nodes | 2–3× slowdown after 10 runs |
| External API rate-limiting / retries | Random spikes, then a steady rise |
| Improper use of “Wait” / “Delay” nodes | A fixed 5 s delay becomes 30 s due to backlog |

3. Investigation Checklist


3.1 Capture Baseline

Export the last 10 executions (CSV) and note duration_ms.
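The exported CSV can be summarized with a short Node.js helper. This is a sketch; the column name `duration_ms` is an assumption and should match your export:

```javascript
// Sketch: compute the average duration and the first-to-last drift
// from an exported executions CSV (assumes a header row with a
// "duration_ms" column — adjust the name to your export).
function summarizeDurations(csv) {
  const [header, ...rows] = csv.trim().split('\n');
  const col = header.split(',').indexOf('duration_ms');
  const durations = rows.map(r => Number(r.split(',')[col]));
  const avg = durations.reduce((a, b) => a + b, 0) / durations.length;
  // Positive drift means the most recent run was slower than the oldest
  const drift = durations[durations.length - 1] - durations[0];
  return { avg, drift };
}
```

A steadily positive drift across exports confirms the upward slope before you start tuning.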

3.2 Inspect DB Size

-- PostgreSQL example
SELECT pg_total_relation_size('public.execution_entity') AS size_bytes;

Tip – > 500 MB on modest traffic signals history bloat.

3.3 Identify Bloated Nodes

Open the CSV, sort by node_execution_time, and flag the top offenders.

3.4 Log Payload Size

Add a temporary Code node (“Run Once for Each Item” mode) before the heavy node:

// Log the serialized size of the current item in bytes
const size = Buffer.byteLength(JSON.stringify($json), 'utf8');
console.log(`Payload size: ${size} bytes`);
return $json;

If size exceeds ~50 KB, consider pruning.
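To act on that threshold, a Function/Code node can drop oversized fields before they reach heavy nodes. A minimal sketch, assuming a 50 KB per-field budget (the helper name and limit are illustrative):

```javascript
// Sketch: drop any field whose serialized size exceeds a byte budget
// (50 KB here is an assumed limit — tune it to your workflow).
const LIMIT = 50 * 1024;

function pruneLargeFields(obj, limit = LIMIT) {
  const out = {};
  for (const [key, value] of Object.entries(obj)) {
    // Keep only fields that fit within the budget
    if (Buffer.byteLength(JSON.stringify(value), 'utf8') <= limit) {
      out[key] = value;
    }
  }
  return out;
}
```

In a Function node you would then `return items.map(item => ({ json: pruneLargeFields(item.json) }));` so only trimmed items flow downstream.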

3.5 Monitor V8 GC Pauses (Docker)

docker run -e NODE_OPTIONS="--trace-gc" n8nio/n8n

Look for Mark‑Compact pauses > 50 ms.

3.6 Review External Calls

Time the HTTP Request node’s runtime in the execution log and compare it against the API’s expected latency.

Checklist Summary

| Done | Action |
| --- | --- |
| Baseline captured | Export recent execution durations |
| DB size checked | Keep < 200 MB for low-traffic setups |
| Bloated node identified | Pinpoint the node(s) with the highest runtime |
| Payload size logged | Verify JSON < 50 KB before heavy processing |
| GC pauses inspected | No V8 pauses > 30 ms |
| API latency verified | No more than 2× the expected response time |

4. Proven Fixes & Configuration Tweaks

4.1 Trim Execution History

# n8n environment variables
EXECUTIONS_DATA_PRUNE=true            # enable automatic pruning
EXECUTIONS_DATA_MAX_AGE=168           # keep 7 days of data (value is in hours)
EXECUTIONS_DATA_PRUNE_MAX_COUNT=1000  # cap total stored executions

Why it works – Limits the size of the execution_entity table, keeping INSERTs and index scans fast.
EEFA warning – Overly aggressive pruning (e.g. EXECUTIONS_DATA_MAX_AGE=0) effectively disables history; you lose audit trails.

4.2 Prune Data with “Set” / “Remove” Nodes

{
  "type": "n8n-nodes-base.set",
  "parameters": {
    "keepOnlySet": true,
    "values": {
      "string": [
        { "name": "id", "value": "={{ $json.id }}" },
        { "name": "status", "value": "={{ $json.status }}" },
        { "name": "timestamp", "value": "={{ $json.timestamp }}" }
      ]
    }
  }
}

Only essential fields survive downstream, cutting serialization time by up to 70 %.

4.3 Refactor Heavy “Function” Logic

Before (O(N²) nested loops)

// Deduplicate by value: scan every earlier item for a match
const result = [];
for (let i = 0; i < items.length; i++) {
  let duplicate = false;
  for (let j = 0; j < i; j++) {
    if (items[i].json.value === items[j].json.value) {
      duplicate = true;
      break;
    }
  }
  if (!duplicate) result.push(items[i]);
}
return result;

After (Hash‑Map O(N))

// Deduplicate by value in a single pass, keeping the first occurrence
const seen = new Map();
for (const item of items) {
  if (!seen.has(item.json.value)) seen.set(item.json.value, item);
}
return Array.from(seen.values());

Typical reduction: 200 ms → 15 ms on 10 k rows.

4.4 Batch External Calls

Step 1 – Aggregate IDs (Function node)

// Collect IDs for a single API call
const ids = items.map(i => i.json.id);
return [{ json: { ids } }];

Step 2 – HTTP Request node – send ids as a JSON array in the body.

Benefit – Cuts network round‑trips, reducing overall latency by 30‑50 %.
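When the API caps how many IDs a single call accepts, the aggregation step can split IDs into batches first. A sketch, assuming a limit of 100 IDs per request (check your provider’s documentation for the real limit):

```javascript
// Sketch: split collected IDs into fixed-size batches
// (100 per request is an assumed API limit).
function chunk(ids, size = 100) {
  const batches = [];
  for (let i = 0; i < ids.length; i += size) {
    batches.push(ids.slice(i, i + size));
  }
  return batches;
}
```

You then issue one HTTP Request per batch instead of one per item.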

4.5 Enable Node‑Level Caching (n8n ≥ 1.2)

In the node’s **Advanced** tab:

  • Cache: true
  • Cache TTL (seconds): 300

Use for idempotent API calls where data changes rarely.


5. Ongoing Monitoring & Alerting

| Metric | Target | Alert Threshold |
| --- | --- | --- |
| Avg execution duration (last 100 runs) | ≤ 500 ms | > 1 s → Slack/Email |
| DB size | ≤ 200 MB | > 300 MB → PagerDuty |
| GC pause | ≤ 30 ms | > 50 ms → Ops dashboard |
| External API latency | ≤ 2 s | > 5 s → Incident |
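The thresholds above can be evaluated programmatically before wiring up alerting. A sketch with illustrative field names (the numbers mirror the alert column):

```javascript
// Sketch: flag any metric exceeding the table's alert thresholds.
// Field names (avgDurationMs, dbSizeMb, ...) are illustrative.
function checkAlerts(m) {
  const alerts = [];
  if (m.avgDurationMs > 1000) alerts.push('execution-duration');
  if (m.dbSizeMb > 300) alerts.push('db-size');
  if (m.gcPauseMs > 50) alerts.push('gc-pause');
  if (m.apiLatencyMs > 5000) alerts.push('api-latency');
  return alerts;
}
```

An empty result means every metric is within bounds; any entry names the channel to notify.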

Prometheus Export (n8n built‑in metrics; enable the endpoint with N8N_METRICS=true)

scrape_configs:
  - job_name: 'n8n'
    static_configs:
      - targets: ['localhost:5678']
    metrics_path: '/metrics'

Key metrics: n8n_execution_duration_seconds, n8n_db_size_bytes.


6. Production Checklist

  • Set EXECUTIONS_DATA_SAVE_MAX_DAYS / MAX_EXECUTIONS.
  • Add “Set” nodes to prune payloads > 50 KB.
  • Refactor custom JavaScript to linear/sub‑linear complexity.
  • Batch external API calls wherever possible.
  • Enable node‑level caching for read‑only lookups.
  • Schedule a nightly VACUUM ANALYZE (PostgreSQL); reserve VACUUM FULL for maintenance windows, since it locks the table.
  • Monitor the four key metrics above and configure alerts.

Conclusion

Execution‑time bloat in n8n is almost always traceable to unbounded history, oversized payloads, or inefficient custom logic. By capping execution retention, pruning data early, refactoring heavy loops, batching external calls, and enabling node‑level caching, you halt the upward drift and keep automations snappy at scale. Continuous monitoring of duration, DB size, GC pauses, and API latency ensures the fix remains effective in production.
