
Step-by-Step Guide: Optimize Redis for n8n Performance
Who this is for: n8n developers and ops engineers who run production‑grade workflows backed by Redis and need low latency, high reliability, and predictable scaling. For a complete overview of Redis usage, errors, performance tuning, and scaling in n8n, check out our detailed guide on Redis for n8n Workflows.
Quick Diagnosis
- `maxmemory`: ≤ 75 % of RAM, policy `allkeys-lru`.
- Connection pool: `max = CPUs × 2`, TCP keep-alive = 30 s.
- Network: bind to a private IP, `tcp-keepalive 60`, `maxclients` ≥ workers × `pool.max`.
- Monitor: latency < 1 ms, evicted keys low, pool pending ≤ 5.
- Fail-fast: ioredis timeout ≤ 2 s plus fallback logic.
1. Choose the Right maxmemory Policy
Why it matters: Redis evicts keys when it reaches its memory ceiling. Picking the correct eviction policy prevents unexpected data loss while keeping the cache hot for n8n’s transient state.
| Policy | When to Use | Behaviour |
|---|---|---|
| `noeviction` | Small, static datasets | Commands that exceed memory return an error. |
| `allkeys-lru` | Large, frequently changing key sets | Evicts least-recently-used keys, regardless of TTL. |
| `volatile-lru` | You store only expiring keys | Evicts LRU keys that have a TTL. |
| `allkeys-random` | Simple, low-latency requirement | Random key eviction. |
| `volatile-ttl` | TTL-driven eviction needed | Evicts keys with the shortest remaining TTL first. |
Apply the policy
```
# /etc/redis/redis.conf
maxmemory 4gb                 # ≤ 75 % of total RAM
maxmemory-policy allkeys-lru  # Best balance for n8n workloads
```
Note: On Docker/K8s, match the container's `--memory` limit to `maxmemory`. A mismatch triggers OOM kills that surface as "Redis connection reset by peer" in n8n logs.
2. Connection Pooling for n8n’s Redis Nodes
Why it matters: Each n8n node creates a single Redis socket by default, which becomes a bottleneck under high concurrency. A connection pool spreads the load across multiple sockets and reuses them efficiently. A Dockerized Redis instance is the starting point: complete the setup using the tutorial Docker Redis setup for n8n, then proceed with the steps below.
2.1. Install the pool wrapper
npm install ioredis-connection-pool
2.2. Create a reusable pool
```javascript
// pool-init.js – load once per n8n worker
const RedisPool = require('ioredis-connection-pool');
const os = require('os');

const pool = new RedisPool({
  max: Math.max(2, os.cpus().length * 2), // CPUs × 2 connections
  min: 1,
  idleTimeoutMillis: 30000,
  acquireTimeoutMillis: 5000,
  config: {
    host: 'redis.internal',
    port: 6379,
    password: process.env.REDIS_PASSWORD,
    keepAlive: 30000, // 30 s TCP keep-alive
  },
});

module.exports = pool;
```
2.3. Use the pool in a Function node
```javascript
// example-usage.js – inside an n8n Function node
const pool = require('./pool-init');

async function storeState(workflowId, data) {
  const key = `workflow:${workflowId}:state`;
  await pool.set(key, JSON.stringify(data));
}

async function loadState(workflowId) {
  const key = `workflow:${workflowId}:state`;
  const raw = await pool.get(key);
  return raw ? JSON.parse(raw) : null;
}
```
2.4. Expose pool metrics for monitoring
```javascript
// metrics.js – optional Prometheus exporter
const pool = require('./pool-init');

function exportPoolStats() {
  const { active, idle, pending } = pool.stats();
  // Push to Prometheus or your preferred monitoring system
  return { active, idle, pending };
}

module.exports = { exportPoolStats };
```
Pool‑specific tuning table
| Parameter | Recommended Value | Reason |
|---|---|---|
| max (connections) | Math.max(2, os.cpus().length * 2) | Matches CPU parallelism, avoids socket exhaustion. |
| min | 1 | Guarantees at least one ready connection for cold starts. |
| idleTimeoutMillis | 30000 (30 s) | Frees idle sockets without hurting burst traffic. |
| acquireTimeoutMillis | 5000 (5 s) | Fails fast if pool is saturated, letting n8n retry. |
Note: In production, watch `pool.stats().pending`. Persistent pending requests mean you need a larger `max` or additional n8n workers.
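That check can be automated. The helper below is a sketch (name is hypothetical) that classifies `pool.stats()` output against the pending thresholds used in the monitoring checklist later in this guide:

```javascript
// pool-health.js – hypothetical helper; assumes pool.stats() returns
// { active, idle, pending } as in the metrics example above.
function poolPendingLevel(stats) {
  // Thresholds: pending > 5 is a warning, > 20 is critical
  if (stats.pending > 20) return 'critical';
  if (stats.pending > 5) return 'warning';
  return 'ok';
}

// Example wiring: poll every 15 s and log when the pool is saturated
// setInterval(() => console.log(poolPendingLevel(pool.stats())), 15000);
```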
3. Network & System‑Level Tweaks
Why it matters: Redis is latency‑sensitive. Optimizing the network stack and OS parameters reduces round‑trip time and prevents connection stalls during bursty workflow execution.
3.1. Bind Redis to a private interface
```
# redis.conf
bind 10.0.2.15      # Private VPC IP, not 0.0.0.0
protected-mode yes
```
Result: Isolates traffic, lowers latency, and prevents accidental exposure.
3.2. Enable TCP keep‑alive
```
tcp-keepalive 60    # seconds
```
Keeps long‑running n8n jobs from silently dropping connections.
3.3. Raise maxclients
```
maxclients 2000     # Example for 200 workers × 4 connections + safety margin
```
Note: Compute `workers × pool.max` + safety margin. For 200 workers with a pool of 4, 200 × 4 + 500 = 1300, so `maxclients 2000` provides headroom.
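The sizing formula above can be captured in a tiny helper (hypothetical, for illustration only):

```javascript
// maxclients-calc.js – hypothetical helper for the rule:
// required connections = workers × pool.max + safety margin.
function requiredConnections(workers, poolMax, safetyMargin = 500) {
  return workers * poolMax + safetyMargin;
}

// 200 workers × pool of 4 + 500 margin = 1300; the guide rounds up to 2000
console.log(`maxclients should be at least ${requiredConnections(200, 4)}`);
```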
3.4. Linux kernel tuning (optional but powerful)
```
# /etc/sysctl.conf
net.core.somaxconn = 65535
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 15
```
Apply with `sysctl -p`. These settings raise the listen backlog and recycle TIME_WAIT sockets faster, which matters under bursty workflow execution. Before continuing, make sure your Redis instance is secure. I've explained the process in detail in Securing Redis for n8n. Once done, return here to proceed.
4. Monitoring & Alerting Checklist
| Metric | Warning Threshold | Critical Threshold |
|---|---|---|
| used_memory (% of maxmemory) | > 80 % | > 90 % |
| evicted_keys per minute | > 10 | > 50 |
| Pool pending count | > 5 | > 20 |
| Avg latency (95th pct) | > 1 ms | > 5 ms |
Quick checks
- Latency: `redis-cli --latency` – keep the 95th percentile < 1 ms.
- Evictions: `INFO stats` → `evicted_keys`. Spikes indicate `maxmemory` is too low.
- Pool health: expose `pool.stats()` to Prometheus (`active`, `idle`, `pending`).
- CPU/Memory: `top`/`htop` – Redis should stay < 70 % CPU on a dedicated core.
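The `used_memory` row of the table can be turned into an automated check. The helper below is a sketch (names are hypothetical); the inputs would come from `INFO memory` and your configured `maxmemory`:

```javascript
// memory-alert.js – hypothetical check for used_memory as a % of maxmemory.
function memoryAlertLevel(usedMemoryBytes, maxmemoryBytes) {
  const pct = (usedMemoryBytes / maxmemoryBytes) * 100;
  if (pct > 90) return 'critical'; // Matches the table: > 90 % is critical
  if (pct > 80) return 'warning';  // > 80 % is a warning
  return 'ok';
}
```

The same shape works for the other rows (`evicted_keys` per minute, pool pending, latency); only the thresholds differ.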
5. Production‑Grade “Fail‑Fast” Strategy
Why it matters: When Redis stalls, downstream n8n workflows can back‑up and exhaust job queues. A short timeout with graceful fallback keeps the system responsive.
```javascript
// fail-fast-example.js – inside an n8n Function node.
// Note: ioredis enforces per-command timeouts via the client option
// `commandTimeout` (e.g. commandTimeout: 2000 in the pool config),
// not as a per-call argument.
try {
  const result = await pool.get(key); // Rejects after the configured 2 s timeout
  // Process result...
} catch (err) {
  // Immediate fallback – e.g. persist state to a temporary file or skip the step
  return [{ json: { fallback: true, error: err.message } }];
}
```
Note: A 2 s command timeout forces ioredis to abort stalled calls, preventing a cascade of blocked executions that could fill the n8n job queue.
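If your client or pool wrapper does not expose a per-command timeout, the same fail-fast behaviour can be sketched with a generic `Promise.race` wrapper (a hypothetical helper, not an ioredis API):

```javascript
// with-timeout.js – hypothetical fail-fast wrapper for any async Redis call.
function withTimeout(promise, ms) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(
      () => reject(new Error(`Redis call timed out after ${ms} ms`)),
      ms
    );
  });
  // Whichever settles first wins; always clear the timer to avoid leaks
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// Usage inside a Function node:
// const raw = await withTimeout(pool.get(key), 2000);
```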
6. Recap (One‑Click Checklist)
- `maxmemory`: ≤ 75 % of RAM, policy `allkeys-lru`.
- Connection pool: `max = CPUs × 2`, keep-alive = 30 s.
- Network: bind to a private IP, `tcp-keepalive 60`, `maxclients` ≥ workers × `pool.max`.
- Monitor: latency < 1 ms, `evicted_keys` low, `pool.pending` ≤ 5.
- Fail-fast: set an ioredis timeout ≤ 2 s and implement fallback logic.
Next Steps
If you’ve stabilized performance, consider moving to Redis Cluster for horizontal scaling, or enable Redis Streams to decouple long‑running workflow steps. See the sibling pages linked above for step‑by‑step guides.



