
Who this is for: Ops engineers, DevOps teams, and n8n power users who run n8n in queue mode (with a Redis or PostgreSQL‑backed queue) and need a fast, repeatable way to unblock a hung execution. We cover this in detail in the n8n Queue Mode Errors Guide.
Quick Diagnosis
Problem: An n8n workflow stays in “pending” (or “queued”) forever, never advancing to “running” or “finished”.
One‑line fix: Find the execution’s id, reset its status with the n8n CLI (or a guarded DB update), and clear any orphaned queue lock.
1. Why Jobs Get Stuck in Queue Mode
| Root cause | UI / Log symptom | Typical trigger |
|---|---|---|
| Orphaned Redis lock | `Job is still in pending state` repeats | Worker crash or container restart |
| Corrupt DB row | `status = 'pending'` but `startedAt` is `null` | Manual DB edit or incomplete migration |
| Infinite loop in a node | No progress after first node runs | Custom JavaScript that never resolves |
| Resource throttling | Growing queue backlog, new jobs never dequeued | CPU/memory limits, OOM kills |
| Version mismatch (core vs. queue plugin) | `Cannot read property 'queue' of undefined` | Upgrade without rebuilding the plugin |
Understanding the root cause tells you whether to reset the execution, clear the queue, or fix the workflow code.
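As a rough illustration, the symptom-to-cause mapping in the table above can be folded into a tiny triage helper. This is a sketch only: the log patterns and cause labels come from the table, not from n8n's actual log format, so adjust the regexes to your real log lines.

```python
import re

# Symptom patterns from the table above, mapped to likely root causes.
# Illustrative only -- tune these to the log lines your deployment emits.
SYMPTOM_PATTERNS = [
    (r"Job is still in pending state", "orphaned Redis lock"),
    (r"Cannot read property 'queue' of undefined", "version mismatch"),
]

def triage(log_line: str) -> str:
    """Return a likely root cause for a log line, or 'unknown'."""
    for pattern, cause in SYMPTOM_PATTERNS:
        if re.search(pattern, log_line):
            return cause
    return "unknown"
```

Feeding your worker logs through a helper like this can tell you at a glance whether you are looking at a lock problem, a version problem, or something that needs manual inspection.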
2. Locate the Stuck Job
2.1 UI (read‑only)
- Open Execution List → filter Status = Pending.
- Hover the row and copy the Execution ID (e.g., `62f9c7b1-3e8a-4d2b-9f7a-9c4e5a2d1b3c`).
Warning: Do not click “Retry” on a stuck job; it will re‑queue the same corrupted entry.
2.2 CLI
# List the most recent pending executions (requires n8n ≥ 1.0)
n8n execution:list --status=pending --limit=20
The output shows id, workflowId, and createdAt.
2.3 Direct DB Query (PostgreSQL example)
SELECT id, workflow_id, status, created_at
FROM execution_entity
WHERE status = 'pending'
ORDER BY created_at ASC
LIMIT 20;
Warning: Run the query on a read‑only replica first to verify rows before making changes.
3. Safely Release / Reset the Stuck Job
3.1 Preferred: CLI “reset” command
# Replace <EXECUTION_ID> with the ID you located
n8n execution:reset <EXECUTION_ID>
What it does
- Sets `status = 'failed'` in the DB.
- Deletes the Redis lock key `n8n:queue:lock:<id>`.
- Emits an `execution.finished` event so downstream triggers (e.g., Webhooks) are notified.
Note: The CLI respects the `executionTimeout` setting, preventing accidental infinite loops.
3.2 Manual DB reset (when CLI unavailable)
Step 1 – Mark the execution as failed
BEGIN;
UPDATE execution_entity
SET status = 'failed',
    stopped_at = NOW(),
    error = 'Manually reset after queue stall'
WHERE id = '<EXECUTION_ID>';
COMMIT;
Step 2 – Remove the Redis lock (if using Redis)
# Run this in a Redis client
redis-cli DEL "n8n:queue:lock:<EXECUTION_ID>"
Warning: Wrap the `UPDATE` in a transaction and take a DB snapshot before committing on production.
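If you find yourself performing this manual reset more than once, a small helper can derive both the SQL statement and the Redis lock key from a single execution ID, so the two stores can never drift out of sync. This is a minimal sketch: the table name, column names, and lock‑key prefix follow the steps above, and you should verify them against your n8n version before use.

```python
def build_reset_commands(execution_id: str) -> dict:
    """Return the SQL statement and redis-cli command for resetting one execution.

    Nothing is executed here; the caller reviews and runs the commands.
    Execution IDs should come from your own CLI/DB listing -- if IDs are
    untrusted input, use a parameterized query instead of string formatting.
    """
    sql = (
        "UPDATE execution_entity "
        "SET status = 'failed', stopped_at = NOW(), "
        "error = 'Manually reset after queue stall' "
        f"WHERE id = '{execution_id}';"
    )
    # Lock-key layout as used in section 3.2
    lock_key = f"n8n:queue:lock:{execution_id}"
    return {"sql": sql, "redis_del": f'redis-cli DEL "{lock_key}"'}
```

Generating both commands from one ID removes the most common manual-reset mistake: updating the DB row for one execution while deleting the lock key of another.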
3.3 Flush the entire queue (last resort)
Redis backend
# List all queue keys and delete them
redis-cli KEYS "n8n:queue:*" | xargs -r redis-cli DEL
PostgreSQL‑based queue (pg‑boss)
DELETE FROM pgboss.job WHERE state = 'created';
Warning: Flushing discards **all** pending jobs. Use only during a maintenance window when the backlog can be safely dropped.
4. Verify the Fix & Prevent Recurrence
4.1 Post‑reset sanity check
n8n execution:get <EXECUTION_ID>
You should see status = "failed" and a populated stoppedAt.
| Check | Expected result |
|---|---|
| UI shows the execution as Failed | ✅ |
| No Redis lock key exists | ✅ (`redis-cli EXISTS n8n:queue:lock:<id>` → 0) |
| Queue length returns to normal | ✅ (`redis-cli LLEN n8n:queue:jobs`) |
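The sanity checks above are easy to automate. A sketch that validates one execution record as returned by the `execution:get` step (the field names `status` and `stoppedAt` are assumed from the expected output described above):

```python
def reset_verified(record: dict) -> bool:
    """True if the execution was fully reset: failed status plus a stop timestamp."""
    return record.get("status") == "failed" and record.get("stoppedAt") is not None
```

A check like this catches the half-reset case, where the status flipped but `stoppedAt` was never populated, before you move on to the next stuck execution.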
4.2 Preventive checklist
- Enable `executionTimeout` in `config.yml` (e.g., `executionTimeout: 300`).
- Monitor queue depth via Prometheus (`n8n_queue_length`).
- Alert when pending executions exceed 5 minutes.
- Upgrade n8n and the queue‑mode plugin together (matching major versions).
- Back up the `execution_entity` table nightly.
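The 5‑minute alerting rule in the checklist can be expressed as a small check, runnable from cron or wired into your monitoring. This is a sketch: how you fetch the pending executions (SQL query, CLI output, API) is up to your setup, so the function just takes `(id, created_at)` pairs.

```python
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(minutes=5)  # threshold from the checklist above

def stale_pending(executions, now=None):
    """Return IDs of pending executions older than the threshold.

    `executions` is an iterable of (id, created_at) pairs, where
    created_at is a timezone-aware datetime.
    """
    now = now or datetime.now(timezone.utc)
    return [eid for eid, created in executions if now - created > STALE_AFTER]
```

Anything this returns is a candidate for the reset procedure in section 3; an empty list on every run is the steady state you want.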
5. Real‑World Example: Kubernetes Deployment
Scenario: An n8n pod crashed during a large CSV import, leaving 12 jobs pending.
5.1 Identify the stuck executions
kubectl logs -l app=n8n -c n8n --since=2h | grep "pending"
# Example output: Execution IDs 9b1c2d3e-...
5.2 Reset each execution from inside the pod
# Replace <POD_NAME> and <EXECUTION_ID> accordingly
kubectl exec -it $(kubectl get pod -l app=n8n -o jsonpath="{.items[0].metadata.name}") -- \
n8n execution:reset 9b1c2d3e-...
5.3 Roll out a clean queue worker
kubectl rollout restart deployment/n8n
Note: The rollout forces a fresh pod that starts with a clean Redis state (the PVC is ephemeral), eliminating any lingering lock keys.
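With a dozen stuck executions, as in this scenario, building the per‑ID reset commands in a loop avoids copy‑paste mistakes. A sketch that only constructs the command strings (the pod lookup and `execution:reset` invocation follow section 5.2; nothing is executed here):

```python
def reset_commands(pod_name: str, execution_ids):
    """Build one `kubectl exec` reset command per stuck execution ID.

    Returns plain strings for review; pipe them to a shell only after
    checking the IDs against your own execution listing.
    """
    return [
        f"kubectl exec -it {pod_name} -- n8n execution:reset {eid}"
        for eid in execution_ids
    ]
```

Printing the list before running it gives you a natural review step, which matters when a typo'd ID would fail (or reset) the wrong execution.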
Conclusion
A “job stuck” condition is almost always a state‑sync mismatch between the execution table and the queue backend. The reliable remedy is to reset the execution (CLI preferred), clear any orphaned lock, and then verify that the queue depth returns to normal. Reinforce the fix with timeout settings, queue‑depth monitoring, and version‑aligned upgrades to keep n8n running smoothly in production.



