
Who this is for: DevOps engineers and backend developers operating n8n in production with SQLite‑backed queue mode. We cover this in detail in the n8n Queue Mode Errors Guide.
Quick Diagnosis
Enable WAL mode and set a busy timeout of ≥ 5000 ms in the SQLite connection string, then restart the n8n worker. WAL lets readers proceed while a write is in progress, and the busy timeout makes the worker wait and retry instead of failing immediately when it hits the exclusive lock.
```shell
# Example env var (Docker or .env)
DB_SQLITE_DATABASE=/home/node/.n8n/database.sqlite?busy_timeout=5000&journal_mode=WAL
```
What’s Happening?
| Symptom | Typical Log Message | Immediate Cause |
|---|---|---|
| Queue stops processing | Error: SQLITE_BUSY: database is locked | SQLite holds an exclusive lock while a long‑running workflow writes to the same file. |
| High CPU / I/O spikes | sqlite3: database is locked repeated | Contention between the queue consumer and workflow executions that also use queue.sqlite. |
Fast Fix (≈30 s): Add the connection‑string flags above and restart n8n.
1. How SQLite Locking Interacts with n8n Queue Mode
n8n’s queue mode stores pending jobs in queue.sqlite. SQLite lock levels:
| Lock Type | When it’s taken | Effect on other connections |
|---|---|---|
| SHARED | Simple SELECT | Allows other reads |
| RESERVED | First write in a transaction (or BEGIN IMMEDIATE) | Blocks other writers, permits reads |
| EXCLUSIVE | COMMIT of a write transaction | Blocks all reads & writes until the file is flushed |
A heavy workflow write can upgrade a lock to EXCLUSIVE, causing the queue worker to receive SQLITE_BUSY.
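The contention can be seen outside n8n entirely. The sketch below (a throwaway database path, for illustration only) holds a write lock from one sqlite3 session while a second session tries to write, first without and then with a busy timeout:

```shell
# Throwaway demo DB; the path is an assumption for illustration only.
DB=/tmp/lockdemo.sqlite
rm -f "$DB" "$DB-wal" "$DB-shm"
sqlite3 "$DB" "CREATE TABLE jobs (id INTEGER PRIMARY KEY, payload TEXT);"

# Writer: BEGIN IMMEDIATE takes the write lock up front, holds it ~3 s, then commits.
{ echo "BEGIN IMMEDIATE; INSERT INTO jobs (payload) VALUES ('held');"; sleep 3; echo "COMMIT;"; } | sqlite3 "$DB" &
sleep 1  # give the writer time to acquire the lock

# No busy_timeout: the second writer fails immediately with "database is locked".
sqlite3 "$DB" "INSERT INTO jobs (payload) VALUES ('no-timeout');" || echo "SQLITE_BUSY, as the queue worker sees it"

# With a busy timeout, the same write waits for the lock and succeeds.
sqlite3 "$DB" "PRAGMA busy_timeout=5000; INSERT INTO jobs (payload) VALUES ('with-timeout');"
wait
```

The second insert is exactly the position the queue worker is in; the third shows why the busy-timeout fix works.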
2. Reproducing the Locking Symptom
2.1 Start n8n in queue mode (Docker)
```shell
docker run -d \
  -e DB_TYPE=sqlite \
  -e DB_SQLITE_DATABASE=/home/node/.n8n/database.sqlite?busy_timeout=1000 \
  -e QUEUE_MODE=active \
  -p 5678:5678 \
  n8nio/n8n
```
2.2 Create a long‑running workflow that bulk‑inserts 10 000 rows
```json
{
  "nodes": [
    {
      "parameters": {
        "operation": "executeQuery",
        "query": "BEGIN; INSERT INTO big_table (value) SELECT randomblob(100) FROM generate_series(1,10000); COMMIT;"
      },
      "name": "Heavy Write",
      "type": "n8n-nodes-base.sqlite",
      "typeVersion": 1,
      "position": [400, 300]
    }
  ],
  "connections": {}
}
```
Note: `generate_series` is a SQLite extension (built into the sqlite3 CLI but not every embedded build); on stock builds a recursive CTE produces the same rows.
2.3 Trigger the workflow while another job is queued
Observe the log output:
```
2024-10-01 12:34:56.789 [Queue] ERROR: SQLITE_BUSY: database is locked
```
If the lock persists > 5 s, the queue worker stalls.
3. Immediate Mitigation – Adjust SQLite Pragmas
3.1 Set a Busy Timeout
```shell
# Add to your .env or Docker env
DB_SQLITE_DATABASE=/home/node/.n8n/database.sqlite?busy_timeout=5000
```
3.2 Switch to Write‑Ahead Logging (WAL)
```shell
# Append to the same connection string
DB_SQLITE_DATABASE=/home/node/.n8n/database.sqlite?busy_timeout=5000&journal_mode=WAL
```
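One detail worth knowing: `journal_mode=WAL` is persistent. It is written into the database file itself, so it only needs to be set once and every later connection inherits it. A quick demonstration against a throwaway file (the path is an assumption):

```shell
DB=/tmp/waldemo.sqlite
rm -f "$DB" "$DB-wal" "$DB-shm"
# The first connection switches the file to WAL...
sqlite3 "$DB" "PRAGMA journal_mode=WAL; CREATE TABLE t (x);"
# ...and every subsequent connection sees WAL without setting anything.
sqlite3 "$DB" "PRAGMA journal_mode;"
```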
3.3 Verify Pragmas at Runtime
```shell
sqlite3 /home/node/.n8n/database.sqlite "PRAGMA journal_mode;"
# Expected output:
# wal
```
Because journal_mode is stored in the database file, any connection reports it. busy_timeout, by contrast, is per-connection: a separate sqlite3 session always reports its own value (0 by default), so verify the timeout via the n8n connection string rather than the CLI.
Checklist: Immediate Fix
| Step | Description |
|---|---|
| 1 | Add busy_timeout ≥ 5000 ms to DB_SQLITE_DATABASE. |
| 2 | Add journal_mode=WAL to the same string. |
| 3 | Restart n8n (docker restart <container>). |
| 4 | Confirm journal_mode with the sqlite3 command. |
| 5 | Re‑run the failing workflow; the queue should resume. |
EEFA Note: In high‑throughput production, a 5 s timeout may still cause latency spikes. Monitor and increase if needed.
4. Long‑Term Solutions
4.1 Move Queue Storage to an External DB
Switch the queue backend to PostgreSQL or MySQL to eliminate file‑lock contention.
```yaml
# docker-compose.yml excerpt
environment:
  - DB_TYPE=postgresdb
  - DB_POSTGRESDB_HOST=postgres
  - DB_POSTGRESDB_PORT=5432
  - DB_POSTGRESDB_DATABASE=n8n
  - DB_POSTGRESDB_USER=n8n_user
  - DB_POSTGRESDB_PASSWORD=strongpassword
  - QUEUE_MODE=active
```
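The excerpt above assumes a reachable `postgres` host. A minimal sketch of the full compose file is shown below; the image tag, volume name, and password are illustrative assumptions (in practice, inject the password from a secret):

```yaml
services:
  postgres:
    image: postgres:16
    environment:
      - POSTGRES_DB=n8n
      - POSTGRES_USER=n8n_user
      - POSTGRES_PASSWORD=strongpassword
    volumes:
      - pgdata:/var/lib/postgresql/data
  n8n:
    image: n8nio/n8n
    depends_on:
      - postgres
    ports:
      - "5678:5678"
    environment:
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_USER=n8n_user
      - DB_POSTGRESDB_PASSWORD=strongpassword
      - QUEUE_MODE=active
volumes:
  pgdata:
```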
4.2 Partition Workflows that Perform Bulk Writes
| Approach | Pros | Cons |
|---|---|---|
| Separate n8n instance for heavy ETL | No lock contention with queue | Higher operational overhead |
| Offload to external script via Execute Command node | Minimal config change | Still shares the same DB unless the script uses a different file |
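For the Execute Command route, the bulk write can target a completely separate SQLite file so the queue DB is never locked. A sketch is below; the path and table name are assumptions, and the recursive CTE avoids depending on the `generate_series` extension:

```shell
#!/bin/sh
# Bulk-load into a *separate* SQLite file so the heavy write never touches
# n8n's own database. Path and table name are illustrative assumptions.
ETL_DB=/tmp/etl.sqlite
rm -f "$ETL_DB"
sqlite3 "$ETL_DB" <<'SQL'
CREATE TABLE IF NOT EXISTS big_table (value BLOB);
BEGIN;
WITH RECURSIVE cnt(x) AS (
  SELECT 1 UNION ALL SELECT x + 1 FROM cnt WHERE x < 10000
)
INSERT INTO big_table (value) SELECT randomblob(100) FROM cnt;
COMMIT;
SQL
sqlite3 "$ETL_DB" "SELECT COUNT(*) FROM big_table;"
```

Point an Execute Command node at this script; the long EXCLUSIVE lock then lands on /tmp/etl.sqlite instead of the queue database.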
4.3 Enable SQLite “Shared Cache” (Advanced)
```shell
DB_SQLITE_DATABASE=/home/node/.n8n/database.sqlite?busy_timeout=5000&journal_mode=WAL&cache=shared
```
EEFA Warning: Use only on local SSDs; shared cache can corrupt data on network filesystems (e.g., NFS).
5. Monitoring & Alerting – Prevent Future Stalls
| Metric | Collection Tool | Alert Threshold |
|---|---|---|
| sqlite_busy_timeout occurrences | Loki / Grafana logs | > 1 per minute |
| Queue processing latency | Prometheus (n8n_queue_job_processing_seconds) | > 30 s |
| DB file lock duration (via lsof) | Custom script: `watch -n 5 "lsof -p $(pgrep -f n8n) \| grep queue.sqlite"` | > 5 s |
Example Prometheus Exporter Snippet (Node.js)
```javascript
const client = require('prom-client');
const http = require('http');

const gauge = new client.Gauge({
  name: 'n8n_queue_job_processing_seconds',
  help: 'Time taken to fetch and process a queue job',
});

async function processJob(job) {
  const start = Date.now();
  await runJob(job); // your existing job handler
  gauge.set((Date.now() - start) / 1000);
}

// Expose the metric over HTTP so Prometheus can scrape it.
http.createServer(async (req, res) => {
  res.setHeader('Content-Type', client.register.contentType);
  res.end(await client.register.metrics());
}).listen(9464);
```
Add this exporter to your n8n Docker image and configure alerts in Grafana.
6. Frequently Asked Questions
- Q1. Will enabling WAL increase disk usage?
Yes. WAL creates a `-wal` file that can grow to the size of the main DB. Schedule a nightly `PRAGMA wal_checkpoint(TRUNCATE);` to shrink it.
- Q2. Does `busy_timeout` affect normal query performance?
Only when a lock is present; otherwise it is a no-op. Values < 1000 ms cause premature SQLITE_BUSY errors.
- Q3. Can I keep SQLite for the queue but move workflow data to Postgres?
Absolutely. Keep `DB_TYPE=sqlite` for the queue and use the **Postgres node** for workflow data. Ensure the queue DB path remains isolated.
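The nightly checkpoint from Q1 is a one-liner. Against a throwaway database (the path is an assumption) it looks like this:

```shell
DB=/tmp/checkpoint-demo.sqlite   # throwaway path; point cron at your real DB
rm -f "$DB" "$DB-wal" "$DB-shm"
sqlite3 "$DB" <<'SQL'
PRAGMA journal_mode=WAL;
CREATE TABLE t (x);
INSERT INTO t VALUES (1);
-- TRUNCATE both checkpoints the WAL and resets the -wal file to zero bytes
PRAGMA wal_checkpoint(TRUNCATE);
SQL
```

In production, a cron entry such as `0 3 * * * sqlite3 /home/node/.n8n/database.sqlite "PRAGMA wal_checkpoint(TRUNCATE);"` runs it nightly; schedule it at a low-traffic hour, since the checkpoint briefly needs the write lock.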
Conclusion
SQLite’s exclusive lock can stall n8n’s queue worker when a workflow performs heavy writes. By configuring a busy timeout and switching to WAL mode, you give SQLite the ability to retry and allow concurrent reads, instantly unblocking the queue. For sustained production stability, consider moving the queue to a true RDBMS, isolating bulk‑write workflows, or using shared cache on fast local storage. Implement the monitoring checklist to catch lock‑related spikes before they impact users, ensuring your n8n deployment remains responsive under load.



