
Step-by-Step Guide to Solving the n8n SQLite Disk Full Error
Who this is for: n8n operators running the default SQLite database on Linux (systemd or Docker) who need to diagnose, remediate, and prevent storage‑exhaustion failures. For a complete guide on managing SQLite in n8n, including errors, performance, and migration, check out our n8n SQLite pillar guide.
Quick Diagnosis
| Step | Action | Command |
|---|---|---|
| Detect | Compare DB size with free disk space | see *Detecting storage exhaustion* |
| Fix now | Remove executions older than 30 days | n8n execution delete --older-than 30d |
| Prevent | Enable automatic pruning & vacuum | add env vars (see *Retention policy*) |
| Verify | Restart n8n and run a manual VACUUM | sqlite3 $HOME/.n8n/database.sqlite "VACUUM;" |
1. Detecting storage exhaustion
1.1 Compare SQLite file size with partition free space
# Path to the SQLite file (default)
DB_PATH=$HOME/.n8n/database.sqlite
# Size of the DB in megabytes
DB_SIZE_MB=$(du -m "$DB_PATH" | cut -f1)
# Free space on the DB’s partition (megabytes)
FREE_SPACE_MB=$(df -m "$(dirname "$DB_PATH")" | tail -1 | awk '{print $4}')
# Report
echo "DB size: ${DB_SIZE_MB} MB"
echo "Free space: ${FREE_SPACE_MB} MB"
Tip – If FREE_SPACE_MB < DB_SIZE_MB × 0.2 (free space below 20 % of the database size), you’re approaching the failure point.
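The 20 % rule above can be wrapped in a small helper so scripts and cron jobs can act on it. This is a sketch; check_headroom is a hypothetical name, and both arguments are megabyte values such as the ones computed above.

```shell
# Sketch of the 20 % rule as a reusable helper.
# check_headroom is a hypothetical name; both arguments are in megabytes.
check_headroom() {
  local db_mb=$1 free_mb=$2
  # Approaching the failure point when free space < 20 % of the DB size
  local threshold=$(( db_mb / 5 ))
  if [ "$free_mb" -lt "$threshold" ]; then
    echo "WARN"
  else
    echo "OK"
  fi
}

check_headroom "$DB_SIZE_MB" "$FREE_SPACE_MB"
```

A cron job could call this and mail the operator only when it prints WARN.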
1.2 Watch the logs for the exact SQLite error
journalctl -u n8n -f | grep -i "SQLITE_FULL"
| Log snippet | Meaning | Immediate action |
|---|---|---|
| SQLITE_FULL: database or disk is full | Disk exhausted while writing execution data | Run cleanup (see §2) |
| SQLITE_BUSY: database is locked | Concurrency issue, often secondary to disk pressure | Verify disk space first |
| SQLITE_CORRUPT | DB corruption – may require restore from backup | Stop n8n, back up the DB, restore |
1.3 Health endpoint (if enabled)
curl -s http://localhost:5678/health | jq '.sqliteDiskFull'
A response of true confirms SQLite reports a full disk.
2. Immediate remediation – cleaning old execution data
2.1 Using the built‑in CLI prune command
# Delete executions older than 30 days (default retains 90 days)
n8n execution delete --older-than 30d
Pros: No direct DB access; respects n8n’s cascade deletions.
Cons: Can be slow on very large tables.
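When the prune runs unattended (from cron or a maintenance script), a thin wrapper that logs the outcome makes silent failures visible. This is a sketch: prune_executions is a hypothetical name, and it assumes the n8n binary is on PATH; adjust the log path to taste.

```shell
# Hypothetical wrapper around the CLI prune: logs success or failure
# with a timestamp. Assumes the n8n binary is on PATH.
prune_executions() {
  local days=${1:-30} log=${2:-/var/log/n8n-prune.log}
  if n8n execution delete --older-than "${days}d" >> "$log" 2>&1; then
    echo "$(date -Is) prune ok (older than ${days}d)" >> "$log"
  else
    echo "$(date -Is) prune FAILED (older than ${days}d)" >> "$log"
    return 1
  fi
}
```

The non-zero return on failure lets a calling script or cron mail trigger an alert instead of the prune quietly stopping.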
Learn recovery steps for a corrupt SQLite database and other storage-related issues in n8n.
2.2 Manual SQLite cleanup (for massive DBs)
⚠️ Warning – Always back up the DB before running raw DELETE statements.
Step 1 – Backup the database
cp "$DB_PATH" "$DB_PATH.backup_$(date +%F_%H%M%S)"
Step 2 – Delete old rows inside a transaction
sqlite3 "$DB_PATH" <<SQL
BEGIN TRANSACTION;
DELETE FROM execution_entity
WHERE finishedAt < datetime('now','-30 days');
DELETE FROM execution_data
WHERE executionId NOT IN (SELECT id FROM execution_entity);
COMMIT;
SQL
Step 3 – Reclaim space with VACUUM
sqlite3 "$DB_PATH" "VACUUM;"
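To confirm the VACUUM actually reclaimed space, measure the file before and after. This is a sketch: vacuum_and_report, size_mb, and delta_mb are hypothetical helper names, and it assumes sqlite3 is installed and DB_PATH points at the n8n SQLite file.

```shell
# Sketch: run VACUUM and report reclaimed space.
# vacuum_and_report, size_mb, delta_mb are hypothetical helpers;
# assumes sqlite3 is installed and DB_PATH points at the n8n SQLite file.
DB_PATH="${DB_PATH:-$HOME/.n8n/database.sqlite}"

# File size in megabytes (du prints "size<TAB>path")
size_mb() { du -m "$1" | cut -f1; }

# Difference between two megabyte values
delta_mb() { echo $(( $1 - $2 )); }

vacuum_and_report() {
  local before after
  before=$(size_mb "$DB_PATH")
  sqlite3 "$DB_PATH" "VACUUM;"
  after=$(size_mb "$DB_PATH")
  echo "VACUUM reclaimed $(delta_mb "$before" "$after") MB (${before} MB -> ${after} MB)"
}
```

If the reported delta is near zero after deleting many rows, double-check that the DELETE actually committed before blaming VACUUM.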
Safety checklist
- Stop the n8n service (systemctl stop n8n) or put it in read‑only mode.
- Verify the backup can be restored (sqlite3 backup.db ".dump").
- Run the DELETE inside a transaction (as shown).
- Run VACUUM after deletion.
- Restart n8n and confirm the error is gone.
Implement backup and restore strategies to protect against database corruption and storage problems in n8n.
3. Configuring retention policies to prevent recurrence
3.1 Environment variables
| Variable | Default | Recommended (low‑disk) | Description |
|---|---|---|---|
| EXECUTIONS_DATA_SAVE_MAX_DAYS | 90 | 30 | Max age of execution data (JSON) before automatic deletion. |
| EXECUTIONS_DATA_PRUNE | false | true | Enables background pruning job. |
| EXECUTIONS_DATA_PRUNE_MAX_DAYS | 90 | 30 | Age threshold used by the prune job. |
| DB_SQLITE_VACUUM_ON_STARTUP | false | true | Runs VACUUM on each restart to shrink the file. |
Add the recommended settings to your .env (or Docker environment: block) and reload n8n:
DB_SQLITE_VACUUM_ON_STARTUP=true
EXECUTIONS_DATA_SAVE_MAX_DAYS=30
EXECUTIONS_DATA_PRUNE=true
EXECUTIONS_DATA_PRUNE_MAX_DAYS=30
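If you apply these settings from a provisioning script rather than by hand, an idempotent setter avoids duplicate entries on repeat runs. This is a sketch: set_env_var is a hypothetical helper, written for GNU sed.

```shell
# Hypothetical helper: set KEY=VALUE in an env file, updating the line in
# place if the key already exists (idempotent on repeat runs). Uses GNU sed -i.
set_env_var() {
  local key=$1 value=$2 file=$3
  if grep -q "^${key}=" "$file" 2>/dev/null; then
    sed -i "s|^${key}=.*|${key}=${value}|" "$file"
  else
    echo "${key}=${value}" >> "$file"
  fi
}

# Example: apply the recommended low-disk settings
# set_env_var EXECUTIONS_DATA_PRUNE true .env
# set_env_var EXECUTIONS_DATA_PRUNE_MAX_DAYS 30 .env
```

Running the script twice leaves exactly one line per key, so the .env file never accumulates conflicting values.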
3.2 Automating pruning with cron
# Prune every day at 02:30 AM
30 2 * * * /usr/local/bin/n8n execution delete --older-than 30d >> /var/log/n8n-prune.log 2>&1
If you cannot enable DB_SQLITE_VACUUM_ON_STARTUP, add a nightly vacuum:
# Vacuum at 02:45 AM
45 2 * * * sqlite3 $HOME/.n8n/database.sqlite "VACUUM;" >> /var/log/n8n-vacuum.log 2>&1
4. Advanced options
4.1 Relocating the SQLite file
Mount a dedicated volume (e.g., an SSD) at $HOME/.n8n and symlink database.sqlite to the new location. This isolates DB growth from other system files.
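The move-and-symlink step can be sketched as a small function. relocate_db is a hypothetical name; stop n8n (systemctl stop n8n) before moving the live database file, and restart it afterwards.

```shell
# Sketch of the move-and-symlink step. relocate_db is a hypothetical name;
# stop n8n before moving the live database file so no writes are in flight.
relocate_db() {
  local src=$1 dest_dir=$2
  mkdir -p "$dest_dir"
  mv "$src" "$dest_dir/$(basename "$src")"
  # Leave a symlink behind so n8n keeps finding the file at its default path
  ln -s "$dest_dir/$(basename "$src")" "$src"
}

# Example: relocate_db "$HOME/.n8n/database.sqlite" /mnt/n8n-data
```

After restarting, verify n8n writes to the new volume (the symlink target grows, not the old partition).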
4.2 Switching to PostgreSQL
When workloads regularly approach storage limits, migrate to PostgreSQL (DB_TYPE=postgresdb). PostgreSQL handles vacuuming natively and removes the single‑file size ceiling. Learn how to optimize performance and prevent storage issues such as corruption and disk‑full errors in n8n SQLite.
⚠️ Warning – Migration requires a full export (n8n export:workflow) and re‑import; do not attempt in‑place conversion.
5. Proactive monitoring & backup
- Prometheus alert – trigger when free space on the DB partition falls below 2 GB.
- Hourly backup – dump the DB to an off‑site bucket:
sqlite3 $HOME/.n8n/database.sqlite ".backup '/backup/n8n-$(date +%F_%H%M%S).sqlite'"
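Hourly dumps accumulate quickly and can themselves fill the backup disk, so pair them with a rotation step. This is a sketch: prune_backups is a hypothetical helper that keeps only the newest N dumps, and it assumes backup filenames contain no spaces (the naming scheme above guarantees that).

```shell
# Hypothetical helper: keep only the newest N backups matching n8n-*.sqlite.
# ls -1t sorts newest first; everything past the first N is deleted.
# Assumes backup filenames contain no spaces.
prune_backups() {
  local dir=$1 keep=$2
  ls -1t "$dir"/n8n-*.sqlite 2>/dev/null | tail -n +$(( keep + 1 )) | xargs -r rm -f --
}

# Example: after each hourly backup, keep the latest 24 dumps
# prune_backups /backup 24
```

Run it right after the .backup command in the same cron entry so retention and backup never drift apart.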
Conclusion
The “disk full” error in n8n is a direct symptom of unchecked SQLite growth. By regularly comparing DB size with available disk space, pruning old executions via the CLI or a safe manual DELETE, and enforcing short retention periods with EXECUTIONS_DATA_* variables, you keep the database lean and avoid runtime failures. Enabling VACUUM on startup (or via cron) reclaims freed pages, ensuring the SQLite file stays compact. For long‑term scalability, consider moving the file to dedicated storage or migrating to PostgreSQL. Implement these steps now, and n8n will continue to run reliably in production.



