Who this is for: Production engineers who need to run n8n at scale and must replace the default SQLite store with a robust relational database. We cover this in detail in the n8n Performance & Scaling Guide.
Quick Diagnosis
Problem: n8n is using the bundled SQLite DB and fails under load.
Solution: Stop n8n, dump the SQLite file, create a fresh PostgreSQL (or MySQL) database, import the dump, and point n8n to the new DB via DB_TYPE and the appropriate DB_*_URL. Restart n8n and verify that all workflows load and execute.
1. Why Migrate? – Scaling‑Driven Trade‑offs
Concurrency & Size Limits
| Feature | SQLite (default) | PostgreSQL | MySQL |
|---|---|---|---|
| Concurrent writes | Single‑writer lock → bottleneck at roughly 100 RPS | MVCC, many concurrent writers | InnoDB row‑level locking, high concurrency |
| Practical size limit | ~2 GB | TB‑scale | TB‑scale |
Operations & Performance
| Feature | SQLite (default) | PostgreSQL | MySQL |
|---|---|---|---|
| Backup / Restore | File copy (downtime) | pg_dump / pg_restore (hot) | mysqldump / mysql (hot) |
| Replication | None | Built‑in streaming replication | Primary‑replica replication |
| Typical performance | OK for dev / low traffic | Faster queries, better indexing | Comparable to Postgres; excels on read‑heavy workloads |
| Production note | Not production‑ready for >10 concurrent workflows | Set max_connections above the expected worker count (default 100) | Tune innodb_buffer_pool_size to ≈70 % of RAM |
Warning: Switching databases is a breaking change – always test in a staging environment before touching production data.
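The ~2 GB practical ceiling mentioned above is easy to check before committing to a migration. A minimal sketch — the threshold is this guide's rule of thumb rather than an n8n constant, and the demo file stands in for `~/.n8n/n8n.sqlite`:

```shell
# sqlite_size_check FILE -- print the file size and whether it is approaching
# the ~2 GB practical SQLite ceiling discussed above (threshold is this guide's
# rule of thumb, not an n8n-enforced limit).
sqlite_size_check() {
  bytes=$(wc -c < "$1" | tr -d ' ')
  limit=$((2 * 1024 * 1024 * 1024))
  echo "size_bytes=$bytes"
  if [ "$bytes" -ge "$limit" ]; then
    echo "verdict=plan-migration"
  else
    echo "verdict=within-sqlite-limits"
  fi
}

# Demo on a scratch file; on a real install run it against ~/.n8n/n8n.sqlite.
printf 'demo' > /tmp/demo-size.file
sqlite_size_check /tmp/demo-size.file
# → size_bytes=4
# → verdict=within-sqlite-limits
```

If the verdict ever flips to `plan-migration`, treat it as a signal to start the checklist below rather than a hard failure point.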
2. Pre‑Migration Checklist
| Item | Description | How to Verify |
|---|---|---|
| Backup SQLite file | Preserve n8n.sqlite as the source of truth | `cp ~/.n8n/n8n.sqlite ~/.n8n/backups/n8n.sqlite.$(date +%F)` |
| Export environment | Capture the current .env values | `printenv \| grep N8N_ > ~/n8n_env_backup.txt` |
| Provision target DB | Create an empty PostgreSQL/MySQL database and user | `psql -c "\l"` or `mysql -e "SHOW DATABASES;"` |
| Check DB driver availability | n8n ships with the pg and mysql2 drivers; confirm they are installed | `npm ls pg mysql2` |
| Plan downtime window | Migration requires stopping n8n | Communicate the window to stakeholders |
| Create rollback plan | Keep the original SQLite file and a snapshot of the new DB | Document the revert steps (stop n8n, restore the SQLite file, restart) |
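The backup item from the checklist above can be wrapped in a small pre-flight helper that also verifies the copy byte-for-byte. A sketch, assuming this guide's default paths (adjust for your install):

```shell
# backup_sqlite SRC DEST_DIR -- copy the SQLite file with a date suffix and
# verify the backup byte-for-byte before declaring success.
backup_sqlite() {
  src="$1"
  dir="$2"
  mkdir -p "$dir"
  dest="$dir/$(basename "$src").$(date +%F)"
  cp "$src" "$dest"
  if cmp -s "$src" "$dest"; then
    echo "backup-ok: $dest"
  else
    echo "backup-MISMATCH: $dest" >&2
    return 1
  fi
}

# Demo on a scratch file; on a real install use:
#   backup_sqlite "$HOME/.n8n/n8n.sqlite" "$HOME/.n8n/backups"
printf 'pretend-sqlite-bytes' > /tmp/n8n.sqlite
backup_sqlite /tmp/n8n.sqlite /tmp/n8n-backups
```

The `cmp` step is cheap insurance: a truncated copy discovered during rollback is far more expensive than the extra read now.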
3. Step‑by‑Step Migration Procedure
3.1 Stop n8n Gracefully
```bash
# Systemd
sudo systemctl stop n8n

# Docker Compose
docker compose stop n8n
```
Warning: Do not use `kill -9`. A graceful stop flushes pending executions and prevents SQLite corruption.
3.2 Export SQLite Data (optional JSON audit)
```bash
sqlite3 ~/.n8n/n8n.sqlite \
  ".mode json" ".output n8n_export.json" \
  "SELECT * FROM WorkflowEntity;"
```
The JSON export is handy for audit trails; for large DBs prefer a binary dump (.dump).
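For the binary route, `.dump` emits a SQL script that replays the entire database. A sketch on a throwaway database (requires the `sqlite3` CLI; on a real install, point the second command at `~/.n8n/n8n.sqlite`):

```shell
# Build a tiny demo DB, then take a full logical dump the same way you would
# for n8n's real file. Table and row contents here are illustrative only.
rm -f /tmp/demo.sqlite
sqlite3 /tmp/demo.sqlite "CREATE TABLE WorkflowEntity (id INTEGER, name TEXT);
INSERT INTO WorkflowEntity VALUES (1, 'hello-workflow');"

# The actual dump step -- schema plus data, in replayable SQL form:
sqlite3 /tmp/demo.sqlite ".dump" > /tmp/demo_dump.sql

grep -c 'INSERT INTO' /tmp/demo_dump.sql   # sanity check: row data made it out
```

Unlike the JSON export, the `.dump` output can be replayed with `sqlite3 new.sqlite < dump.sql`, which makes it the safer artifact to keep alongside your file-level backup.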
3.3 Create Target Database
PostgreSQL – Create user & database
```bash
sudo -u postgres psql <<'SQL'
CREATE USER n8n_user WITH PASSWORD 'StrongP@ssw0rd!';
CREATE DATABASE n8n_db OWNER n8n_user;
GRANT ALL PRIVILEGES ON DATABASE n8n_db TO n8n_user;
SQL
```
MySQL – Create user & database
```bash
mysql -u root -p <<'SQL'
CREATE DATABASE n8n_db CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
CREATE USER 'n8n_user'@'%' IDENTIFIED BY 'StrongP@ssw0rd!';
GRANT ALL ON n8n_db.* TO 'n8n_user'@'%';
FLUSH PRIVILEGES;
SQL
```
3.4 Migrate Data
Option A – Direct migration with the official helper (recommended ≤ 10 k rows)
npm i -g n8n-migration-tool
```bash
# PostgreSQL target (note the URL-encoded password: @ → %40, ! → %21)
n8n-migration-tool \
  --source sqlite \
  --source-path ~/.n8n/n8n.sqlite \
  --target postgres \
  --target-url "postgresql://n8n_user:StrongP%40ssw0rd%21@db-host:5432/n8n_db"

# MySQL target
n8n-migration-tool \
  --source sqlite \
  --source-path ~/.n8n/n8n.sqlite \
  --target mysql \
  --target-url "mysql://n8n_user:StrongP%40ssw0rd%21@db-host:3306/n8n_db"
```
Tip: Run with `--dry-run` first to see row counts and verify there are no errors.
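Passwords containing `@` or `!` must be percent-encoded before they go into a connection URL, as in the target URLs above. A small POSIX-shell helper covering the handful of characters that most often break DB URLs (deliberately minimal, not a general-purpose encoder — extend the `sed` list for other symbols):

```shell
# Percent-encode the characters that most commonly break DB connection URLs.
# '%' must be encoded first so the other substitutions are not double-encoded.
urlencode_pw() {
  printf '%s' "$1" | sed -e 's/%/%25/g' -e 's/@/%40/g' -e 's/!/%21/g' \
                         -e 's/:/%3A/g' -e 's|/|%2F|g'
}

pw=$(urlencode_pw 'StrongP@ssw0rd!')
echo "postgresql://n8n_user:${pw}@db-host:5432/n8n_db"
# → postgresql://n8n_user:StrongP%40ssw0rd%21@db-host:5432/n8n_db
```

Running the password through a helper like this once, instead of encoding by hand, avoids the "Authentication failed" class of errors covered in the troubleshooting section.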
Option B – Manual CSV dump & import (for > 10 k rows)
- Export each table to CSV:

```bash
sqlite3 ~/.n8n/n8n.sqlite \
  ".mode csv" ".output workflow.csv" \
  "SELECT * FROM WorkflowEntity;"
```

  (Repeat for CredentialEntity, ExecutionEntity, etc.)

- Create the tables in the target DB. The simplest route is to start n8n once against the empty database and let its built-in migrations create the schema; if you have a schema.sql dump instead, load it directly:

```bash
# PostgreSQL
psql -U n8n_user -d n8n_db -f schema.sql

# MySQL
mysql -u n8n_user -p n8n_db < schema.sql
```

- Import the CSV files. For PostgreSQL:

```bash
psql -U n8n_user -d n8n_db -c "\copy WorkflowEntity FROM 'workflow.csv' CSV HEADER;"
```

  For MySQL, run inside the mysql client (the server may require local_infile to be enabled, in which case use `LOAD DATA LOCAL INFILE`):

```sql
LOAD DATA INFILE '/path/workflow.csv' INTO TABLE WorkflowEntity
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 ROWS;
```
Warning: Ensure the CSV files are UTF‑8 encoded; otherwise special characters in workflow names may be corrupted on import.
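A quick way to normalize a suspect CSV is `iconv`. The demo below fabricates a Latin-1 file (the `\351` byte is `é`) to stand in for a real export — swap in your own CSV and source encoding:

```shell
# Simulate a Latin-1 CSV ('café' with a raw 0xE9 byte), then convert it to
# UTF-8 before loading it into PostgreSQL/MySQL.
printf 'caf\351,my workflow\n' > /tmp/workflow.csv

iconv -f ISO-8859-1 -t UTF-8 /tmp/workflow.csv > /tmp/workflow.utf8.csv

# After conversion the accented character is the two-byte UTF-8 sequence c3 a9:
od -An -tx1 /tmp/workflow.utf8.csv | head -n 1
```

If you are unsure of the source encoding, `file -b --mime-encoding workflow.csv` (where available) gives a reasonable first guess before you pick the `-f` value.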
3.5 Reconfigure n8n to Use the New DB
Edit (or create) your .env file:
```bash
# PostgreSQL configuration (password URL-encoded: @ → %40, ! → %21)
DB_TYPE=postgresdb
DB_POSTGRESDB_URL=postgresql://n8n_user:StrongP%40ssw0rd%21@db-host:5432/n8n_db

# MySQL configuration (alternative)
# DB_TYPE=mysqldb
# DB_MYSQLDB_URL=mysql://n8n_user:StrongP%40ssw0rd%21@db-host:3306/n8n_db
```
Warning: Never commit plaintext credentials. Use Docker secrets, Kubernetes Secrets, or a vault.
3.6 Restart n8n & Verify
```bash
# Systemd
sudo systemctl start n8n

# Docker Compose
docker compose up -d n8n
```
Verification steps
- Open the n8n UI → Workflows → All – all entries should be present.
- Open a few workflows and click Execute – ensure they run without DB errors.
- Check Execution List for historic runs.
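Beyond eyeballing the UI, you can compare row counts between the old SQLite file and the new database. A sketch — the server-side query is left commented out because it needs a live connection, and the table names follow this guide's earlier examples:

```shell
# count_tables DB TABLE... -- print a name=count line per table in the source
# SQLite file, to compare by hand against the same COUNT(*) on the new server.
count_tables() {
  db="$1"
  shift
  for t in "$@"; do
    printf '%s=%s\n' "$t" "$(sqlite3 "$db" "SELECT COUNT(*) FROM \"$t\";")"
    # New-DB side, for a live PostgreSQL server:
    # psql -At -U n8n_user -d n8n_db -c "SELECT COUNT(*) FROM \"$t\";"
  done
}

# Demo DB standing in for ~/.n8n/n8n.sqlite
rm -f /tmp/demo.sqlite
sqlite3 /tmp/demo.sqlite "CREATE TABLE WorkflowEntity (id);
INSERT INTO WorkflowEntity VALUES (1),(2);"
count_tables /tmp/demo.sqlite WorkflowEntity
# → WorkflowEntity=2
```

Matching counts on both sides for WorkflowEntity, CredentialEntity, and ExecutionEntity is a cheap, strong signal that the data migration was complete.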
4. Post‑Migration Validation & Troubleshooting
| Symptom | Likely Cause | Fix |
|---|---|---|
| relation “workflow_entity” does not exist | Schema not loaded in target DB | Re‑run the schema.sql import, then re‑import data. |
| Authentication failed for user | Incorrect DB URL or unescaped special chars | URL‑encode (! → %21, @ → %40) or wrap password in quotes. |
| Workflows stuck in Running | n8n still pointing to old SQLite cache | Clear ~/.n8n/.cache and restart. |
| Missing credentials after migration | Encryption key not carried over to the new environment | Set the same N8N_ENCRYPTION_KEY (or copy ~/.n8n/config); as a last resort, re‑enter API keys and OAuth tokens manually. |
| Performance still poor after migration | DB connection pool too small | Set DB_MAX_CONNECTIONS (default 10) to a higher value, e.g., DB_MAX_CONNECTIONS=50. |
Warning: n8n encrypts credentials with its encryption key (stored in ~/.n8n/config, or supplied via N8N_ENCRYPTION_KEY), not with anything tied to the database engine. Carry the same key to the new environment; without it, stored credentials cannot be decrypted and every external service credential must be re‑entered.
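If your install keeps the key in a JSON config file with an `encryptionKey` field (true for classic `~/.n8n/config` layouts — verify on yours, as newer versions may differ), the key can be lifted into the environment before starting n8n against the new DB. A sketch using a stand-in config file:

```shell
# extract_key FILE -- pull the encryptionKey value out of n8n's JSON config so
# credentials written under SQLite still decrypt after the move. The file
# layout is an assumption; check your actual ~/.n8n/config before relying on it.
extract_key() {
  sed -n 's/.*"encryptionKey"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p' "$1"
}

# Stand-in for ~/.n8n/config:
printf '{ "encryptionKey": "abc123def456" }\n' > /tmp/n8n-config.json

export N8N_ENCRYPTION_KEY="$(extract_key /tmp/n8n-config.json)"
echo "$N8N_ENCRYPTION_KEY"
# → abc123def456
```

With the variable exported (or injected via your secrets manager), the "Missing credentials" symptom in the table above should not occur at all.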
5. Automating Future Migrations (CI/CD Friendly)
```yaml
# .github/workflows/n8n-db-migrate.yml
name: n8n DB Migration

on:
  workflow_dispatch:
    inputs:
      target:
        description: 'postgres|mysql'
        required: true
        default: 'postgres'

jobs:
  migrate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Install migration tool
        run: npm i -g n8n-migration-tool
      - name: Run migration
        env:
          SQLITE_PATH: ${{ secrets.SQLITE_PATH }}
          PG_URL: ${{ secrets.PG_URL }}
          MYSQL_URL: ${{ secrets.MYSQL_URL }}
        run: |
          if [[ "${{ github.event.inputs.target }}" == "postgres" ]]; then
            n8n-migration-tool --source sqlite --source-path "$SQLITE_PATH" \
              --target postgres --target-url "$PG_URL"
          else
            n8n-migration-tool --source sqlite --source-path "$SQLITE_PATH" \
              --target mysql --target-url "$MYSQL_URL"
          fi
```
Benefits
- Version‑controlled migration steps.
- Secrets managed by the CI platform – no plaintext credentials in the repo.
- Easy rollback by re‑running the workflow with the previous DB URL.
Conclusion
Migrating n8n from SQLite to PostgreSQL or MySQL eliminates the single‑writer bottleneck, lifts size constraints, and unlocks the replication and hot‑backup capabilities essential for production workloads. By following the checklist, using the official migration tool (or the manual CSV path for large datasets), and reconfiguring n8n's environment variables, you achieve a clean transition with minimal downtime. Remember to carry the credential encryption key over to the new environment, tune connection pools, and validate workflow execution. This migration prepares your n8n installation for real‑world traffic and long‑term reliability.



