
Docker Container Exits Immediately After Start – Fix by Exit Code
You ran docker run or docker compose up. The container flashed on for a second. Now docker ps is empty.
Nothing crashed loudly. No red error. Just silence.
This is one of the most disorienting Docker failures because the fix depends entirely on why it exited – and that answer is already sitting in the exit code. Most guides skip straight to guessing. This one doesn’t.
Read the exit code first. Then fix the right thing.
⚠️ Run These Two Commands Before Anything Else
These two commands together give you the exit code and the actual error message. Run them immediately after the container disappears – before you change anything.
| Command | What it tells you |
|---|---|
| `docker ps -a` | Lists all containers, including stopped ones. Find your container and check the STATUS column: it shows the exit code as `Exited (N)`. |
| `docker logs <container_name>` | Shows stdout/stderr from the container's last run. This is the actual error message; the exit code tells you which section of this guide to read. |
Who this is for: Developers and system administrators running Docker Compose stacks – n8n, Postgres, Redis, Traefik, or anything self-hosted where a container disappears immediately after start and docker ps is empty. For the broader Docker environment variable setup, see the Docker Compose .env Not Working guide.
How to Read Exit Codes Before Guessing at a Fix
Every container exit leaves a number behind. That number is not random – it directly encodes what happened. Ignoring it and jumping to fixes is how you spend an hour changing the wrong thing.
Get the exit code with:
```shell
docker inspect <container_name> --format='{{.State.ExitCode}}'
```
Or read it directly from docker ps -a in the STATUS column.
Here is what each code means before you do a single other thing:
| Exit code | Plain meaning | Where to go |
|---|---|---|
| 0 | Process finished cleanly. This is intentional. | Exit code 0 section below |
| 1 | Application error or crash. | Exit code 1 section below |
| 137 | OOM killed: the container ran out of memory. | Exit code 137 section below |
| 126 | Entrypoint script not executable. | Permission issue: `chmod +x` the script |
| 127 | Command not found inside the container. | Wrong binary path in `CMD` or `ENTRYPOINT` |
| 2 | Misuse of a shell command, usually a syntax error in a shell script. | Check your `ENTRYPOINT` shell script syntax |
Exit codes above 128 are always signal-based kills: 137 = 128 + 9 (SIGKILL from OOM), 143 = 128 + 15 (SIGTERM from docker stop). If your exit code is 143, the container was stopped gracefully – that is usually expected.
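You can check that arithmetic directly in a shell: subtract 128 and ask the `kill -l` builtin for the signal name.

```shell
# Decode a signal-based exit code: code - 128 = signal number.
code=137
sig=$((code - 128))   # 9
kill -l "$sig"        # prints the signal name: KILL (143 - 128 = 15 prints TERM)
```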
Exit Code 0: Your Process Completed – That’s Working as Designed
Exit code 0 is not a crash. It means the container’s main process ran to completion and returned success. Docker has no reason to keep a container alive once PID 1 exits – it shuts down immediately and cleanly.
The two situations that produce this:
1. You’re running a one-shot command, not a long-running service. If your CMD is a script that imports data, runs a migration, or prints something and finishes – exit 0 is correct behavior. The container did its job.
2. Your service process daemonizes itself and hands off PID 1. This is the classic nginx trap. The default nginx CMD forks a background worker and exits the parent. From Docker’s perspective, PID 1 is gone – container exits with code 0 even though nginx is “running.”
```dockerfile
# This causes exit 0 - nginx daemonizes and abandons PID 1
CMD ["nginx"]

# This keeps the container alive - nginx runs in foreground as PID 1
CMD ["nginx", "-g", "daemon off;"]
```
Any service that has a “background mode” or “daemon mode” option has this problem: Apache, Gunicorn, Celery workers configured with & in a shell script. The fix is always the same – find the foreground flag and use it, or restructure your entrypoint so the blocking process is the one Docker watches as PID 1.
For debugging: override the entrypoint to get a shell and manually run your startup command. If it returns immediately, that is your problem.
```shell
docker run -it --entrypoint sh your-image
# Inside the container:
your-start-command
# Does it return to the prompt? Then it daemonized. Find the foreground flag.
```
Exit Code 1: Application Crash – How to Get the Actual Error
Exit code 1 is the most common failure and the most fixable — once you read the actual error instead of guessing at it. The container crashed and the reason is almost always in the logs.
```shell
docker logs <container_name>
```
If that output is empty or unhelpfully short, try:
```shell
docker logs --tail 50 <container_name>
docker inspect <container_name> --format='{{.State.Error}}'
```
The most common root causes behind exit code 1, in order of frequency:
| Cause | What the logs say | Fix |
|---|---|---|
| Missing required environment variable | `Error: DB_HOST is not defined` or similar | Add the variable to your `.env` file and ensure `env_file:` is set in the service |
| Can’t connect to a dependent service (DB, Redis) | `ECONNREFUSED`, `Connection refused`, `could not connect to server` | Add `depends_on:` plus a healthcheck, or increase startup retries |
| Permission denied on a mounted volume | `EACCES`, `permission denied` on a file path | Fix volume ownership: `chown -R 1000:1000 /your/host/path` |
| Entrypoint script not found or has wrong line endings | `exec /app/start.sh: no such file or directory` | Run `dos2unix entrypoint.sh` to fix Windows line endings; verify the `COPY` path in the Dockerfile |
| Application code error (syntax, import failure) | Full stack trace from your application | Fix the application bug; Docker is reporting it faithfully |
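On the line-endings cause: if `dos2unix` isn’t installed, GNU `sed` (standard on Linux images) does the same job. The file name below is a placeholder, and the first line just simulates a CRLF-damaged script.

```shell
# Strip Windows (CRLF) line endings without dos2unix.
printf 'echo hello\r\n' > entrypoint.sh   # simulate a CRLF-damaged file
sed -i 's/\r$//' entrypoint.sh            # remove the trailing \r in place
```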
The dependency race condition deserves extra attention because it produces exit code 1 with a misleading log. Your app starts, tries to connect to Postgres, Postgres isn’t ready yet (it’s still initializing), connection refused, app crashes. The fix is not to add a sleep — it’s to make your application retry connections, or to use a proper healthcheck-based dependency in Docker Compose:
```yaml
services:
  app:
    depends_on:
      db:
        condition: service_healthy
  db:
    image: postgres:15
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5
```
This tells Docker to hold the app container until Postgres passes its healthcheck — not just until the container is started.
Exit Code 137: OOM Kill – Your Container Has No Memory Headroom
Exit code 137 means the Linux kernel’s OOM killer terminated the container’s process. This happens when either a hard memory limit is set and exceeded, or the host itself runs out of memory. Docker did not crash your container — the kernel did.
Confirm OOM kill before chasing memory leaks:
```shell
docker inspect <container_name> --format='{{.State.OOMKilled}}'
# Returns: true
```
If that returns true, the memory limit is the problem. Check what limit is set:
```shell
docker inspect <container_name> --format='{{.HostConfig.Memory}}'
# Returns 0 = no limit set (host memory is the ceiling)
# Returns a number in bytes = hard limit enforced by Docker
```
Three possible causes:
1. A hard memory limit in your compose file is too low for the workload. Common with n8n, databases, or any JVM-based service that needs headroom at startup.
```yaml
# In docker-compose.yml - raise the limit or remove it for testing
services:
  n8n:
    deploy:
      resources:
        limits:
          memory: 1G  # was 256M, not enough for n8n with active workflows
```
2. No limit is set but the host is out of memory. Check available memory on the host:
```shell
free -h
# If "available" is under 200MB with your stack running, the host is the constraint
```
3. A genuine memory leak causes the container to grow until killed. This does not usually present as an immediate exit — the container typically runs for minutes or hours first. If the exit is truly immediate (<5 seconds), the limit is almost always the cause, not a leak.
✅ Fast test to rule out the limit
Remove the memory limit entirely and run again. If the container stays up, the limit was the problem. Add it back at a value that leaves headroom for startup spikes — typically 2x the container’s idle memory usage.
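The 2x rule can be applied from a `docker stats` reading. The 180 MiB idle figure below is a made-up example, and the container name is a placeholder.

```shell
# Read idle usage of a running container (name is a placeholder):
#   docker stats --no-stream --format '{{.Name}}\t{{.MemUsage}}' my-container
# Then size the limit with roughly 2x headroom for startup spikes.
idle_mib=180                     # example idle figure from docker stats
limit_mib=$((idle_mib * 2))
echo "memory: ${limit_mib}M"     # -> memory: 360M
```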
The restart: unless-stopped Trap That Hides the Real Problem
This one is particularly dangerous in production because it looks like the container is working.
When you set restart: unless-stopped (or always) in your compose file, Docker automatically restarts a crashed container. If the container exits with code 1 every time it starts, Docker restarts it, it crashes again, Docker restarts it — fast loop. If you run docker ps at just the right moment, the container is “Up for 2 seconds.” A moment later it’s gone again.
The symptom: docker ps shows the container flickering — sometimes up, sometimes exited. The STATUS column might show Restarting (1) 3 seconds ago.
How to catch this:
```shell
# Watch the restart count climbing
docker inspect <container_name> --format='{{.RestartCount}}'

# Check how long it has actually been up
docker ps -a --format "table {{.Names}}\t{{.Status}}\t{{.RunningFor}}"
```
A restart count above 3 with an uptime of seconds means you have a crash loop. The restart policy is masking the real problem, not fixing it. Temporarily set the service to restart: "no", run docker compose up in the foreground (without -d), and watch the output directly. The actual error will appear before the container exits.
```shell
# Run without the detach flag - you will see the crash in real time
docker compose up
```
Do not add restart: unless-stopped to a service that doesn’t start cleanly. Fix the underlying exit code first, confirm the container runs stably, then add the restart policy back.
n8n Exits Immediately? The Three Most Common Causes
n8n exits immediately on first boot or after a config change for three reasons that account for the vast majority of reports:
| Cause | Exit code | What the log says | Fix |
|---|---|---|---|
| `/home/node/.n8n` volume owned by root — n8n runs as UID 1000 and can’t write to it | 1 | `EACCES: permission denied, open '/home/node/.n8n/config'` | `chown -R 1000:1000 /your/host/n8n-data`, then `docker compose up -d` |
| Postgres service isn’t ready when n8n starts — connection refused on first attempt and n8n doesn’t retry | 1 | `Connection refused` or `ECONNREFUSED 5432` | Add `depends_on:` with `condition: service_healthy` and a Postgres healthcheck (see example above) |
| Memory limit set below n8n’s startup footprint — n8n loads all active workflows at startup and can spike above 200MB | 137 | `OOMKilled: true` in inspect output | Remove the memory limit or set it to at least 1G; n8n idles at 150–250MB but spikes higher on boot |
For the Postgres race condition specifically, the minimal compose fix:
```yaml
services:
  n8n:
    image: n8nio/n8n:latest
    restart: unless-stopped
    env_file:
      - .env
    depends_on:
      postgres:
        condition: service_healthy
    ports:
      - "5678:5678"
    volumes:
      - n8n_data:/home/node/.n8n

  postgres:
    image: postgres:15
    env_file:
      - .env
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER}"]
      interval: 5s
      timeout: 5s
      retries: 10
    volumes:
      - postgres_data:/var/lib/postgresql/data

volumes:
  n8n_data:
  postgres_data:
```
This pattern eliminates the race condition entirely. Docker will not start n8n until Postgres responds to pg_isready — which only passes once the database is fully accepting connections, not just when the container is “up.”
Quick Diagnosis Decision Tree
🔍 Start: Run `docker ps -a` – what does the STATUS column show?

**`Exited (0)`** → the process finished cleanly; not a crash
- Is your CMD a one-shot script? Then this is expected behavior.
- Does your service have a daemon mode? Add the foreground flag.

**`Exited (1)`** → application error
- Run `docker logs <name>` and read the actual error.
- Most common: missing env var, can’t reach the DB, volume permission denied.

**`Exited (137)`** → OOM killed
- Run `docker inspect <name> --format='{{.State.OOMKilled}}'`; it should return `true`.
- Raise or remove the memory limit, then test again.

**`Restarting (1) N seconds ago`** → crash loop hidden by the restart policy
- Set `restart: "no"` and run `docker compose up` without `-d`.
- Read the crash output directly before fixing.
Conclusion
A container that exits immediately is not mysterious – it left a number behind. The exit code is the starting point, not an afterthought. Once you know whether you’re dealing with a clean finish (0), a crash (1), or an OOM kill (137), the fix is usually a single change.
- Always run `docker ps -a` first; read the exit code in the STATUS column before changing anything.
- Exit code 0 means the process finished: find the foreground flag for your service or confirm it’s a one-shot container.
- Exit code 1 means your application crashed; `docker logs <name>` has the real error. Read it.
- Exit code 137 means OOM killed: confirm with `docker inspect`, then raise the memory limit.
- If your container is crash-looping under a restart policy, remove the `-d` flag and run in the foreground to see the crash in real time.
For n8n specifically: permission denied on /home/node/.n8n, a Postgres startup race condition, and an undersized memory limit are responsible for nearly every immediate exit report. All three are fixable in under five minutes once you know which one you’re dealing with.

