
Who this is for: Ops engineers and platform developers running n8n in queue mode on Docker, Kubernetes, or bare‑metal. We cover this in detail in the n8n Queue Mode Errors Guide.
Quick Diagnosis
Problem: The n8n master cannot locate any active workers and logs Missing Worker Process.
Solution:
- Verify the queue backend (Redis or RabbitMQ) is reachable.
- Restart the worker service (Docker: `docker restart n8n-worker`; Kubernetes: `kubectl rollout restart deployment/n8n-worker`; systemd: `systemctl restart n8n-worker.service`).
- Ensure `EXECUTIONS_PROCESS=queue` is set in all n8n containers (on current n8n versions the equivalent setting is `EXECUTIONS_MODE=queue`) and that a restart policy (`unless-stopped` or `on-failure`) is configured.
- Check worker logs for exit codes; if you see `137` (OOM), raise the memory limit.
1. What Triggers “Missing Worker Process” in n8n Queue Mode?
If any queue-mode jobs are stuck, resolve them before continuing with the setup.
Quick snapshot of common causes and their log signatures.
| Trigger | Why it Happens | Typical Log Message |
|---|---|---|
| Worker container crashes (OOM, exit code 1) | Insufficient memory / uncaught exception | worker exited with code 1 |
| Docker/K8s restart policy mis‑configured | Supervisor never respawns the worker | RestartPolicy not set – worker not restarted |
| Queue backend (Redis/RabbitMQ) unavailable | Workers can’t pull jobs, they shut down | Failed to connect to Redis at redis://… |
| EXECUTIONS_PROCESS mismatch | Master runs in queue mode, workers run in *main* mode | EXECUTIONS_PROCESS must be "queue" |
| Host‑level limits (ulimit, cgroup) | OS kills the process (SIGKILL) | Killed signal 9 (SIGKILL) |
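To triage automatically, the log signatures above can be matched with a small helper function; a sketch (the patterns are illustrative — adjust them to your log format):

```shell
# Map a log line to the likely trigger from the table above.
classify() {
  case "$1" in
    *"exited with code 1"*)         echo "worker crash" ;;
    *"Failed to connect to Redis"*) echo "queue backend unavailable" ;;
    *SIGKILL*)                      echo "host-level limit / OOM" ;;
    *'must be "queue"'*)            echo "EXECUTIONS_PROCESS mismatch" ;;
    *)                              echo "unknown" ;;
  esac
}

# Example: classify "$(docker logs n8n-worker --tail 1 2>&1)"
```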
2. Verify the Queue Backend Is Healthy
Confirm the queue itself is reachable before hunting workers.
Redis health check
```shell
# Ping Redis
redis-cli -h "$REDIS_HOST" -p "$REDIS_PORT" ping
```
Expected output: PONG
RabbitMQ health check
```shell
# Show RabbitMQ status and list queues with their message counts
rabbitmqctl status
rabbitmqctl list_queues name messages
```
| Backend | Healthy Indicator | Failure Indicator |
|---|---|---|
| Redis | PONG response | Timeout, AUTH error |
| RabbitMQ | Status shows running and queue list includes n8n | Connection refused or missing n8n queue |
EEFA note: A transient Redis outage can cause workers to exit with ECONNREFUSED. Add a retry‑backoff in the worker start‑up script to avoid rapid crash loops.
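One way to implement that backoff, as a sketch (the `n8n worker` launch line is commented out and assumes the standard n8n CLI; adapt to your entrypoint):

```shell
# Exponential backoff helper: seconds to wait before retry N (2, 4, 8, ... capped at 60).
backoff() {
  s=$(( 1 << $1 ))          # 2^N
  [ "$s" -gt 60 ] && s=60   # cap at 60 seconds
  echo "$s"
}

# Start-up wrapper sketch: wait for Redis before launching the worker.
#   try=1
#   until redis-cli -h "$REDIS_HOST" ping 2>/dev/null | grep -q PONG; do
#     echo "Redis not ready, retry #$try"
#     sleep "$(backoff "$try")"
#     try=$(( try + 1 ))
#   done
#   exec n8n worker   # exec so SIGTERM reaches the worker process directly
```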
3. Inspect Worker Logs & Exit Codes
Pull recent logs (Docker, K8s, systemd)
```shell
# Docker
docker logs n8n-worker-1 --since 10m
# Kubernetes
kubectl logs -l app=n8n-worker --since=10m
# systemd
journalctl -u n8n-worker.service -n 100
```
Common exit‑code meanings
| Exit Code | Meaning | Immediate Action |
|---|---|---|
| 0 | Graceful stop (e.g., docker stop) | Verify orchestrator intent |
| 1 | Generic error – often uncaught exception | Review stack trace |
| 137 (SIGKILL) | OOM or manual kill | Increase memory limits |
| 139 (SIGSEGV) | Segmentation fault – rare, binary corruption | Re‑install n8n binary |
| 255 | Custom worker‑script abort | Review custom code |
EEFA warning: Restarting a worker that exited with 137 without adjusting memory will cause a repeat crash, starving the queue. If you see duplicate executions in queue mode, resolve them before continuing with the setup.
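To act on these codes automatically, a small helper can map a stopped container's exit code to the remedy in the table above; a sketch (the container name `n8n-worker` is an example):

```shell
# Map a worker exit code to the remedial action from the table above.
explain_exit() {
  case "$1" in
    0)   echo "graceful stop - verify orchestrator intent" ;;
    1)   echo "generic error - review stack trace" ;;
    137) echo "OOM/SIGKILL - increase memory limits" ;;
    139) echo "SIGSEGV - re-install the n8n binary" ;;
    255) echo "custom worker-script abort - review custom code" ;;
    *)   echo "unknown exit code: $1" ;;
  esac
}

# Usage (reads the last exit code of a stopped container):
#   explain_exit "$(docker inspect --format='{{.State.ExitCode}}' n8n-worker)"
```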
4. Restart Missing Workers: Step‑by‑Step
4.1 Docker‑Compose (most common)
Ensure the worker service has a proper restart policy.
Service definition (excerpt)
```yaml
services:
  n8n:
    image: n8nio/n8n
    environment:
      - EXECUTIONS_PROCESS=queue
      - QUEUE_BULL_REDIS_HOST=redis
```
Worker definition (excerpt)
```yaml
  n8n-worker:              # under the same top-level services: key
    image: n8nio/n8n
    command: worker        # start in worker mode (required for queue workers)
    environment:
      - EXECUTIONS_PROCESS=queue
      - QUEUE_BULL_REDIS_HOST=redis
    restart: unless-stopped   # critical
```
Restart command
```shell
docker-compose up -d n8n-worker
```
4.2 Kubernetes (Deployment)
*Deploy three replicas with a liveness probe that restarts dead workers.*
Deployment skeleton
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: n8n-worker
spec:
  replicas: 3
  selector:
    matchLabels:
      app: n8n-worker
```
Container spec (partial)
```yaml
  template:
    metadata:
      labels:
        app: n8n-worker
    spec:
      containers:
        - name: n8n-worker
          image: n8nio/n8n
          args: ["worker"]   # start in worker mode
          env:
            - name: EXECUTIONS_PROCESS
              value: "queue"
          resources:
            limits:
              memory: "512Mi"
```
Liveness probe
```yaml
          # Requires QUEUE_HEALTH_CHECK_ACTIVE=true so the worker serves /healthz
          livenessProbe:
            httpGet:
              path: /healthz
              port: 5678
            initialDelaySeconds: 15
            periodSeconds: 30
```
Restart command
```shell
kubectl rollout restart deployment/n8n-worker
```
4.3 systemd (bare‑metal)
Create a persistent service that restarts automatically.
Service file (/etc/systemd/system/n8n-worker.service)
```ini
[Unit]
Description=n8n Queue Worker
After=network.target redis.service
StartLimitIntervalSec=0

[Service]
Environment=EXECUTIONS_PROCESS=queue
ExecStart=/usr/local/bin/n8n worker
Restart=always
RestartSec=5
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
```
Enable & start
```shell
systemctl daemon-reload
systemctl enable --now n8n-worker.service
```
EEFA tip: RestartSec=5 gives dependent services (e.g., Redis) time to recover after a network glitch.
5. Automate Detection & Self‑Healing
5.1 Bash health‑check script (run as a daemon)
*Continuously verifies Redis and worker registration, restarting the worker when missing.*
```shell
#!/usr/bin/env bash
REDIS_HOST=${REDIS_HOST:-redis}
WORKER_NAME=${WORKER_NAME:-n8n-worker}
MAX_RESTARTS=3
COUNT=0

while true; do
  # 1. Verify Redis is reachable
  if ! redis-cli -h "$REDIS_HOST" ping | grep -q PONG; then
    echo "$(date) – Redis down, sleeping 30s"
    sleep 30
    continue
  fi

  # 2. Check worker registration. NOTE: verify this endpoint against your n8n
  #    version; if unavailable, substitute the worker's /healthz check
  #    (requires QUEUE_HEALTH_CHECK_ACTIVE=true).
  if curl -s "http://localhost:5678/api/v1/queue/worker" | grep -q "$WORKER_NAME"; then
    echo "$(date) – Worker healthy"
    COUNT=0
  else
    echo "$(date) – Worker missing, restarting..."
    docker restart "$WORKER_NAME"
    ((COUNT++))
    if ((COUNT >= MAX_RESTARTS)); then
      echo "$(date) – Too many restarts, alerting..."
      # Insert alert hook (Slack, PagerDuty, etc.)
      exit 1
    fi
  fi
  sleep 15
done
```
*Schedule the script with systemd or Cron to run continuously.*
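To run it continuously under systemd, a minimal unit sketch (the path and unit name are assumptions):

```ini
# /etc/systemd/system/n8n-worker-health.service  (hypothetical path/name)
[Unit]
Description=n8n worker health-check daemon
After=docker.service network.target

[Service]
ExecStart=/usr/local/bin/n8n-worker-health.sh
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable --now n8n-worker-health.service`; a cron `@reboot` entry is a simpler alternative.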
5.2 Docker restart policy (fallback)
```yaml
restart: unless-stopped   # shown above; Docker's built-in supervisor
```
Docker will auto‑restart the worker on unexpected exits, protecting the queue from starvation.
6. Troubleshooting Checklist
| Check | How to Verify | Remedy if Failing |
|---|---|---|
| Queue backend reachable | redis-cli ping / rabbitmqctl status | Fix network, credentials, or service health |
| Worker container running | docker ps -f name=n8n-worker | docker start n8n-worker |
| EXECUTIONS_PROCESS set to queue | docker exec n8n-worker env \| grep EXECUTIONS_PROCESS | Add EXECUTIONS_PROCESS=queue to all services |
| Memory limits adequate | docker stats or kubectl top pod | Raise memory limit; enable swap only if safe |
| Log shows successful registration | Search for Worker registered in logs | Re-deploy worker image, ensure version parity |
| Liveness/Readiness probes passing (K8s) | kubectl get pods → READY column | Adjust probe command or delay |
| Restart policy active | docker inspect --format='{{.HostConfig.RestartPolicy.Name}}' n8n-worker | Set to unless-stopped or on-failure |
Mark each item as you verify; a single missed check often explains recurring “missing worker” alerts.
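The checklist can also run as a one-shot script; a sketch (the container and host names are examples, and each command needs its CLI installed):

```shell
#!/bin/sh
# One-shot audit: print OK/FAIL for each checklist item.
W=${WORKER_NAME:-n8n-worker}
FAILED=0

check() {               # check LABEL CMD... -> prints OK/FAIL, records failures
  label=$1; shift
  if "$@" >/dev/null 2>&1; then
    echo "OK: $label"
  else
    echo "FAIL: $label"
    FAILED=1
  fi
}

check "queue backend reachable"  redis-cli -h "${REDIS_HOST:-redis}" ping
check "worker container running" sh -c "docker ps -q -f name=$W | grep -q ."
check "EXECUTIONS_PROCESS=queue" sh -c "docker exec $W env | grep -q EXECUTIONS_PROCESS=queue"
check "restart policy set"       sh -c "docker inspect --format '{{.HostConfig.RestartPolicy.Name}}' $W | grep -Eq 'unless-stopped|on-failure'"
# exit "$FAILED"   # uncomment to fail CI/monitoring on any missed check
```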
7. Preventive Best Practices
- Separate worker and master images – lock versions independently to avoid accidental downgrades.
- Enable persistent queue storage – e.g., Redis `appendonly yes` to survive restarts without job loss.
- Monitor queue depth – alert when `queueLength > 500` to pre-empt worker saturation.
- Graceful shutdown hooks – handle `SIGTERM` in `worker.js` so in-flight jobs finish before exit.
- Run a "heartbeat" sidecar – a lightweight script that pings the master every 10 seconds; if missed, it triggers a restart.
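The queue-depth alert above can be sketched as follows. The Redis key `bull:jobs:wait` is an assumption (Bull's default prefix plus a queue name); confirm the actual key on your instance with `redis-cli KEYS 'bull:*'`:

```shell
# Alert when the number of waiting jobs exceeds a threshold.
THRESHOLD=${THRESHOLD:-500}

depth_ok() {            # depth_ok DEPTH -> succeeds if depth is within threshold
  [ "$1" -le "$THRESHOLD" ]
}

# depth=$(redis-cli -h "${REDIS_HOST:-redis}" LLEN bull:jobs:wait)   # key is an assumption
# depth_ok "$depth" || echo "ALERT: queue depth $depth > $THRESHOLD"
```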
All commands assume default n8n ports and environment variable names; adapt to your deployment.



