Who this is for: Ops engineers, DevOps teams, and n8n administrators who run scheduled workflows or webhook integrations in production. We cover this in detail in the n8n Architectural Failure Modes Guide.
Quick Diagnosis
Problem: n8n’s scheduled workflows run early, late, or skip entirely because the server’s clock has drifted away from real time.
Quick fix:
- Verify the host’s time (must be UTC and within 2 seconds).
- Restart the host’s NTP service.
- Restart the n8n container.
Clock drift often appears after a VM pause or a brief network hiccup that stalls NTP.
If the drift reappears, follow the detailed steps below.
1. Why Clock Sync Matters for n8n
Accurate time underpins all time‑sensitive features in n8n.
| n8n Feature | Dependency on Accurate Time |
|---|---|
| Cron / Schedule Trigger | Executes when `now >= nextTrigger`. |
| Webhooks with `expiresAt` | Validates request timestamps. |
| Credential OAuth token refresh | Calculates token expiry from `expires_in`. |
| Execution logs & audit | Stores ISO‑8601 timestamps. |
| External APIs (e.g., Stripe, PayPal) | Require request timestamps within ±5 s. |
When the OS clock drifts, each component inherits the error, leading to missed runs, duplicate executions, or authentication failures.
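The `now >= nextTrigger` comparison above can be sketched to show how a fast clock fires a 12:00 cron at 11:58 real time. The scheduler below is a simplified model for illustration, not n8n's actual implementation:

```python
from datetime import datetime, timedelta, timezone

def is_due(now: datetime, next_trigger: datetime) -> bool:
    # Schedulers of this kind fire when the local clock passes the target time.
    return now >= next_trigger

# Target: 12:00:00 UTC
next_trigger = datetime(2024, 1, 1, 12, 0, 0, tzinfo=timezone.utc)
true_now = datetime(2024, 1, 1, 11, 58, 0, tzinfo=timezone.utc)

# A clock running 3 minutes fast makes the trigger fire at 11:58 real time:
fast_clock = true_now + timedelta(minutes=3)
print(is_due(true_now, next_trigger))   # False: healthy clock, not yet due
print(is_due(fast_clock, next_trigger)) # True: drifted clock fires early
```

The same comparison run twice in one real-time window (once early on the fast clock, once after NTP steps the clock back) is how duplicate executions arise.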
2. Typical Symptoms of n8n Time‑Drift
Spot the warning signs before they break your pipelines.
| Symptom | UI / Log Appearance | Likely Impact |
|---|---|---|
| Scheduled workflow fires early | Timestamp earlier than the cron expression (e.g., `0 12 * * *` runs at 11:58) | Duplicate data, race conditions |
| Scheduled workflow fires late / skips | No execution entry for the expected run; next run appears minutes/hours later | Missed deadlines, SLA breach |
| Webhook “Signature expired” | `Error: Request timestamp is outside the allowed window` | Incoming integrations break |
| OAuth token refresh loop | Repeated `Token refresh failed` errors despite valid credentials | API calls fail, downstream steps error |
| Inconsistent audit timestamps | `CreatedAt` and `FinishedAt` fields out of order | Debugging becomes impossible |
If you see any of these, start with the Quick Diagnosis checklist.
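The “Signature expired” symptom comes from a window check of the kind sketched below; the ±5 s tolerance matches the Stripe/PayPal example above, and the function name is illustrative:

```python
from datetime import datetime, timedelta, timezone

TOLERANCE = timedelta(seconds=5)  # provider-specific window (e.g. ±5 s)

def timestamp_in_window(request_ts: datetime, server_now: datetime) -> bool:
    # Providers reject requests whose timestamp falls outside the window
    # relative to the *validating server's* clock.
    return abs(server_now - request_ts) <= TOLERANCE

request_ts = datetime(2024, 1, 1, 12, 0, 0, tzinfo=timezone.utc)
# Validating clock 8 s ahead -> "Signature expired"-style rejection:
print(timestamp_in_window(request_ts, request_ts + timedelta(seconds=8)))  # False
# Within 3 s -> accepted:
print(timestamp_in_window(request_ts, request_ts + timedelta(seconds=3)))  # True
```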
3. Root Causes of Clock Drift in n8n Deployments
| Source | Description | Severity |
|---|---|---|
| Host OS without NTP | No daemon to keep system clock aligned | Critical – drift can accumulate 1 s/min on idle VMs |
| Docker containers using host time | Containers inherit host clock; if host drifts, all containers drift | High – isolated containers do not self‑correct |
| CPU throttling / cgroup limits | Heavy CPU limits can cause the kernel clock to lag | Medium – only in highly constrained environments |
| Timezone mis‑configuration | n8n runs in UTC, but host set to local TZ, causing conversion errors | Low – usually harmless but can confuse logs |
| Database server time mismatch | DB clock differs from n8n host | High – execution timestamps stored in DB become inconsistent |
| Virtualized cloud instances | Some cloud VMs have coarse time sync | Medium – enable chrony or ntp in the VM |
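To see why the first row rates “Critical”, a quick back-of-the-envelope calculation using the worst-case 1 s/min drift from the table:

```python
drift_per_min = 1.0   # seconds of drift per minute (worst case from the table)
tolerance = 2.0       # seconds of offset before schedules and webhooks misbehave

minutes_to_failure = tolerance / drift_per_min
print(minutes_to_failure)  # 2.0 -> trouble within minutes, not hours

seconds_per_day = drift_per_min * 60 * 24
print(seconds_per_day)     # 1440.0 s, i.e. roughly 24 minutes of drift per day
```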
4. Step‑by‑Step Diagnosis & Fixes
4.1 Verify Host Time & NTP Status
Run these commands on the host to see the current UTC time and NTP health:
```bash
# Show current UTC time
date -u

# Check chrony synchronization status
chronyc tracking

# If you use ntp instead of chrony
ntpq -p
```
On production servers, add `maxdelay 0.5` to your chrony `server` directives so that samples with more than half a second of network delay are discarded, keeping sub‑second accuracy.
If the offset is > 2 seconds, restart the NTP daemon:
```bash
# For chrony
sudo systemctl restart chronyd

# For ntp
sudo systemctl restart ntp
```
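If you want to script the offset check, the `System time` line of `chronyc tracking` can be parsed; the sample output and helper below are illustrative, assuming the usual `X seconds fast/slow of NTP time` wording:

```python
import re

# Illustrative `chronyc tracking` output (abbreviated):
SAMPLE = """\
Reference ID    : A9FEA97B (169.254.169.123)
Stratum         : 4
System time     : 2.481473 seconds fast of NTP time
Last offset     : +0.000021 seconds
"""

def system_offset_seconds(tracking_output: str) -> float:
    # "System time : X seconds fast/slow of NTP time"
    m = re.search(r"System time\s*:\s*([\d.]+) seconds (fast|slow)", tracking_output)
    if not m:
        raise ValueError("unexpected chronyc tracking format")
    offset = float(m.group(1))
    return offset if m.group(2) == "fast" else -offset

offset = system_offset_seconds(SAMPLE)
print(abs(offset) > 2.0)  # True -> restart or step the daemon
```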
4.2 Confirm Docker Container Time
Check that the n8n container sees the same UTC time as the host:
```bash
docker exec -it <n8n_container> date -u
```
Containers share the host kernel’s clock, so a differing UTC reading means the host itself is still off; a differing *local* time usually means the container lacks the host’s timezone files and falls back to its own defaults.
Fix the mount configuration (docker‑compose excerpt):
```yaml
services:
  n8n:
    image: n8nio/n8n
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
```
These mounts align the container’s timezone with the host’s (the kernel clock itself is always shared).
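To make the host/container comparison scriptable, capture epoch seconds (`date -u +%s` on the host, `docker exec <c> date -u +%s` in the container) and compare; the readings below are made-up examples:

```python
host_epoch = 1704110400       # e.g. output of `date -u +%s` on the host
container_epoch = 1704110407  # e.g. output of `docker exec <c> date -u +%s`

skew = abs(container_epoch - host_epoch)
print(skew)       # 7 seconds apart
print(skew <= 2)  # False -> fix host sync and the timezone mounts
```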
4.3 Check Database Server Clock
Run the appropriate query on your DB server:
```sql
-- Postgres
SELECT now() AT TIME ZONE 'UTC';

-- MySQL
SELECT UTC_TIMESTAMP();
```
If the DB time diverges, sync the DB host with the same NTP method you used for the n8n host.
4.4 Align n8n’s Internal Timezone
Set the environment variable to force UTC throughout n8n:
```bash
# In .env or docker-compose
N8N_TIMEZONE=UTC
```
This eliminates any surprises caused by local‑timezone conversions.
4.5 Re‑deploy n8n After Sync
Pull the latest image and recreate the container so it reads the corrected clock:
```bash
docker compose pull n8n
docker compose up -d --force-recreate n8n
```
A clean redeploy is usually faster than hunting for hidden timezone caches, and it guarantees the container starts against the corrected system clock.
5. Preventive Monitoring & Alerts
Proactively catch drift before it hurts your workflows.
| Tool | Metric / Check | Alert Threshold | How to Implement |
|---|---|---|---|
| Prometheus | `node_timex_offset_seconds` (via node_exporter) | Offset > 0.5 s | Add the `timex` collector to node_exporter. |
| Grafana | “Clock Drift” panel (graph of offset over time) | Persistent drift > 2 s | Create alert channel to Slack/Email. |
| Health‑check endpoint | Custom script returning `clockSync: true/false` | `false` (checked every 5 min) | Add a tiny Node.js script inside the n8n container. |
| Kubernetes | Liveness probe that checks `date -u` against NTP | Failure → pod restart | Use a sidecar container that runs the check. |
In a high‑throughput cluster, run a dedicated “time‑sync sidecar” that constantly monitors offset and can automatically restart the n8n pod if drift exceeds 1 s.
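The Prometheus row above can be turned into an alerting rule; the group and alert names below are illustrative, not a required convention:

```yaml
# Prometheus alerting rule: fire when clock offset exceeds 0.5 s for 5 minutes
groups:
  - name: clock-drift
    rules:
      - alert: HostClockDrift
        expr: abs(node_timex_offset_seconds) > 0.5
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Clock offset on {{ $labels.instance }} exceeds 0.5 s"
```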
6. Advanced Troubleshooting Scenarios
6.1 Intermittent Drift Only During Heavy Load
Root cause: CPU throttling under cgroup limits.
Fix: Give the container more CPU resources:
```yaml
services:
  n8n:
    deploy:
      resources:
        limits:
          cpus: "2.0"
```
6.2 Drift After System Sleep / Hibernation
Root cause: Host resumes without re‑syncing NTP.
Fix: Create a systemd service that forces a time step on resume:
```ini
# /etc/systemd/system/chrony-resume.service
[Unit]
Description=Force chrony to correct time after resume
After=suspend.target

[Service]
Type=oneshot
ExecStart=/usr/bin/chronyc makestep

[Install]
WantedBy=suspend.target
```
Enable it with systemctl enable chrony-resume.service.
6.3 Webhook “Signature expired” Only for Specific Providers
Root cause: Provider (e.g., Stripe) uses a stricter 5‑second window, while n8n’s clock is a few seconds ahead.
Fix: Signature validation compares the provider’s timestamp against n8n’s local clock, so tighten sync on the n8n host until the offset stays well inside the provider’s window (force an immediate correction with `chronyc makestep` if needed). If webhooks reach n8n through a proxy or tunnel, make sure that machine is NTP‑synced as well:

```bash
# Example environment variable pointing n8n at the proxy
N8N_WEBHOOK_TUNNEL_URL=https://my-proxy.example.com
```
7. Fix n8n Clock Drift in Minutes: Checklist
| Step | Action |
|---|---|
| 1 | date -u – confirm host UTC ≤ 2 s offset |
| 2 | Restart NTP (systemctl restart chronyd or ntp) |
| 3 | Verify container time: docker exec <c> date -u |
| 4 | Mount host timezone files into container (see docker‑compose snippets) |
| 5 | Ensure DB server time matches host |
| 6 | Set N8N_TIMEZONE=UTC in env |
| 7 | Re‑deploy n8n container (docker compose pull && up -d --force-recreate) |
| 8 | Add Prometheus `node_timex_offset_seconds` alert (fires when offset > 0.5 s) |
| 9 | Test a cron workflow and a webhook to confirm normal timing |



