
Step-by-Step Guide to Fixing the n8n "Webhook Payload Too Large" Error
Who this is for: Developers and DevOps engineers running n8n in Docker or behind a reverse proxy who need to accept large JSON or file payloads via webhooks. We cover related failures in detail in our n8n Webhook Errors guide.
Quick Diagnosis
The error occurs when a request exceeds n8n’s default 16 MB body limit. Raise the limit with the N8N_PAYLOAD_SIZE_MAX environment variable (the value is in MB) and, if you use a reverse proxy, increase its body-size setting (e.g., client_max_body_size in Nginx). Restart n8n and the proxy, then re-trigger the webhook.
1. Why n8n Rejects Large Webhook Payloads
Before changing any configuration, check the n8n logs and resolve any existing errors first.
| Root Cause | Where It Happens | Default Limit |
|---|---|---|
| n8n core – Express JSON parser | Inside the n8n runtime | 16 MB (`N8N_PAYLOAD_SIZE_MAX=16`) |
| Reverse proxy – Nginx/Traefik/Apache | Front-end web server | Varies (Nginx: 1 MB by default) |
| Container runtime – Docker/Kubernetes limits | Docker daemon or pod spec | Usually none, but may be constrained by `--memory`/`--ulimit` |
If any layer blocks the request before it reaches n8n, you’ll see an HTTP 413 “payload too large” response.
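Before touching any configuration, you can check locally whether a payload will trip the limit at all. A minimal sketch, assuming n8n’s default of 16 MB (recent versions; substitute your configured value) and using a generated 20 MB sample file in place of a real payload:

```bash
# Compare a payload's size against n8n's default body limit (16 MB in recent versions)
LIMIT=$((16 * 1024 * 1024))
# 20 MB sample file standing in for a real webhook payload
head -c $((20 * 1024 * 1024)) /dev/zero > payload.json
SIZE=$(wc -c < payload.json)
if [ "$SIZE" -gt "$LIMIT" ]; then
  echo "413 expected: payload exceeds the default limit"
fi
```

If the file is already under the limit, the 413 is coming from a proxy or runtime layer instead.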
2. Raising n8n’s Internal Payload Limit
If any database migrations or n8n version upgrades are pending, complete them before continuing with the setup.
2.1 Using Environment Variables (Docker‑Compose)
Define the limit in `docker-compose.yml`:

```yaml
services:
  n8n:
    image: n8nio/n8n
    environment:
      - N8N_PAYLOAD_SIZE_MAX=50   # raise the body limit to 50 MB (default: 16)
      - N8N_HOST=0.0.0.0
    ports:
      - "5678:5678"
```
Warning: Setting the limit too high (e.g., above 500 MB) opens the door to memory-exhaustion attacks. Keep it as low as your use case requires and monitor container RAM usage.
2.2 Using Docker CLI
```bash
docker run -d \
  -p 5678:5678 \
  -e N8N_PAYLOAD_SIZE_MAX=50 \
  n8nio/n8n
```
2.3 Using a .env File (Standalone n8n)
Create a `.env` file next to the binary:

```
N8N_PAYLOAD_SIZE_MAX=25
```
Start n8n; the file is loaded automatically:

```bash
n8n start
```
2.4 Verifying the New Limit
```bash
curl -X POST http://localhost:5678/webhook-test \
  -H "Content-Type: application/json" \
  --data-binary @large-payload.json \
  -w "\nStatus: %{http_code}\n"
```
A 200 (or your workflow’s expected code) confirms the limit is sufficient.
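If you don’t have an oversized payload at hand, you can generate one for the test above. This sketch produces a roughly 20 MB JSON file named `large-payload.json` to match the curl command; `tr -d '\n'` keeps the base64 output on one line so the JSON stays valid:

```bash
# Build a ~20 MB JSON payload: 15 MB of raw bytes grows to ~20 MB after base64
printf '{"data":"' > large-payload.json
head -c $((15 * 1024 * 1024)) /dev/zero | base64 | tr -d '\n' >> large-payload.json
printf '"}' >> large-payload.json
wc -c < large-payload.json   # prints the final size (~20 MB)
```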
3. Adjusting the Reverse Proxy
The proxy must allow at least the same body size as n8n’s N8N_PAYLOAD_SIZE_MAX.
3.1 Nginx
Create a dedicated server block (note that `proxy_pass` must live inside a `location` block):

```nginx
server {
    listen 80;
    server_name webhook.example.com;

    client_max_body_size 60M;   # >= n8n's payload limit

    location / {
        proxy_set_header Host $host;
        proxy_pass http://localhost:5678;
    }
}
```
Test and reload the configuration:

```bash
sudo nginx -t && sudo nginx -s reload
```
3.2 Traefik (Docker Labels)
Add a buffering middleware to the service definition and configure the maximum request body size:

```yaml
services:
  n8n:
    image: n8nio/n8n
    labels:
      - "traefik.http.routers.n8n.rule=Host(`webhook.example.com`)"
      - "traefik.http.middlewares.n8n-buffer.buffering.maxRequestBodyBytes=62914560"  # 60 MB
      - "traefik.http.routers.n8n.middlewares=n8n-buffer"
```
Note: Traefik’s buffering middleware holds the body in memory; for payloads above 100 MB consider a streaming approach (e.g., upload to S3 first).
3.3 Apache
Add a limit directive in the virtual host or `.htaccess` (Apache does not allow inline comments after a directive):

```apache
# Allow request bodies up to 60 MB
LimitRequestBody 62914560
```
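Both Traefik’s `maxRequestBodyBytes` and Apache’s `LimitRequestBody` take a byte count. A quick sanity check for the conversion (the helper name is made up for illustration):

```bash
# Convert megabytes to the byte values these directives expect
mb_to_bytes() { echo $(( $1 * 1024 * 1024 )); }
mb_to_bytes 60   # -> 62914560, the value used above
```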
4. Best Practices for Very Large Files
| Strategy | When to Use | How to Implement |
|---|---|---|
| Chunked upload to S3 / Azure Blob | Files > 100 MB | Use a pre‑signed URL, then send the URL in a lightweight webhook payload. |
| Base64‑encoded data | Small binary payloads (< 5 MB) | Encode client‑side, decode inside the workflow with a “Set” node ($json["data"]). |
| Streaming via HTTP‑GET | Source can serve a public URL | Send only the URL; downstream nodes fetch the file with the “HTTP Request” node. |
Warning: Never store raw binary data directly in n8n’s execution context for production workloads; it inflates the internal SQLite database and degrades performance as executions accumulate.
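For the base64 strategy above, the client-side round-trip looks roughly like this (file names are illustrative; `base64 -d` is the GNU/Linux decode flag, macOS uses `-D`):

```bash
# Encode a small binary file, then decode and verify the round-trip
head -c 1024 /dev/urandom > original.bin
ENCODED=$(base64 original.bin | tr -d '\n')
printf '%s' "$ENCODED" | base64 -d > decoded.bin
cmp -s original.bin decoded.bin && echo "round-trip OK"
```

Inside the workflow, `$ENCODED` would arrive as the `data` field of the webhook body.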
5. Troubleshooting Checklist
| Check | How to Verify |
|---|---|
| n8n limit | Run `printenv N8N_PAYLOAD_SIZE_MAX` inside the container; confirm it matches your `.env` or docker-compose value. |
| Proxy limit | `nginx -T \| grep client_max_body_size`, or inspect the Traefik dashboard. |
| Actual request size | Run the upload with `curl -v … --data-binary @file.json` and check the request’s `Content-Length` header. |
| Memory pressure | `docker stats n8n` – spikes above 80 % of allocated RAM indicate the limit is too high. |
| Error logs | `docker logs n8n` – search for `PayloadTooLargeError` or `413 Request Entity Too Large`. |
| Network timeout | Ensure the proxy timeout (`proxy_read_timeout` in Nginx) exceeds the upload duration. |
Adjust the failing component and repeat the verification step.
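The first two checks reduce to one inequality: the proxy’s limit must be at least n8n’s. A minimal sketch with hypothetical values (a 50 MB n8n limit, a 60M Nginx limit); the `to_bytes` helper is made up for illustration:

```bash
# Normalize both limits to bytes, then compare
to_bytes() {
  case "$1" in
    *[Mm]) echo $(( ${1%?} * 1024 * 1024 )) ;;
    *[Kk]) echo $(( ${1%?} * 1024 )) ;;
    *)     echo "$1" ;;
  esac
}
N8N_LIMIT=$(to_bytes 50M)     # n8n's configured 50 MB limit
PROXY_LIMIT=$(to_bytes 60M)   # from client_max_body_size 60M
[ "$PROXY_LIMIT" -ge "$N8N_LIMIT" ] && echo "proxy limit OK"
```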
6. Production‑Ready Configuration Example
Docker Compose (production)

```yaml
version: "3.8"
services:
  n8n:
    image: n8nio/n8n:latest
    restart: unless-stopped
    environment:
      - N8N_HOST=0.0.0.0
      - N8N_PORT=5678
      - N8N_PAYLOAD_SIZE_MAX=50
      - NODE_ENV=production
    ports:
      - "5678:5678"
    volumes:
      - n8n-data:/home/node/.n8n
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:5678/healthz"]
      interval: 30s
      timeout: 10s
      retries: 3
volumes:
  n8n-data:
```
Nginx (production)

```nginx
server {
    listen 80;
    server_name webhook.example.com;

    client_max_body_size 55M;   # slightly above the n8n limit

    proxy_connect_timeout 300s;
    proxy_read_timeout 300s;
    proxy_send_timeout 300s;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://n8n:5678;
    }
}
```
Note: The proxy_*_timeout values prevent 504 errors during long uploads. Adjust them to your network bandwidth.
7. When to Prefer an Alternative Approach
- Payload > 200 MB – Even with increased limits, memory usage spikes. Use a cloud‑storage pre‑signed URL pattern.
- Frequent large uploads – Off‑load to a dedicated ingestion service (e.g., MinIO) and let n8n process only metadata.
- Compliance requirements – Store raw files outside n8n to keep audit logs separate.
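With the pre-signed-URL pattern, the webhook body shrinks to metadata only. A sketch of such a lightweight payload (the URL, file name, and size are placeholders):

```bash
# Build a webhook payload that references the file instead of embedding it
FILE_URL="https://example-bucket.s3.amazonaws.com/uploads/report.pdf"
PAYLOAD=$(printf '{"file_url":"%s","size_bytes":%d}' "$FILE_URL" 104857600)
echo "$PAYLOAD"
```

Downstream, an n8n HTTP Request node fetches the file from `file_url` only when the workflow actually needs it.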
Conclusion
Increasing N8N_PAYLOAD_SIZE_MAX and aligning the reverse proxy’s body-size limit resolves the “payload too large” error while keeping the system stable. Monitor memory usage, avoid storing huge binaries directly in n8n, and consider off-loading very large files to object storage. These steps ensure reliable webhook handling in production environments without sacrificing performance or security.



