
Who this is for: Developers and DevOps engineers running n8n in Queue Mode behind corporate HTTP proxies. We cover this in detail in the n8n Queue Mode Errors Guide.
## Quick Diagnosis

**Symptom:** n8n starts in Queue Mode but logs `Proxy configuration error – unable to reach queue endpoint` (or similar).

**Root cause:** The HTTP proxy defined in the environment intercepts or rewrites internal queue traffic (Redis, RabbitMQ, or the built-in memory queue), preventing n8n from publishing or consuming jobs.
## Featured-Snippet Solution

- Remove or correct the `HTTP_PROXY`/`HTTPS_PROXY` variables for the queue host.
- If a proxy is required for outbound internet calls, set `NO_PROXY` (or `no_proxy`) to include the queue host (e.g., `NO_PROXY=localhost,127.0.0.1,queue.my-domain.com`).
- Restart n8n and verify the queue health endpoint (`/health/queue`) returns `200 OK`.
## 1. What Triggers the Proxy Configuration Error in n8n Queue Mode?

Before continuing, verify that every queue-mode environment variable your deployment requires is actually set; a missing variable can produce errors that look very similar to proxy failures.
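One way to catch missing variables early is a small guard script. This is a sketch: `EXECUTIONS_MODE=queue` is the documented switch that enables Queue Mode, but the exact set of queue-connection variables differs between n8n versions, so adjust the list to your deployment.

```shell
# Fail fast if any required environment variable is unset or empty.
require_env() {
  missing=0
  for name in "$@"; do
    eval "val=\${$name:-}"
    if [ -z "$val" ]; then
      echo "missing required variable: $name" >&2
      missing=1
    fi
  done
  return "$missing"
}

# Example: Queue Mode needs at least the execution mode; add your
# queue-connection variables to the list as appropriate.
export EXECUTIONS_MODE=queue
require_env EXECUTIONS_MODE && echo "all required variables present"
```

Run this at container start (or in an init script) so a misconfigured pod fails loudly instead of surfacing later as a proxy-looking error.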
| Trigger | Typical Log Message | Why It Happens |
|---|---|---|
| Global `HTTP_PROXY`/`HTTPS_PROXY` env vars point to a corporate forward proxy | `Proxy configuration error – unable to reach queue endpoint` | n8n routes all outbound HTTP traffic, including internal queue connections, through the proxy, which refuses or rewrites the request. |
| Missing `NO_PROXY` entry for the queue service | Same as above | The proxy treats the queue's hostname/IP as an external address and attempts to forward it, causing connection refusal. |
| Proxy authentication mismatch (username/password) | `Proxy authentication failed` | Queue traffic is sent with wrong credentials; the proxy drops the request. |
| Docker/Kubernetes sidecar proxy interfering with the internal network | `Error: connect ECONNREFUSED` | The container network namespace routes internal traffic through the sidecar's proxy. |
**Key takeaway:** Queue Mode expects direct, unauthenticated communication with the queue backend. Any proxy that intercepts this traffic must be explicitly bypassed.
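Before editing anything, you can check whether a given host would bypass the proxy under your current `NO_PROXY` value. The helper below is a sketch that approximates common client matching (exact hostname or domain-suffix); real HTTP libraries differ in edge cases such as ports, leading dots, and CIDR ranges.

```shell
# is_bypassed HOST LIST — succeed if HOST matches an entry in the
# comma-separated LIST, either exactly or as a domain suffix.
is_bypassed() {
  host=$1
  list=$2
  oldifs=$IFS
  IFS=','
  set -- $list
  IFS=$oldifs
  for entry in "$@"; do
    entry=$(printf '%s' "$entry" | tr -d ' ')
    [ -n "$entry" ] || continue
    case "$host" in
      "$entry"|*".$entry") return 0 ;;
    esac
  done
  return 1
}

is_bypassed queue.my-domain.com "localhost,127.0.0.1,queue.my-domain.com" \
  && echo "queue host bypasses the proxy"
```

Call it with your queue hostname and the value of `$NO_PROXY` to confirm the bypass before restarting anything.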
## 2. Prerequisite Checklist: Verify Your Environment

| Item | Recommended Setting |
|---|---|
| n8n version | ≥ 1.0.0 (latest LTS) |
| Queue backend | Redis 6+, RabbitMQ 3.8+, or built-in memory |
| Proxy variables | `HTTP_PROXY`, `HTTPS_PROXY` **only** for outbound internet calls |
| `NO_PROXY` / `no_proxy` | Include the queue host (e.g., `queue.my-domain.com,127.0.0.1`) |
| Container runtime | Docker 20+, Kubernetes 1.22+ |
**EEFA note:** Older n8n releases hard-code proxy handling that cannot be overridden; upgrade to a current release before troubleshooting further.
## 3. Step-by-Step Fix: Adjust Proxy Settings for Queue Mode

### 3.1. Inspect Current Proxy Environment

Purpose: Confirm which proxy variables are currently exported inside the n8n container or host.

```bash
# List all proxy-related environment variables
printenv | grep -i proxy

# Example output that causes the error
# HTTP_PROXY=http://proxy.corp.com:3128
# HTTPS_PROXY=http://proxy.corp.com:3128
# NO_PROXY=
```
### 3.2. Update `.env` (or Docker Compose) to Bypass the Queue Host

**Option A – Direct edit of `.env`**

```bash
# Existing proxy for external APIs
HTTP_PROXY=http://proxy.corp.com:3128
HTTPS_PROXY=http://proxy.corp.com:3128

# Add the queue host to NO_PROXY (no trailing comma!)
NO_PROXY=localhost,127.0.0.1,queue.my-domain.com
```
**Option B – Docker Compose override**

```yaml
services:
  n8n:
    image: n8nio/n8n
    environment:
      - HTTP_PROXY=http://proxy.corp.com:3128
      - HTTPS_PROXY=http://proxy.corp.com:3128
      - NO_PROXY=localhost,127.0.0.1,queue.my-domain.com
    # other config …
```
**EEFA:** Do not add a trailing comma in `NO_PROXY`; some clients treat the resulting empty entry as an empty host, and the bypass silently stops matching.
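A quick sanity check for malformed `NO_PROXY` values (trailing or leading comma, double comma) can be scripted; this is a minimal sketch that only detects empty entries, not other formatting problems.

```shell
# Reject NO_PROXY values containing empty entries (leading/trailing comma
# or a double comma), which can silently break the proxy bypass.
check_no_proxy() {
  case "$1" in
    ""|,*|*,|*,,*)
      echo "malformed NO_PROXY: empty entry detected" >&2
      return 1
      ;;
  esac
  echo "NO_PROXY looks well-formed"
}

check_no_proxy "localhost,127.0.0.1,queue.my-domain.com"
```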
### 3.3. Restart n8n and Verify Queue Health

Purpose: Apply the new environment and confirm the queue backend is reachable.

```bash
# Docker Compose
docker compose up -d n8n

# Kubernetes
kubectl rollout restart deployment/n8n
```

Now query the health endpoint:

```bash
# Request the queue health check
curl -s http://localhost:5678/health/queue | jq .
```
Expected JSON response:

```json
{
  "queue": "ok",
  "backend": "redis",
  "latencyMs": 12
}
```
If the response is not 200 OK, continue to the advanced section.
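In CI or startup scripts it is often useful to poll the health endpoint until the queue reports healthy rather than checking once. A sketch, assuming the default port and endpoint path used above:

```shell
# Poll the queue health endpoint until it returns HTTP 200 or the retry
# budget is exhausted (2-second interval between attempts).
wait_for_queue() {
  url=${1:-http://localhost:5678/health/queue}
  tries=${2:-30}
  i=0
  while [ "$i" -lt "$tries" ]; do
    code=$(curl -s -o /dev/null -w '%{http_code}' "$url" || true)
    if [ "$code" = "200" ]; then
      echo "queue healthy after $((i + 1)) attempt(s)"
      return 0
    fi
    i=$((i + 1))
    sleep 2
  done
  echo "queue not healthy after $tries attempts" >&2
  return 1
}
```

Invoke it as `wait_for_queue` (defaults) or `wait_for_queue http://n8n:5678/health/queue 60` in a deployment pipeline.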
## 4. Advanced: Deploy a Dedicated Queue Proxy (Nginx or Traefik)

When corporate policy forces all outbound traffic through a forward proxy, you can isolate queue traffic by front-ending the queue with a lightweight reverse proxy that **does not** forward to the corporate proxy. Before continuing, double-check the environment variables from the previous section; an incorrect execution mode produces similar symptoms.
| Feature | Nginx | Traefik |
|---|---|---|
| Configuration | Small, static file | Dynamic via labels |
| TLS termination | Simple `proxy_pass` | Built-in ACME |
| Health checks | `proxy_next_upstream` | `healthcheck.path` |
| Docker image | `nginx:alpine` | `traefik:latest` |
### 4.1. Nginx Example (Docker)

Purpose: Expose Redis on port 6379 without invoking the corporate proxy.

```yaml
version: "3.8"
services:
  queue-proxy:
    image: nginx:alpine
    ports:
      - "6379:6379"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    restart: unless-stopped

  n8n:
    image: n8nio/n8n
    environment:
      - N8N_QUEUE_BROKER=redis://queue-proxy:6379
      - HTTP_PROXY=http://proxy.corp.com:3128
      - HTTPS_PROXY=http://proxy.corp.com:3128
      - NO_PROXY=localhost,127.0.0.1,queue-proxy
    depends_on:
      - queue-proxy
```
`nginx.conf` (stream module forwards to the real Redis service):

```nginx
events {}

stream {
    upstream redis_backend {
        server redis:6379;
    }

    server {
        listen 6379;
        proxy_pass redis_backend;
        # Nginx's stream proxy connects directly to the upstream and
        # never consults HTTP_PROXY/HTTPS_PROXY environment variables.
    }
}
```
### 4.2. Traefik Example (Kubernetes)

Purpose: Provide a TCP entry point for Redis that bypasses sidecar proxies.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: queue-proxy
spec:
  selector:
    app: redis
  ports:
    - port: 6379
      targetPort: 6379
---
apiVersion: traefik.io/v1alpha1
kind: IngressRouteTCP
metadata:
  name: queue-proxy-ingress
spec:
  entryPoints:
    - redis
  routes:
    - match: HostSNI(`*`)
      services:
        - name: queue-proxy
          port: 6379
```
**EEFA:** Ensure the Traefik entry point `redis` is **not** configured with `forwardedHeaders.trustedIPs` pointing at the corporate proxy; otherwise the same error reappears.
## 5. Common Pitfalls & EEFA (Error-Free Assurance) Notes

| Pitfall | Why It Happens | Fix |
|---|---|---|
| Adding the queue host to `NO_PROXY` with its port (e.g., `queue.my-domain.com:6379`) | `NO_PROXY` matches only the hostname; extra characters break the match. | Use the hostname alone: `NO_PROXY=queue.my-domain.com`. |
| Using uppercase `NO_PROXY` in some containers while the runtime expects lowercase `no_proxy` | Variable name mismatch. | Set both: `NO_PROXY=…` and `no_proxy=…`. |
| Proxy credentials contain special characters (`@`, `:`) that are not URL-encoded | The proxy cannot parse the credentials. | URL-encode them, e.g., `http://user%40domain:pa%3Ass@proxy.corp.com:3128`. |
| Running n8n behind a **sidecar** (e.g., Istio) that automatically injects `HTTP_PROXY` into every pod | The sidecar forces proxy usage for all outbound traffic. | Disable injection for the pod with the annotation `sidecar.istio.io/inject: "false"`. |
| Forgetting to restart **all** containers after env var changes (Docker Compose may only restart the changed service) | Old containers still use stale variables. | Run `docker compose down && docker compose up -d`, or `kubectl rollout restart deployment/n8n` for the whole deployment. |
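For the credential-encoding pitfall above, a small helper can percent-encode the two problem characters. This sketch handles only `@` and `:`; a full implementation would encode every reserved character.

```shell
# Percent-encode '@' and ':' in a proxy username or password so the proxy
# URL parses unambiguously. Only these two characters are handled here.
encode_cred() {
  printf '%s' "$1" | sed -e 's/@/%40/g' -e 's/:/%3A/g'
}

encode_cred 'user@domain'   # -> user%40domain
```

Use the encoded output when assembling the proxy URL, e.g. `HTTP_PROXY=http://$(encode_cred 'user@domain'):$(encode_cred 'pa:ss')@proxy.corp.com:3128`.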
## 6. Validation Checklist: Confirm the Queue Is Operational

| Check | Command / Action | Expected Result |
|---|---|---|
| Proxy vars correctly set | `printenv \| grep -i proxy` | `NO_PROXY` includes the queue host |
| Queue health endpoint reachable | `curl -s http://localhost:5678/health/queue` | JSON with `"queue":"ok"` |
| n8n logs show no proxy errors | `docker logs n8n \| grep "Proxy"` | No lines containing `Proxy configuration error` |
| Jobs are processed | Trigger a simple workflow that queues a task; watch the **Executions** list | Execution status changes to **Success** within seconds |
| No sidecar proxy interference | `kubectl describe pod n8n \| grep sidecar` | No sidecar containers listed |
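The health-endpoint check in the table can be scripted against the JSON body itself. A sketch that uses `grep` instead of `jq` so it also works in minimal containers:

```shell
# Succeed if a /health/queue JSON payload reports the queue as ok.
queue_ok() {
  printf '%s' "$1" | grep -q '"queue"[[:space:]]*:[[:space:]]*"ok"'
}

queue_ok '{"queue":"ok","backend":"redis","latencyMs":12}' \
  && echo "queue reports ok"
```

Combine it with the earlier `curl` call, e.g. `queue_ok "$(curl -s http://localhost:5678/health/queue)"`, to fail a deployment script when the queue is down.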
## 7. Persistent Issues: Debugging Tips & Log Locations

- **n8n logs** – `docker logs n8n` or `kubectl logs deployment/n8n`. Look for `ERR_PROXY`, `ECONNREFUSED`, or queue-initialization messages.
- **Queue backend logs** – Redis (`docker logs redis`) or RabbitMQ (`docker logs rabbitmq`). Verify connection attempts from the n8n container IP.
- **Network trace** – Capture traffic inside the n8n container to ensure it bypasses the proxy:

```bash
docker exec -it n8n apk add tcpdump
docker exec -it n8n tcpdump -i any port 6379 -c 5
```

- **Environment diff** – Compare a working instance vs. the failing one:

```bash
diff <(docker exec good-n8n printenv) <(docker exec bad-n8n printenv) | grep -i proxy
```
If the error persists after these steps, open an issue on the n8n GitHub with relevant logs and proxy configuration details.
## Conclusion

By correctly scoping proxy variables and explicitly bypassing the queue host with `NO_PROXY` (or by isolating the queue behind a dedicated reverse proxy), you eliminate the "Proxy configuration error" that blocks n8n's Queue Mode. The steps above provide a reproducible, production-ready fix that restores direct, low-latency communication with Redis, RabbitMQ, or the built-in memory queue, ensuring reliable job processing in real-world deployments.



