Tutorial

Nginx Reverse Proxy Setup: Route Traffic to Backend Applications

March 28, 2026


A reverse proxy sits between clients and your backend servers, forwarding requests and returning responses on behalf of the backend. Nginx excels in this role thanks to its event-driven architecture, minimal memory footprint, and ability to handle thousands of concurrent connections. Whether you are running a Node.js API, a Python Flask application, a Docker container, or a Java microservice, Nginx as a reverse proxy provides SSL termination, load balancing, caching, and a unified entry point for all your services.

This guide covers everything from basic proxy_pass configuration to advanced setups including WebSocket support, load balancing, and caching strategies.

How a Reverse Proxy Works

Client Browser → Nginx (Port 443) → Backend (Port 3000)

Without a reverse proxy, your backend application binds directly to port 80 or 443 and handles everything: SSL, static files, request parsing, and application logic. With Nginx in front, the responsibilities are separated. Nginx handles SSL termination, serves static files, manages connections, and forwards dynamic requests to your backend. Your application focuses solely on business logic.

Without Reverse Proxy

  • Backend handles SSL certificates directly
  • Backend serves static files (inefficient)
  • One backend = one domain
  • No connection buffering
  • Need to run as root for port 80/443

With Nginx Reverse Proxy

  • Nginx terminates SSL (backend sees plain HTTP)
  • Nginx serves static files natively (fast)
  • Multiple backends behind one IP
  • Connection buffering protects backends
  • Backend runs on unprivileged port

Basic Reverse Proxy Configuration

The simplest reverse proxy configuration forwards all requests to a backend application running on localhost.

server {
    listen 80;
    server_name app.example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

This forwards all traffic for app.example.com to a backend running on port 3000. But the proxy headers are what make this configuration production-ready. Without them, your backend would see every request coming from 127.0.0.1 instead of the actual client IP.

Essential Proxy Headers

Proxy headers ensure your backend application receives accurate information about the original client request. Without these, logging, rate limiting, and geolocation all break.

Header             Purpose                                        Nginx Variable
Host               The original hostname the client requested     $host
X-Real-IP          The actual client IP address                   $remote_addr
X-Forwarded-For    Chain of proxies the request passed through    $proxy_add_x_forwarded_for
X-Forwarded-Proto  Original protocol (http or https)              $scheme
X-Forwarded-Host   Original Host header before proxying           $host
X-Forwarded-Port   Original port the client connected to          $server_port
Security Warning: Trust Proxy Headers Carefully
Never blindly trust X-Forwarded-For or X-Real-IP headers from untrusted sources. A malicious client can send fake headers to bypass IP-based restrictions. In your backend application, configure the trusted proxy list (in Express.js: app.set('trust proxy', 'loopback'); in Laravel: TrustProxies middleware). Only trust headers from known proxy IPs.
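On the Nginx side, the ngx_http_realip_module can restore the real client address when Nginx itself sits behind another trusted proxy or load balancer (a CDN, for instance), so that $remote_addr is accurate before it is passed downstream. A minimal sketch; the 10.0.0.0/8 range here is a placeholder you would replace with your actual proxy addresses:

```nginx
# Only rewrite the client address for requests arriving from trusted proxies
set_real_ip_from 10.0.0.0/8;      # placeholder: your proxy/CDN address range
real_ip_header X-Forwarded-For;   # take the client IP from this header
real_ip_recursive on;             # walk past trusted addresses in the chain
```

With this in place, $remote_addr (and therefore the X-Real-IP header Nginx sets) reflects the original client rather than the intermediate proxy.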

Reverse Proxy with SSL Termination

SSL termination means Nginx handles the HTTPS encryption/decryption, and communicates with the backend over plain HTTP. This offloads the cryptographic work from your application and centralizes certificate management.

server {
    listen 80;
    server_name app.example.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    server_name app.example.com;

    ssl_certificate /etc/letsencrypt/live/app.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256;
    ssl_prefer_server_ciphers off;

    # Security headers
    add_header Strict-Transport-Security "max-age=63072000" always;
    add_header X-Content-Type-Options nosniff;
    add_header X-Frame-Options DENY;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
Why X-Forwarded-Proto Matters
When Nginx terminates SSL and forwards the request over HTTP, your backend sees the request as plain HTTP. Without X-Forwarded-Proto: https, your application may generate HTTP links instead of HTTPS links, cause redirect loops, or flag the connection as insecure. Most frameworks (Express, Django, Laravel, Rails) check this header to determine the original protocol.

WebSocket Support

WebSocket connections start as HTTP and then "upgrade" to a persistent bidirectional connection. Nginx needs specific configuration to handle this upgrade properly, because by default it does not forward the Upgrade and Connection headers.

# Define map for WebSocket upgrade handling
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

server {
    listen 443 ssl http2;
    server_name app.example.com;

    # ... SSL config ...

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Increase timeouts for long-lived WebSocket connections
        proxy_read_timeout 86400s;
        proxy_send_timeout 86400s;
    }
}
Three Essential WebSocket Settings
1. proxy_http_version 1.1 — WebSocket requires HTTP/1.1 (not 1.0, which is Nginx's default for proxied connections).
2. Upgrade and Connection headers — these trigger the protocol switch from HTTP to WebSocket.
3. Extended timeouts — default Nginx proxy timeout is 60 seconds. WebSocket connections are long-lived and will be killed without extended timeouts.
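If only part of your application uses WebSockets, you can scope the upgrade handling to that path and keep default timeouts for ordinary HTTP traffic. A sketch, assuming a hypothetical /ws endpoint (adjust to wherever your application actually serves WebSockets):

```nginx
# WebSocket traffic only: upgrade headers and long timeouts
location /ws {
    proxy_pass http://127.0.0.1:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    proxy_read_timeout 86400s;
}

# Everything else: plain proxying with the default 60s timeouts
location / {
    proxy_pass http://127.0.0.1:3000;
    proxy_set_header Host $host;
}
```

This keeps stalled HTTP requests from tying up connections for a day while still letting WebSocket sessions run indefinitely.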

Load Balancing with Upstream

When your application runs on multiple backend servers (or multiple instances), Nginx can distribute traffic across them using an upstream block.

upstream backend_cluster {
    # Load balancing method (default: round-robin)
    least_conn;  # Send to server with fewest active connections

    server 127.0.0.1:3001 weight=3;  # Gets 3x more traffic
    server 127.0.0.1:3002 weight=1;
    server 127.0.0.1:3003 weight=1;
    server 127.0.0.1:3004 backup;    # Only used if others are down

    # Connection reuse (not a health check): keep idle connections open to
    # backends; requires proxy_http_version 1.1 and an empty Connection
    # header in the proxying location
    keepalive 32;
}

server {
    listen 443 ssl http2;
    server_name api.example.com;

    location / {
        proxy_pass http://backend_cluster;
        proxy_http_version 1.1;
        proxy_set_header Connection "";  # Required for upstream keepalive
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Method                 Behavior                                            Best For
round-robin (default)  Rotates through servers sequentially                Equal-capacity servers
least_conn             Sends to server with fewest active connections      Varying request duration
ip_hash                Same client IP always goes to same server           Session persistence
hash                   Custom hash key for routing decisions               Cache optimization
random two least_conn  Picks two random servers, sends to the least busy   Large clusters
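Open-source Nginx has no active health checks, but the server directive supports passive failure detection: after a number of failed attempts, a backend is temporarily taken out of rotation. A sketch with illustrative values (the defaults are max_fails=1 and fail_timeout=10s):

```nginx
upstream backend_cluster {
    least_conn;
    # Stop sending traffic to a server for 30s after 3 failed attempts
    server 127.0.0.1:3001 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:3002 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:3004 backup;   # Used only when all others are marked down
}
```

What counts as a "failed attempt" is governed by proxy_next_upstream (by default, connection errors and timeouts).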

Proxy Caching

Nginx can cache responses from your backend, dramatically reducing load and improving response times for frequently requested content.

# Define cache zone in http {} block
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=api_cache:10m
                 max_size=1g inactive=60m use_temp_path=off;

server {
    listen 443 ssl http2;
    server_name api.example.com;

    location /api/ {
        proxy_pass http://127.0.0.1:3000;
        proxy_cache api_cache;
        proxy_cache_valid 200 10m;  # Cache 200 responses for 10 min
        proxy_cache_valid 404 1m;   # Cache 404 responses for 1 min
        proxy_cache_use_stale error timeout updating;
        proxy_cache_lock on;        # Prevent thundering herd
        add_header X-Cache-Status $upstream_cache_status;
    }

    # Skip cache for authenticated requests
    location /api/user/ {
        proxy_pass http://127.0.0.1:3000;
        proxy_cache off;
    }
}
X-Cache-Status Header
Adding $upstream_cache_status as a response header lets you debug caching behavior. Possible values: HIT (served from cache), MISS (fetched from backend), EXPIRED (cache entry expired), BYPASS (cache was skipped), UPDATING (stale content served while updating). Check this header in your browser DevTools to verify caching is working.
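Instead of disabling the cache for whole locations, you can skip it per request, for example whenever a session cookie is present. A sketch, assuming a hypothetical cookie named session_id (use whatever cookie your application actually sets):

```nginx
location /api/ {
    proxy_pass http://127.0.0.1:3000;
    proxy_cache api_cache;
    proxy_cache_valid 200 10m;
    # Skip both cache lookup and cache storage for logged-in users
    # ($cookie_session_id is non-empty when the cookie is present)
    proxy_cache_bypass $cookie_session_id;
    proxy_no_cache $cookie_session_id;
}
```

Requests bypassed this way show BYPASS in the X-Cache-Status header, which makes the behavior easy to verify.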

Rate Limiting

Protect your backend from abuse and DDoS attacks with Nginx rate limiting.

# Define rate limit zones in http {} block
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;
limit_req_zone $binary_remote_addr zone=login_limit:10m rate=1r/s;

server {
    # General API: 10 requests/second per IP with burst
    location /api/ {
        limit_req zone=api_limit burst=20 nodelay;
        proxy_pass http://127.0.0.1:3000;
    }

    # Login endpoint: 1 request/second per IP (strict)
    location /api/auth/login {
        limit_req zone=login_limit burst=5;
        proxy_pass http://127.0.0.1:3000;
    }
}
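By default, Nginx rejects rate-limited requests with 503 Service Unavailable and logs them at the error level. For APIs, 429 Too Many Requests is the more accurate status and tells well-behaved clients to back off. A small sketch of the two related directives:

```nginx
# Return 429 instead of the default 503 when a client exceeds the limit
limit_req_status 429;
# Log rejected requests at warn instead of error to reduce log noise
limit_req_log_level warn;
```

Both directives can be set at the http, server, or location level alongside limit_req.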

Proxying to Docker Containers

Docker containers typically expose services on high-numbered ports. Nginx makes it seamless to route traffic to containers using domain names.

# Docker container running on port 8080
server {
    listen 443 ssl http2;
    server_name myapp.example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

# Multiple containers on different ports
server {
    listen 443 ssl http2;
    server_name grafana.example.com;
    location / { proxy_pass http://127.0.0.1:3000; ... }
}

server {
    listen 443 ssl http2;
    server_name gitlab.example.com;
    location / { proxy_pass http://127.0.0.1:8929; ... }
}
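When Nginx itself runs as a container on the same Docker network as the backends, you can proxy to container names instead of published host ports, using Docker's embedded DNS at 127.0.0.11. A sketch, assuming a hypothetical container named myapp listening on 8080:

```nginx
server {
    listen 443 ssl http2;
    server_name myapp.example.com;

    # Docker's embedded DNS; short TTL so container restarts are picked up
    resolver 127.0.0.11 valid=10s;

    location / {
        # Using a variable forces Nginx to resolve the name at request time
        # instead of once at startup (which fails if the container is down)
        set $upstream http://myapp:8080;
        proxy_pass $upstream;
    }
}
```

The variable indirection matters: with a literal hostname in proxy_pass, Nginx resolves it only when the configuration loads and will refuse to start if the container is not yet running.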

Common Mistakes and Fixes

Mistake                                  Symptom                                              Fix
Mismatched trailing slash in proxy_pass  Wrong path reaches backend (e.g., /api/api/users)    Match the trailing slash in location and proxy_pass consistently
No proxy headers                         Backend logs show 127.0.0.1 for all requests         Add X-Real-IP and X-Forwarded-For headers
Default proxy timeout (60s)              Long requests get 504 Gateway Timeout                Increase proxy_read_timeout and proxy_send_timeout
HTTP/1.0 to backend                      WebSocket connections fail                           Set proxy_http_version 1.1
Large request body rejected              413 Request Entity Too Large                         Increase client_max_body_size
Buffering too small                      Slow response for large payloads                     Tune proxy_buffer_size and proxy_buffers
Trailing Slash Trap
proxy_pass http://127.0.0.1:3000; (no trailing slash) forwards the full original URI. proxy_pass http://127.0.0.1:3000/; (with trailing slash) strips the matched location prefix. For location /api/, a request to /api/users becomes /users with the trailing slash, or stays /api/users without. Choose based on how your backend expects paths.
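The two behaviors side by side, for a request to /api/users (only one variant would exist in a real server block; they are shown together for comparison):

```nginx
# Variant A: no trailing slash. The full original URI is forwarded,
# so the backend receives /api/users
location /api/ {
    proxy_pass http://127.0.0.1:3000;
}

# Variant B: trailing slash. The matched /api/ prefix is stripped,
# so the backend receives /users
location /api/ {
    proxy_pass http://127.0.0.1:3000/;
}
```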

Testing and Verification

# Test Nginx configuration syntax
$ sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

# Reload without downtime
$ sudo nginx -s reload

# Test the proxy with curl
$ curl -I https://app.example.com
HTTP/2 200
server: nginx
x-powered-by: Express
x-cache-status: MISS

# Verify WebSocket upgrade
$ curl -i -N \
    -H "Connection: Upgrade" \
    -H "Upgrade: websocket" \
    -H "Sec-WebSocket-Version: 13" \
    -H "Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==" \
    https://app.example.com/ws
HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade

Automatic Reverse Proxy with Panelica

Writing Nginx reverse proxy configurations by hand is error-prone, especially when managing multiple backend applications with different requirements. Panelica automates this entire process through its domain management interface.

  • Auto Config — Set domain mode to Reverse Proxy and enter a backend port
  • Full Stack — Nginx config, SSL certificate, and WebSocket headers included

When you set a domain's web server mode to "Reverse Proxy" in Panelica, it automatically generates the complete Nginx configuration — including proxy headers, WebSocket support, SSL termination with automatic Let's Encrypt certificates, and security headers. Point any domain to a backend port and Panelica handles the rest. This is particularly useful for proxying to Docker containers or Node.js applications running on custom ports.

Key Takeaway
Nginx as a reverse proxy is one of the most versatile tools in your infrastructure. It provides SSL termination, load balancing, WebSocket support, caching, and rate limiting — all with minimal resource overhead. Master the essential headers (Host, X-Real-IP, X-Forwarded-For, X-Forwarded-Proto), understand the trailing slash behavior, and always test with nginx -t before reloading. Whether you configure it manually or through a panel, a properly configured reverse proxy is the foundation of a production-ready web architecture.