A reverse proxy sits between clients and your backend servers, forwarding requests and returning responses on behalf of the backend. Nginx excels in this role thanks to its event-driven architecture, minimal memory footprint, and ability to handle thousands of concurrent connections. Whether you are running a Node.js API, a Python Flask application, a Docker container, or a Java microservice, Nginx as a reverse proxy provides SSL termination, load balancing, caching, and a unified entry point for all your services.
This guide covers everything from basic proxy_pass configuration to advanced setups including WebSocket support, load balancing, and caching strategies.
How a Reverse Proxy Works
Without a reverse proxy, your backend application binds directly to port 80 or 443 and handles everything: SSL, static files, request parsing, and application logic. With Nginx in front, the responsibilities are separated. Nginx handles SSL termination, serves static files, manages connections, and forwards dynamic requests to your backend. Your application focuses solely on business logic.
Without Reverse Proxy
- Backend handles SSL certificates directly
- Backend serves static files (inefficient)
- One backend = one domain
- No connection buffering
- Need to run as root for port 80/443
With Nginx Reverse Proxy
- Nginx terminates SSL (backend sees plain HTTP)
- Nginx serves static files natively (fast)
- Multiple backends behind one IP
- Connection buffering protects backends
- Backend runs on unprivileged port
Basic Reverse Proxy Configuration
The simplest reverse proxy configuration forwards all requests to a backend application running on localhost.
```nginx
server {
    listen 80;
    server_name app.example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```
This forwards all traffic for app.example.com to a backend running on port 3000. But the proxy headers are what make this configuration production-ready. Without them, your backend would see every request coming from 127.0.0.1 instead of the actual client IP.
Essential Proxy Headers
Proxy headers ensure your backend application receives accurate information about the original client request. Without these, logging, rate limiting, and geolocation all break.
| Header | Purpose | Nginx Variable |
|---|---|---|
| `Host` | The original hostname the client requested | `$host` |
| `X-Real-IP` | The actual client IP address | `$remote_addr` |
| `X-Forwarded-For` | Chain of proxies the request passed through | `$proxy_add_x_forwarded_for` |
| `X-Forwarded-Proto` | Original protocol (http or https) | `$scheme` |
| `X-Forwarded-Host` | Original Host header before proxying | `$host` |
| `X-Forwarded-Port` | Original port the client connected to | `$server_port` |
Never blindly trust `X-Forwarded-For` or `X-Real-IP` headers from untrusted sources. A malicious client can send fake headers to bypass IP-based restrictions. In your backend application, configure the trusted proxy list (in Express.js: `app.set('trust proxy', 'loopback')`; in Laravel: the `TrustProxies` middleware). Only trust these headers when they come from known proxy IPs.
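The same principle applies when Nginx itself sits behind another proxy, load balancer, or CDN: the realip module can restore the true client address while trusting only known upstream IPs. A minimal sketch, assuming the module is available (it is compiled into most distribution packages); the `10.0.0.0/8` range is a placeholder for your own load balancer's network:

```nginx
# Sketch: only honor X-Forwarded-For when it arrives from a trusted hop.
# Requires ngx_http_realip_module; 10.0.0.0/8 is a placeholder range.
set_real_ip_from 10.0.0.0/8;      # your LB / CDN address range
real_ip_header X-Forwarded-For;   # take the client IP from this header
real_ip_recursive on;             # walk past all trusted hops in the chain
```

With this in place, `$remote_addr` reflects the real client, so the `proxy_set_header` lines above keep working unchanged.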
Reverse Proxy with SSL Termination
SSL termination means Nginx handles the HTTPS encryption/decryption, and communicates with the backend over plain HTTP. This offloads the cryptographic work from your application and centralizes certificate management.
```nginx
server {
    listen 80;
    server_name app.example.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    server_name app.example.com;

    ssl_certificate /etc/letsencrypt/live/app.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256;
    ssl_prefer_server_ciphers off;

    # Security headers
    add_header Strict-Transport-Security "max-age=63072000" always;
    add_header X-Content-Type-Options nosniff;
    add_header X-Frame-Options DENY;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```
When Nginx terminates SSL and forwards the request over HTTP, your backend sees the request as plain HTTP. Without `X-Forwarded-Proto: https`, your application may generate HTTP links instead of HTTPS links, cause redirect loops, or flag the connection as insecure. Most frameworks (Express, Django, Laravel, Rails) check this header to determine the original protocol.
WebSocket Support
WebSocket connections start as HTTP and then "upgrade" to a persistent bidirectional connection. Nginx needs specific configuration to handle this upgrade properly, because by default it does not forward the Upgrade and Connection headers.
```nginx
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    listen 443 ssl http2;
    server_name app.example.com;

    # ... SSL config ...

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Increase timeouts for long-lived WebSocket connections
        proxy_read_timeout 86400s;
        proxy_send_timeout 86400s;
    }
}
```
1. `proxy_http_version 1.1`: WebSocket requires HTTP/1.1 (not HTTP/1.0, which is Nginx's default for proxied connections).
2. The `Upgrade` and `Connection` headers: these trigger the protocol switch from HTTP to WebSocket.
3. Extended timeouts: Nginx's default proxy timeout is 60 seconds. WebSocket connections are long-lived and will be killed without longer timeouts.
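If only part of your application speaks WebSocket, the upgrade handling and the very long timeouts can be scoped to that path so that ordinary HTTP traffic keeps the shorter defaults. A sketch, assuming a hypothetical `/ws` endpoint on the same backend (the path is an assumption, not from the original config):

```nginx
# Sketch: upgrade handling confined to a hypothetical /ws path.
location /ws {
    proxy_pass http://127.0.0.1:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;  # set by the map block
    proxy_set_header Host $host;
    proxy_read_timeout 86400s;   # long-lived connections survive here
}

location / {
    proxy_pass http://127.0.0.1:3000;
    proxy_set_header Host $host;  # plain HTTP keeps the 60s defaults
}
```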
Load Balancing with Upstream
When your application runs on multiple backend servers (or multiple instances), Nginx can distribute traffic across them using an upstream block.
```nginx
upstream backend_cluster {
    # Load balancing method (default: round-robin)
    least_conn;                      # Send to the server with the fewest active connections

    server 127.0.0.1:3001 weight=3;  # Receives 3x the traffic
    server 127.0.0.1:3002 weight=1;
    server 127.0.0.1:3003 weight=1;
    server 127.0.0.1:3004 backup;    # Used only if the others are down

    keepalive 32;                    # Maintain persistent connections to backends
}

server {
    listen 443 ssl http2;
    server_name api.example.com;

    location / {
        proxy_pass http://backend_cluster;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```
| Method | Behavior | Best For |
|---|---|---|
| `round-robin` (default) | Rotates through servers sequentially | Equal-capacity servers |
| `least_conn` | Sends to the server with the fewest active connections | Varying request durations |
| `ip_hash` | Same client IP always goes to the same server | Session persistence |
| `hash` | Custom hash key for routing decisions | Cache optimization |
| `random two least_conn` | Picks two random servers, sends to the less busy one | Large clusters |
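The `hash` method is worth a concrete example. Routing by request URI with consistent (ketama) hashing means the same URL tends to reach the same backend, which pays off when each backend keeps its own local cache. A sketch under that assumption; the upstream name is illustrative:

```nginx
# Sketch: consistent hashing on the request URI. "consistent" keeps
# most keys mapped to the same server when a backend is added/removed.
upstream cache_aware_cluster {
    hash $request_uri consistent;
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
}
```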
Proxy Caching
Nginx can cache responses from your backend, dramatically reducing load and improving response times for frequently requested content.
```nginx
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=api_cache:10m
                 max_size=1g inactive=60m use_temp_path=off;

server {
    listen 443 ssl http2;
    server_name api.example.com;

    location /api/ {
        proxy_pass http://127.0.0.1:3000;
        proxy_cache api_cache;
        proxy_cache_valid 200 10m;   # Cache 200 responses for 10 minutes
        proxy_cache_valid 404 1m;    # Cache 404 responses for 1 minute
        proxy_cache_use_stale error timeout updating;
        proxy_cache_lock on;         # Prevent the thundering-herd problem
        add_header X-Cache-Status $upstream_cache_status;
    }

    # Skip the cache for authenticated requests
    location /api/user/ {
        proxy_pass http://127.0.0.1:3000;
        proxy_cache off;
    }
}
```
Adding `$upstream_cache_status` as a response header lets you debug caching behavior. Possible values include `HIT` (served from cache), `MISS` (fetched from the backend), `EXPIRED` (cache entry expired), `BYPASS` (cache was skipped), `UPDATING` (stale content served while the entry is refreshed), and `STALE` (stale content served because the backend was unreachable). Check this header in your browser's DevTools to verify caching is working.
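Beyond carving out whole locations like `/api/user/`, caching can be skipped per request. A common pattern is to bypass the cache whenever a session cookie is present; this sketch is a variant of the `/api/` location above, and `session_id` is a placeholder for your application's actual cookie name:

```nginx
# Sketch: bypass and skip caching for requests carrying a session cookie.
# "session_id" is a placeholder cookie name, not from the original config.
location /api/ {
    proxy_pass http://127.0.0.1:3000;
    proxy_cache api_cache;
    proxy_cache_bypass $cookie_session_id;  # fetch fresh from the backend
    proxy_no_cache $cookie_session_id;      # and don't store the response
}
```

Both directives treat an empty string as "false", so anonymous requests are still cached normally.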
Rate Limiting
Protect your backend from abuse and DDoS attacks with Nginx rate limiting.
```nginx
# Zones are defined in the http {} context
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;
limit_req_zone $binary_remote_addr zone=login_limit:10m rate=1r/s;

server {
    # General API: 10 requests/second per IP, with burst allowance
    location /api/ {
        limit_req zone=api_limit burst=20 nodelay;
        proxy_pass http://127.0.0.1:3000;
    }

    # Login endpoint: 1 request/second per IP (strict)
    location /api/auth/login {
        limit_req zone=login_limit burst=5;
        proxy_pass http://127.0.0.1:3000;
    }
}
```
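By default, Nginx rejects rate-limited requests with a 503; for APIs, 429 Too Many Requests is the conventional status, and logging rejections helps tune the limits. A sketch of the same `/api/` location with those two adjustments:

```nginx
# Sketch: return 429 instead of the default 503 for rejected requests,
# and log each rejection at warn level for later tuning.
location /api/ {
    limit_req zone=api_limit burst=20 nodelay;
    limit_req_status 429;
    limit_req_log_level warn;
    proxy_pass http://127.0.0.1:3000;
}
```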
Proxying to Docker Containers
Docker containers typically expose services on high-numbered ports. Nginx makes it seamless to route traffic to containers using domain names.
```nginx
server {
    listen 443 ssl http2;
    server_name myapp.example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

# Multiple containers on different ports
server {
    listen 443 ssl http2;
    server_name grafana.example.com;
    location / { proxy_pass http://127.0.0.1:3000; ... }
}

server {
    listen 443 ssl http2;
    server_name gitlab.example.com;
    location / { proxy_pass http://127.0.0.1:8929; ... }
}
```
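When Nginx itself runs as a container on the same Docker network, backends can be reached by container name instead of a published localhost port, using Docker's embedded DNS at 127.0.0.11. A sketch under that assumption; `myapp` is a placeholder container name:

```nginx
# Sketch: resolve a container by name via Docker's embedded DNS.
# Putting the address in a variable forces re-resolution at request
# time, so Nginx still starts even if the container is currently down.
resolver 127.0.0.11 valid=10s;

location / {
    set $upstream http://myapp:8080;  # "myapp" is a placeholder name
    proxy_pass $upstream;
    proxy_set_header Host $host;
}
```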
Common Mistakes and Fixes
| Mistake | Symptom | Fix |
|---|---|---|
| Missing trailing slash in proxy_pass | Path duplication (e.g., /api/api/users) | Match the trailing slash in location and proxy_pass consistently |
| No proxy headers | Backend logs show 127.0.0.1 for all requests | Add X-Real-IP and X-Forwarded-For headers |
| Default proxy timeout (60s) | Long requests get 504 Gateway Timeout | Increase proxy_read_timeout and proxy_send_timeout |
| HTTP/1.0 to backend | WebSocket connections fail | Set proxy_http_version 1.1 |
| Large request body rejected | 413 Request Entity Too Large | Increase client_max_body_size |
| Buffering too small | Slow response for large payloads | Tune proxy_buffer_size and proxy_buffers |
`proxy_pass http://127.0.0.1:3000;` (no trailing slash) forwards the full original URI. `proxy_pass http://127.0.0.1:3000/;` (with trailing slash) replaces the matched location prefix. For `location /api/`, a request to `/api/users` becomes `/users` with the trailing slash, or stays `/api/users` without. Choose based on how your backend expects paths.
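The difference is easiest to see side by side. In this sketch the two locations are shown together for comparison only; a real server block would contain one or the other:

```nginx
# Incoming request: GET /api/users

location /api/ {
    # No trailing slash: the backend receives /api/users
    proxy_pass http://127.0.0.1:3000;
}

location /api/ {
    # Trailing slash: the /api/ prefix is replaced, backend receives /users
    proxy_pass http://127.0.0.1:3000/;
}
```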
Testing and Verification
```console
$ sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

# Reload without downtime
$ sudo nginx -s reload

# Test the proxy with curl
$ curl -I https://app.example.com
HTTP/2 200
server: nginx
x-powered-by: Express
x-cache-status: MISS

# Verify WebSocket upgrade
$ curl -i -N \
    -H "Connection: Upgrade" \
    -H "Upgrade: websocket" \
    -H "Sec-WebSocket-Version: 13" \
    -H "Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==" \
    https://app.example.com/ws
HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
```
Automatic Reverse Proxy with Panelica
Writing Nginx reverse proxy configurations by hand is error-prone, especially when managing multiple backend applications with different requirements. Panelica automates this entire process through its domain management interface.
When you set a domain's web server mode to "Reverse Proxy" in Panelica, it automatically generates the complete Nginx configuration — including proxy headers, WebSocket support, SSL termination with automatic Let's Encrypt certificates, and security headers. Point any domain to a backend port and Panelica handles the rest. This is particularly useful for proxying to Docker containers or Node.js applications running on custom ports.
Nginx as a reverse proxy is one of the most versatile tools in your infrastructure. It provides SSL termination, load balancing, WebSocket support, caching, and rate limiting, all with minimal resource overhead. Master the essential headers (`Host`, `X-Real-IP`, `X-Forwarded-For`, `X-Forwarded-Proto`), understand the trailing slash behavior, and always test with `nginx -t` before reloading. Whether you configure it manually or through a panel, a properly configured reverse proxy is the foundation of a production-ready web architecture.