Reverse Proxy and Load Balancing with Nginx
A reverse proxy sits between clients and backend servers, handling TLS termination, load distribution, caching, and connection management. Nginx excels at this role thanks to its event-driven architecture.
Upstream Block
Define a group of backend servers:
upstream app_backend {
    server 10.0.1.10:8000;
    server 10.0.1.11:8000;
    server 10.0.1.12:8000;
}
Then proxy to it:
server {
    listen 443 ssl http2;
    server_name app.example.com;

    ssl_certificate /etc/letsencrypt/live/app.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;

    location / {
        proxy_pass http://app_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Load Balancing Algorithms
Round Robin (default)
Requests are handed to the servers in turn, so each receives an equal share of traffic over time:
upstream app_backend {
    server 10.0.1.10:8000;
    server 10.0.1.11:8000;
    server 10.0.1.12:8000;
}
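The rotation can be sketched in a few lines of Python. This is an illustrative model, not Nginx source; the `pick` helper and backend list are hypothetical.

```python
from itertools import cycle

# Illustrative model of round-robin selection (not Nginx source).
backends = ["10.0.1.10:8000", "10.0.1.11:8000", "10.0.1.12:8000"]
_rotation = cycle(backends)

def pick():
    """Return the next backend in rotation."""
    return next(_rotation)

# Six requests cycle through the three-server pool exactly twice.
picks = [pick() for _ in range(6)]
```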
Least Connections
Send to the server with the fewest active connections:
upstream app_backend {
    least_conn;
    server 10.0.1.10:8000;
    server 10.0.1.11:8000;
    server 10.0.1.12:8000;
}
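The selection rule itself is simple, as this Python sketch shows (hypothetical `pick_least_conn` helper and connection counts, purely illustrative):

```python
# Illustrative model of least-connections selection (not Nginx source).
active = {
    "10.0.1.10:8000": 4,
    "10.0.1.11:8000": 1,   # fewest active connections
    "10.0.1.12:8000": 7,
}

def pick_least_conn(conns):
    """Choose the backend with the fewest active connections."""
    return min(conns, key=conns.get)

chosen = pick_least_conn(active)
```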
IP Hash
Sticky sessions -- the same client IP always goes to the same backend:
upstream app_backend {
    ip_hash;
    server 10.0.1.10:8000;
    server 10.0.1.11:8000;
    server 10.0.1.12:8000;
}
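For IPv4 clients, ip_hash keys on the first three octets of the address, so a whole /24 lands on the same backend. The Python sketch below illustrates that property; the hash function used here is arbitrary, not the one Nginx uses.

```python
import hashlib

backends = ["10.0.1.10:8000", "10.0.1.11:8000", "10.0.1.12:8000"]

def pick_by_ip(client_ip, pool):
    """Map a client IPv4 address to a fixed backend.

    Keys on the first three octets, mirroring ip_hash's IPv4 behavior;
    the md5-based dispatch is illustrative only.
    """
    key = ".".join(client_ip.split(".")[:3])
    digest = hashlib.md5(key.encode()).digest()
    return pool[digest[0] % len(pool)]

# Two clients in the same /24 always reach the same backend.
a = pick_by_ip("203.0.113.7", backends)
b = pick_by_ip("203.0.113.99", backends)
```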
Weighted Distribution
Give more traffic to more powerful servers:
upstream app_backend {
    server 10.0.1.10:8000 weight=5;
    server 10.0.1.11:8000 weight=3;
    server 10.0.1.12:8000 weight=1;
}
Health Checks
Nginx passive health checks mark a server as unavailable after repeated failures:
upstream app_backend {
    server 10.0.1.10:8000 max_fails=3 fail_timeout=30s;
    server 10.0.1.11:8000 max_fails=3 fail_timeout=30s;
    server 10.0.1.12:8000 backup; # used only when others are down
}
max_fails=3 -- mark the server as unavailable after 3 failed attempts within the fail_timeout window. fail_timeout=30s -- both the window for counting failures and how long the server is then considered unavailable before Nginx tries it again.
Nginx Plus (commercial) offers active health checks that probe backends independently of client traffic.
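The passive max_fails / fail_timeout behavior can be modeled in Python. This is an illustrative sketch of the semantics, with a hypothetical `Backend` class; it is not how Nginx implements it internally.

```python
import time

# Illustrative model of passive health marking (not Nginx source).
class Backend:
    def __init__(self, addr, max_fails=3, fail_timeout=30.0):
        self.addr = addr
        self.max_fails = max_fails
        self.fail_timeout = fail_timeout
        self.fails = 0
        self.down_until = 0.0

    def record_failure(self, now=None):
        """Count a failed attempt; mark down once max_fails is reached."""
        now = time.monotonic() if now is None else now
        self.fails += 1
        if self.fails >= self.max_fails:
            self.down_until = now + self.fail_timeout
            self.fails = 0  # window resets; the server will be retried

    def available(self, now=None):
        now = time.monotonic() if now is None else now
        return now >= self.down_until

b = Backend("10.0.1.10:8000")
for _ in range(3):
    b.record_failure(now=100.0)
# Down for fail_timeout seconds from the third failure, then retried.
```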
SSL Termination
Terminate TLS at the proxy and forward plain HTTP to backends:
server {
    listen 443 ssl http2;
    server_name app.example.com;

    ssl_certificate /etc/letsencrypt/live/app.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;

    location / {
        proxy_pass http://app_backend; # plain HTTP to backends
        proxy_set_header X-Forwarded-Proto https;
    }
}
Backend applications should check the X-Forwarded-Proto header rather than the connection protocol to determine whether the original request was secure: from the backend's point of view, every connection arrives as plain HTTP.
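On the backend, that check is a one-liner regardless of framework. The sketch below uses a plain header dict and a hypothetical `request_was_secure` helper; it assumes the proxy is the only party that can set this header (an untrusted client-supplied value must be stripped at the edge).

```python
# Illustrative backend-side scheme check (framework-agnostic sketch).
def request_was_secure(headers):
    """headers: dict of HTTP header name -> value.

    Trusts X-Forwarded-Proto as set by the terminating proxy;
    defaults to plain http when the header is absent.
    """
    proto = headers.get("X-Forwarded-Proto", "http")
    return proto.lower() == "https"
```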
Proxy Headers
Always forward essential headers so backends know about the original request:
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Port $server_port;
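$proxy_add_x_forwarded_for appends $remote_addr to any X-Forwarded-For value that arrived with the request, so the header accumulates one entry per proxy hop. A backend recovers the original client like this (hypothetical `client_ip` helper, illustrative only; it assumes every proxy in the chain is trusted, since clients can forge the header):

```python
# Illustrative recovery of the original client address (sketch).
def client_ip(headers, peer_addr):
    """Leftmost X-Forwarded-For entry is the original client,
    assuming all upstream proxies are trusted; otherwise fall
    back to the TCP peer address."""
    xff = headers.get("X-Forwarded-For", "")
    if xff:
        return xff.split(",")[0].strip()
    return peer_addr
```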
WebSocket Proxying
WebSocket connections require the Upgrade and Connection headers:
location /ws/ {
    proxy_pass http://app_backend;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;

    # Increase timeouts for long-lived connections
    proxy_read_timeout 3600s;
    proxy_send_timeout 3600s;
}
Without these directives, Nginx speaks HTTP/1.0 to the backend and does not forward the hop-by-hop Upgrade header, so the WebSocket handshake never completes.
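For context, this is roughly what the client's upgrade request looks like on the wire; the Upgrade and Connection lines are exactly what the proxy_set_header directives above preserve (the key shown is the well-known sample value from RFC 6455, not a real nonce):

```python
# The WebSocket upgrade request as it appears on the wire (illustrative).
handshake = (
    "GET /ws/ HTTP/1.1\r\n"
    "Host: app.example.com\r\n"
    "Upgrade: websocket\r\n"          # must survive the proxy hop
    "Connection: Upgrade\r\n"         # must survive the proxy hop
    "Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==\r\n"
    "Sec-WebSocket-Version: 13\r\n"
    "\r\n"
)
```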
Buffering and Timeouts
Tune proxy behavior for your workload:
proxy_buffering on;
proxy_buffer_size 4k;
proxy_buffers 8 16k;
proxy_busy_buffers_size 32k;
proxy_connect_timeout 5s;
proxy_read_timeout 60s;
proxy_send_timeout 60s;
Disable buffering for streaming or server-sent events:
location /events {
    proxy_pass http://app_backend;
    proxy_buffering off;
    proxy_cache off;
}
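What the backend emits on such an endpoint is a stream of text/event-stream frames; with proxy_buffering off, each frame reaches the client as soon as it is written instead of waiting in a proxy buffer. A minimal sketch of the frame format, using a hypothetical `sse_event` helper:

```python
import json

# Illustrative server-sent events frame formatter (sketch).
def sse_event(data, event=None):
    """Format one SSE frame; frames are separated by a blank line."""
    lines = []
    if event:
        lines.append(f"event: {event}")
    lines.append(f"data: {json.dumps(data)}")
    return "\n".join(lines) + "\n\n"

frame = sse_event({"n": 1}, event="tick")
```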