What are common Nginx deployment architectures? How to choose the right architecture?
Nginx supports a range of deployment architectures depending on business requirements and scale, from single-machine setups to distributed clusters.
Single-Machine Deployment Architecture:
```
Client → Nginx → Application Server → Database
```
Use Cases:
- Small websites or applications
- Development and testing environments
- Low-traffic services
Configuration Example:
```nginx
server {
    listen 80;
    server_name example.com;
    root /var/www/html;
    index index.php index.html;

    location ~ \.php$ {
        fastcgi_pass unix:/var/run/php/php8.0-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }

    location / {
        try_files $uri $uri/ =404;
    }
}
```
Reverse Proxy Architecture:
```
Client → Nginx (Reverse Proxy) → Backend Server Cluster
```
Use Cases:
- Multiple application servers need unified entry
- Need load balancing
- Need to hide backend servers
Configuration Example:
```nginx
upstream backend {
    server 192.168.1.100:8080 weight=3;
    server 192.168.1.101:8080 weight=2;
    server 192.168.1.102:8080 weight=1;
    keepalive 32;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://backend;
        # Required for the upstream keepalive pool to take effect
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```
Load Balancing Architecture:
```
Client → Nginx (Load Balancer) → Backend Server Pool
```
Use Cases:
- High concurrent access
- Need horizontal scaling
- Need high availability
Load Balancing Strategies:
```nginx
# Note: the blocks below are alternatives; define only one
# upstream named "backend" per configuration.

# Round-robin (default)
upstream backend {
    server 192.168.1.100:8080;
    server 192.168.1.101:8080;
}

# Least connections
upstream backend {
    least_conn;
    server 192.168.1.100:8080;
    server 192.168.1.101:8080;
}

# IP hash
upstream backend {
    ip_hash;
    server 192.168.1.100:8080;
    server 192.168.1.101:8080;
}

# Weighted round-robin
upstream backend {
    server 192.168.1.100:8080 weight=3;
    server 192.168.1.101:8080 weight=2;
    server 192.168.1.102:8080 weight=1;
}
```
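Whichever strategy is chosen, passive health checks and backup servers improve availability. A minimal sketch using the standard `max_fails`/`fail_timeout` parameters and the `backup` flag (the thresholds shown are illustrative):

```nginx
upstream backend {
    least_conn;
    # Take a server out of rotation after 3 failures within 30s
    server 192.168.1.100:8080 max_fails=3 fail_timeout=30s;
    server 192.168.1.101:8080 max_fails=3 fail_timeout=30s;
    # Receives traffic only when all primary servers are unavailable
    server 192.168.1.102:8080 backup;
}
```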
Multi-Layer Proxy Architecture:
```
Client → Edge Nginx → Middle Nginx → Application Server
```
Use Cases:
- Large-scale distributed systems
- Need multi-layer caching
- Need security isolation
Configuration Example:
```nginx
# Edge Nginx
upstream middle_layer {
    server 192.168.1.200:80;
    server 192.168.1.201:80;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://middle_layer;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

# Middle-layer Nginx
upstream backend {
    server 192.168.1.100:8080;
    server 192.168.1.101:8080;
}

server {
    listen 80;
    server_name middle.example.com;

    location / {
        proxy_pass http://backend;
        proxy_set_header Host $host;
    }
}
```
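Behind multiple proxy layers, inner tiers see the edge node's address rather than the client's. The standard `ngx_http_realip_module` can restore the original client IP on the middle layer; a sketch, assuming the edge nodes live in the `192.168.1.0/24` range (adjust to the actual edge addresses):

```nginx
# On the middle-layer Nginx: trust the edge nodes and
# recover the real client address from the X-Real-IP header they set
set_real_ip_from 192.168.1.0/24;
real_ip_header X-Real-IP;
```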
CDN Integration Architecture:
```
Client → CDN → Origin Nginx → Backend Servers
```
Use Cases:
- Global user access
- Need to accelerate static resources
- Need to reduce origin server pressure
Configuration Example:
```nginx
server {
    listen 80;
    server_name example.com;

    # Redirect static resources to the CDN
    location ~* \.(css|js|jpg|jpeg|png|gif|ico|svg|woff|woff2)$ {
        return 301 https://cdn.example.com$request_uri;
    }

    # Dynamic content
    location / {
        proxy_pass http://backend;
        proxy_set_header Host $host;
    }
}
```
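To ensure the origin only serves traffic coming through the CDN, one common sketch checks a shared-secret header that the CDN is configured to add on origin fetches (the header name `X-CDN-Secret` and its value are illustrative assumptions, not part of any CDN's defaults):

```nginx
# Inside the server block: reject requests that did not come via the CDN
if ($http_x_cdn_secret != "example-secret") {
    return 403;
}
```

Restricting by the CDN provider's published IP ranges with `allow`/`deny` is an alternative when such ranges are available.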
High Availability Architecture (Keepalived + Nginx):
```
Client → VIP (Virtual IP)
              ↓
   Nginx Master Node ← Keepalived
              ↓
   Nginx Backup Node ← Keepalived
              ↓
      Backend Servers
```
Use Cases:
- Need high availability
- Cannot accept single point of failure
- Critical business systems
Configuration Example:
```nginx
# Master node Nginx configuration
upstream backend {
    server 192.168.1.100:8080;
    server 192.168.1.101:8080;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://backend;
        proxy_set_header Host $host;
    }
}
```

```
# Keepalived configuration (typically /etc/keepalived/keepalived.conf)
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1234
    }
    virtual_ipaddress {
        192.168.1.50
    }
}
```
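The backup node runs the same Nginx configuration; only its Keepalived settings differ. A sketch of the corresponding backup-node config (`priority 90` is an illustrative value; it only needs to be lower than the master's 100):

```
# Backup node Keepalived configuration
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1234
    }
    virtual_ipaddress {
        192.168.1.50
    }
}
```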
Microservices Architecture:
```
Client → Nginx (API Gateway) → Microservices Cluster
```
Use Cases:
- Microservices architecture
- Need unified API entry
- Need service discovery
Configuration Example:
```nginx
# User service
upstream user_service {
    server 192.168.1.100:8080;
    server 192.168.1.101:8080;
}

# Order service
upstream order_service {
    server 192.168.1.200:8080;
    server 192.168.1.201:8080;
}

# Payment service
upstream payment_service {
    server 192.168.1.110:8080;
    server 192.168.1.111:8080;
}

server {
    listen 80;
    server_name api.example.com;

    # Route to user service
    location /api/users/ {
        proxy_pass http://user_service;
        proxy_set_header Host $host;
    }

    # Route to order service
    location /api/orders/ {
        proxy_pass http://order_service;
        proxy_set_header Host $host;
    }

    # Route to payment service
    location /api/payments/ {
        proxy_pass http://payment_service;
        proxy_set_header Host $host;
    }
}
```
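Static upstream lists require a reload whenever service instances change. For the service-discovery use case, a common sketch resolves service hostnames at request time via a `resolver` directive; using a variable in `proxy_pass` forces runtime DNS resolution. The resolver address `192.168.1.10` and the hostname `user-service.service.consul` are assumptions for illustration:

```nginx
server {
    listen 80;
    server_name api.example.com;

    # Re-resolve service hostnames at runtime instead of once at startup
    resolver 192.168.1.10 valid=10s;  # assumed internal DNS / Consul resolver

    location /api/users/ {
        set $user_service user-service.service.consul;  # assumed DNS name
        proxy_pass http://$user_service:8080;
        proxy_set_header Host $host;
    }
}
```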
Caching Architecture:
```
Client → Nginx (Cache Layer) → Backend Servers
```
Use Cases:
- Read-heavy, write-light applications
- Need to reduce backend pressure
- Need to improve response speed
Configuration Example:
```nginx
# Define the cache path
proxy_cache_path /var/cache/nginx/proxy levels=1:2 keys_zone=proxy_cache:10m
                 max_size=1g inactive=60m;

upstream backend {
    server 192.168.1.100:8080;
    server 192.168.1.101:8080;
}

server {
    listen 80;
    server_name example.com;

    location / {
        # Enable caching
        proxy_cache proxy_cache;
        proxy_cache_valid 200 10m;
        proxy_cache_valid 404 1m;
        proxy_cache_key "$scheme$request_method$host$request_uri";

        # Cache bypass conditions
        proxy_cache_bypass $http_cache_control;
        proxy_no_cache $http_cache_control;

        proxy_pass http://backend;
        proxy_set_header Host $host;

        # Expose cache status (HIT/MISS/BYPASS) to clients
        add_header X-Cache-Status $upstream_cache_status;
    }
}
```
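Two directives are commonly added to harden this setup: `proxy_cache_use_stale` serves stale cache entries when the backend fails, and `proxy_cache_lock` collapses concurrent misses for the same key into a single upstream request, preventing a cache stampede. A sketch of the lines to add inside the `location` block above:

```nginx
# Serve a stale copy instead of an error when the backend is down or slow
proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
# Let only one request populate the cache for a given key at a time
proxy_cache_lock on;
```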
Hybrid Architecture:
```
Client → Nginx (Static Resources) → CDN
            ↓
         Nginx (Dynamic Content) → Application Server
```
Use Cases:
- Complex business systems
- Need to separate static and dynamic content
- Need multiple optimization strategies
Configuration Example:
```nginx
# Static resource server
server {
    listen 80;
    server_name static.example.com;
    root /var/www/static;

    location ~* \.(css|js|jpg|jpeg|png|gif|ico|svg|woff|woff2)$ {
        expires 1y;
        add_header Cache-Control "public, immutable";
        access_log off;
    }
}

# Dynamic content server
upstream backend {
    server 192.168.1.100:8080;
    server 192.168.1.101:8080;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://backend;
        proxy_set_header Host $host;
    }
}
```
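When separate hostnames are not desired, the same static/dynamic split can be expressed in a single `server` block by matching static extensions against local files and proxying everything else. A minimal sketch, assuming the same `backend` upstream and document root as above:

```nginx
server {
    listen 80;
    server_name example.com;
    root /var/www/static;

    # Serve static assets locally with long-lived caching
    location ~* \.(css|js|jpg|jpeg|png|gif|ico|svg|woff|woff2)$ {
        expires 1y;
        add_header Cache-Control "public, immutable";
    }

    # Everything else is dynamic content
    location / {
        proxy_pass http://backend;
        proxy_set_header Host $host;
    }
}
```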
Architecture Selection Guide:
| Business Requirements | Recommended Architecture | Description |
|---|---|---|
| Small website | Single-machine deployment | Simple and easy to maintain |
| Medium application | Reverse proxy | Unified entry, load balancing |
| High concurrency | Load balancing architecture | Horizontal scaling, high availability |
| Global business | CDN integration architecture | Accelerate access, reduce origin pressure |
| Critical business | High availability architecture | Avoid single point of failure |
| Microservices | Microservices architecture | Unified API gateway |
| Read-heavy, write-light | Caching architecture | Improve performance, reduce backend pressure |
Deployment Architecture Best Practices:
- Progressive scaling: Start with simple architecture, expand gradually based on needs
- Monitoring and alerting: Real-time monitoring of architecture status, timely issue detection
- Disaster recovery backup: Configure backup and disaster recovery solutions
- Security protection: Add security measures at each layer of the architecture
- Performance optimization: Choose appropriate optimization strategies based on business characteristics
- Documentation: Detailed recording of architecture design and configuration
- Regular drills: Regular fault drills to verify architecture reliability
- Cost control: Control costs while meeting requirements