
What is Nginx's event-driven model? How does it achieve high concurrency?

February 21, 16:45


Nginx uses an event-driven, non-blocking I/O model, which is the core reason it can handle high-concurrency connections. Understanding Nginx's event-driven model is crucial for optimizing performance and solving high-concurrency problems.

Event-Driven Model Principles:

Nginx uses an event-driven architecture, handling I/O through event notification mechanisms rather than the traditional process-per-connection or thread-per-connection model.

Core Concepts:

  1. Event Loop: Main loop that monitors and processes various events
  2. Event Handlers: Functions that handle specific types of events
  3. Non-blocking I/O: I/O operations don't block the process
  4. Asynchronous Processing: Handle I/O completion events through callback functions
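These concepts can be sketched in a few lines of Python: the standard-library `selectors` module is epoll-backed on Linux, the same mechanism Nginx prefers. The sketch below is a hypothetical minimal echo-server loop, not Nginx code; names like `run_once` are illustrative only.

```python
import selectors
import socket

# Minimal event-loop sketch: one selector watches many file descriptors and
# dispatches to a handler callback when an event fires.
sel = selectors.DefaultSelector()  # epoll-backed on Linux

def accept(server):
    conn, _ = server.accept()
    conn.setblocking(False)        # non-blocking I/O: reads never stall the loop
    sel.register(conn, selectors.EVENT_READ, handle)

def handle(conn):
    data = conn.recv(4096)         # the fd is ready, so this returns immediately
    if data:
        conn.sendall(data)         # echo the bytes back
    else:
        sel.unregister(conn)       # peer closed the connection
        conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))      # any free port
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, accept)

def run_once(timeout=1.0):
    # One event-loop iteration: wait for ready events, call each handler.
    for key, _ in sel.select(timeout):
        key.data(key.fileobj)
```

An Nginx worker's loop has the same shape: register descriptors, wait for readiness, dispatch handlers, repeat.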

Workflow:

  1. The master process starts and listens on the configured ports
  2. It forks multiple worker processes
  3. Each worker process runs its own event loop independently
  4. The event loop monitors connection and read/write events
  5. When an event triggers, the corresponding handler is called
  6. After processing, the loop returns to monitoring
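The master/worker startup in steps 1 and 2 can be sketched with `os.fork` (a hypothetical, Unix-only illustration; in real Nginx, the child runs the worker's event loop where the sketch simply exits):

```python
import os

def spawn_workers(n):
    # Master forks n workers; each child would run its own event loop.
    pids = []
    for _ in range(n):
        pid = os.fork()
        if pid == 0:
            # Child (worker): the event loop would run here; exit cleanly instead.
            os._exit(0)
        pids.append(pid)           # parent (master) tracks worker pids
    return pids

def reap(pids):
    # Master waits on its workers; real Nginx also respawns crashed workers.
    return [os.waitpid(pid, 0)[1] for pid in pids]
```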

Nginx Process Model:

```nginx
# nginx.conf
worker_processes auto;        # Automatically set worker count, usually equals CPU cores
worker_rlimit_nofile 65535;   # File descriptor limit per worker

events {
    worker_connections 10240; # Maximum connections per worker
    use epoll;                # Use the epoll event model (Linux)
    multi_accept on;          # Allow accepting multiple connections at once
}
```

Theoretical Concurrent Connections:

```
Max concurrent connections = worker_processes × worker_connections
Example: 4 workers × 10240 connections each = 40960 concurrent connections
```

(When Nginx acts as a reverse proxy, each request consumes two connections, one to the client and one to the upstream, roughly halving the effective maximum.)

Event Model Types:

Linux - epoll:

```nginx
events {
    use epoll;
}
```
  • Efficiently handles large numbers of connections
  • O(1) time complexity
  • Supports edge-triggered and level-triggered modes
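The two trigger modes behave observably differently. A Linux-only sketch using Python's `select.epoll` on a pipe: with data left unread, a level-triggered fd reports ready on every poll, while an edge-triggered fd reports only when new data arrives.

```python
import os
import select

# Level-triggered (default): fd reports ready on every poll while data remains.
r, w = os.pipe()
os.write(w, b"x")                        # one unread byte is waiting
lt = select.epoll()
lt.register(r, select.EPOLLIN)
lt_first = lt.poll(timeout=0)
lt_second = lt.poll(timeout=0)           # still ready: the byte was not consumed
lt.close()

# Edge-triggered (EPOLLET): fd reports ready only on a state change.
r2, w2 = os.pipe()
os.write(w2, b"x")
et = select.epoll()
et.register(r2, select.EPOLLIN | select.EPOLLET)
et_first = et.poll(timeout=0)
et_second = et.poll(timeout=0)           # no new data since the last edge
et.close()

print(len(lt_first), len(lt_second))     # level-triggered fires both times
print(len(et_first), len(et_second))     # edge-triggered fires only once
```

Edge-triggered mode is cheaper under load but obliges the handler to drain the fd completely, which is how Nginx uses it.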

BSD/macOS - kqueue:

```nginx
events {
    use kqueue;
}
```

Windows - select/poll:

```nginx
events {
    use select;
}
```

High Concurrency Optimization Configuration:

```nginx
# Global configuration
user nginx;
worker_processes auto;
worker_rlimit_nofile 100000;

events {
    worker_connections 65535;
    use epoll;
    multi_accept on;
    accept_mutex off;  # Disable the accept mutex for better concurrency
}

http {
    # Connection optimization
    keepalive_timeout 65;
    keepalive_requests 100;

    # Buffer optimization
    client_body_buffer_size 128k;
    client_max_body_size 10m;
    client_header_buffer_size 1k;
    large_client_header_buffers 4 4k;

    # Output buffering
    output_buffers 1 32k;
    postpone_output 1460;

    # File descriptor caching
    open_file_cache max=100000 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;
}
```

Performance Tuning Parameters:

  1. worker_processes: Set to auto or number of CPU cores
  2. worker_connections: Adjust based on memory and business requirements
  3. worker_rlimit_nofile: Set sufficiently large file descriptor limit
  4. multi_accept: Allow accepting multiple new connections simultaneously
  5. accept_mutex: Disable for high concurrency to reduce lock contention
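`worker_rlimit_nofile` raises the per-worker descriptor ceiling, but only within what the kernel allows. The limits a process actually has can be inspected with Python's `resource` module (a Unix-only sketch):

```python
import resource

# Soft limit: the current ceiling on open fds.
# Hard limit: the maximum the soft limit may be raised to without privileges.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"open-file soft limit: {soft}, hard limit: {hard}")

# An unprivileged process may move its soft limit anywhere up to the hard limit:
resource.setrlimit(resource.RLIMIT_NOFILE, (min(4096, hard), hard))
```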

System-Level Optimization:

```bash
# /etc/sysctl.conf

# Increase the system-wide file descriptor limit
fs.file-max = 1000000

# Optimize TCP parameters
net.ipv4.tcp_max_tw_buckets = 6000
net.ipv4.tcp_sack = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 65536 4194304
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.netdev_max_backlog = 262144
net.ipv4.tcp_max_syn_backlog = 262144
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_keepalive_time = 1200
net.ipv4.tcp_tw_reuse = 1
```
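After editing, `sysctl -p` loads the file. Each key can also be read back through `/proc/sys`, where dots become path separators; a small Linux-only sketch:

```python
def read_sysctl(name):
    # fs.file-max -> /proc/sys/fs/file-max
    path = "/proc/sys/" + name.replace(".", "/")
    with open(path) as f:
        return f.read().strip()

print("fs.file-max =", read_sysctl("fs.file-max"))
```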

Comparison with Apache:

| Feature | Nginx | Apache |
| --- | --- | --- |
| Model | Event-driven | Process/thread per connection |
| Memory usage | Low | High |
| Concurrency | High | Medium |
| CPU usage | Low | High |
| Dynamic processing | Weak | Strong |

Monitoring and Diagnostics:

```nginx
# Enable status monitoring
location /nginx_status {
    stub_status on;
    access_log off;
    allow 127.0.0.1;
    deny all;
}
```

Status information includes:

  • Active connections: Current active connection count
  • accepts: Total accepted connections
  • handled: Total handled connections
  • requests: Total handled requests
  • Reading: Connections reading request headers
  • Writing: Connections sending responses
  • Waiting: Idle connections
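`stub_status` emits a fixed plain-text format, so the fields above are easy to scrape. A hypothetical parser (the sample counters below are illustrative, not real measurements):

```python
import re

def parse_stub_status(text):
    # Parse the fixed plain-text format emitted by ngx_http_stub_status_module.
    lines = text.strip().splitlines()
    active = int(lines[0].split(":")[1])                    # "Active connections: N"
    accepts, handled, requests = (int(n) for n in lines[2].split())
    reading, writing, waiting = (int(n) for n in re.findall(r"\d+", lines[3]))
    return {"active": active, "accepts": accepts, "handled": handled,
            "requests": requests, "reading": reading,
            "writing": writing, "waiting": waiting}

sample = """Active connections: 291
server accepts handled requests
 16630948 16630948 31070465
Reading: 6 Writing: 179 Waiting: 106
"""
stats = parse_stub_status(sample)
print(stats["active"], stats["requests"])
```

When `accepts` and `handled` diverge, the server is dropping connections, typically because a resource limit was hit.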

Practical Use Cases:

  1. High-concurrency web servers: Handle tens of thousands of concurrent connections
  2. Reverse proxy: Act as entry gateway
  3. Load balancing: Distribute requests to backends
  4. Static resource serving: Efficiently serve static files

Performance Testing:

Use wrk or ab for stress testing:

```bash
# Test with wrk
wrk -t12 -c4000 -d30s http://example.com/

# Test with ab
ab -n 10000 -c 1000 http://example.com/
```

Common Problem Solutions:

  1. Insufficient connections: Increase worker_connections and system file descriptor limit
  2. High CPU usage: Check worker_processes setting, avoid too many
  3. Insufficient memory: Optimize buffer sizes, reduce worker_connections
  4. Performance bottleneck: Use epoll, enable multi_accept, disable accept_mutex
Tags: Nginx