Stop Apache Choking: High-Performance Nginx Reverse Proxy Guide
It happens every time. You optimize your MySQL queries, you strip down your PHP code, and you finally launch your project. Then a link to your site lands somewhere big, maybe Digg, maybe a front-page feature on VG.no here in Norway, and your server load spikes to 50. Apache starts spawning child processes until your RAM evaporates, swap thrashing begins, and your site goes dark. The problem isn't your code; it's your architecture. In 2010, relying solely on Apache's process-based model for high-concurrency static content is architectural suicide. The solution isn't throwing money at more RAM; it's changing how you handle connections. This guide covers the precise configuration to place Nginx 0.8 in front of Apache on CentOS 5, a setup that lets you tackle the C10k problem without breaking the bank on hardware.
The Architecture: Why Nginx Wins on the Edge
To understand why we are doing this, you have to look at the process model. Apache's default prefork MPM dedicates an entire child process to every connection (the worker MPM uses threads instead, but mod_php is generally not thread-safe, so prefork is what most PHP servers actually run). If each Apache child weighs in around 15MB and you have 500 concurrent connections (many of them just slow clients trickling down images), you need roughly 7.5GB of RAM just to keep the lights on. That is unsustainable for most VPS setups. Nginx works differently. It uses an event-driven, asynchronous architecture (specifically the epoll event notification mechanism on Linux 2.6 kernels) and handles thousands of connections in a handful of worker processes with a tiny memory footprint. By putting Nginx on port 80 to serve static files (images, CSS, JS) and to buffer slow client connections, we only pass the heavy dynamic requests to Apache on port 8080. This is the "Reverse Proxy" pattern, and it is the single most effective optimization you can perform on a Linux web server today.
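Want real numbers for your own box before trusting the back-of-the-envelope math? A rough per-child measurement looks something like this (a quick sketch; the process name is httpd on CentOS, and RSS counts shared pages, so it slightly overstates the true per-child cost):
# Average resident memory per Apache child, in MB
ps -o rss= -C httpd | awk '{ sum += $1; n++ } END { if (n) printf "%d children, avg %.1f MB each\n", n, sum/n/1024 }'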
Pro Tip: Don't compile from source unless you absolutely need specific modules. For CentOS 5, use the EPEL repository or the CentALT repo to get Nginx 0.8.x. The default repositories are often hopelessly outdated. Speed matters, but stability on a production node matters more.
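Assuming EPEL (or CentALT) is already enabled on the box, the install itself is trivial; exactly which 0.8.x build you end up with depends on the repository:
# Install Nginx from the third-party repo, confirm the version, enable on boot
yum install nginx
nginx -v
chkconfig nginx on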
Step 1: Preparing the Backend (Apache)
First, we need to move Apache off port 80. Nginx needs to own the public face of your server. Edit your Apache configuration, usually found in /etc/httpd/conf/httpd.conf on RedHat/CentOS systems. You need to change the Listen directive and update your VirtualHost blocks to match (see the sketch after the snippet below). If you skip this, Nginx will fail to start with an "Address already in use" error. Remember, we are not replacing Apache; we are protecting it. Apache is still excellent at handling PHP via mod_php, which is currently more stable than the early iterations of PHP-FPM for many legacy applications.
# /etc/httpd/conf/httpd.conf
# Change listening port
Listen 127.0.0.1:8080
# Ensure KeepAlive is OFF for the backend to save RAM
# Nginx handles the KeepAlive to the client
KeepAlive Off
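Any existing VirtualHost blocks need the same treatment. A minimal sketch, assuming your site currently sits in a *:80 VirtualHost (the hostname and paths here simply match the examples used later in this guide):
# Change <VirtualHost *:80> to the new backend address
<VirtualHost 127.0.0.1:8080>
    ServerName mysite.com
    DocumentRoot /var/www/html/mysite
</VirtualHost>
If you rely on name-based hosting, update the NameVirtualHost directive to 127.0.0.1:8080 as well, or Apache will complain about the mismatch at restart.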
Step 2: The Nginx Configuration Strategy
Now, let's configure the beast. The goal is to maximize throughput. We aren't just proxying; we are tuning buffer sizes so Nginx absorbs a slow client's request before handing it to Apache, and soaks up Apache's response the moment it is generated, keeping the heavy Apache process tied up for as few milliseconds as possible. This configuration assumes a standard CoolVDS instance with 4 CPU cores; if you have fewer, adjust worker_processes accordingly. Note the proxy buffer directives in the virtual host further down: they are critical. If your buffers are too small, Nginx spills proxied responses to temporary files on disk, which kills your I/O performance. On a high-performance RAID-10 SAS setup disk is fast, but RAM is always faster.
# /etc/nginx/nginx.conf
user nginx;
worker_processes 4;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
    worker_connections 2048;
    use epoll;
}
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Optimization for sending files
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;

    # Gzip Compression to save bandwidth
    gzip on;
    gzip_disable "msie6";
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    include /etc/nginx/conf.d/*.conf;
}
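One worker per core is the usual rule of thumb for a proxy workload, so check what your VPS actually exposes before hard-coding worker_processes:
# Count the CPU cores the kernel sees
grep -c ^processor /proc/cpuinfo
With the values above, the theoretical ceiling is worker_processes x worker_connections = 4 x 2048 = 8192 simultaneous connections; keep in mind that each proxied request also consumes a connection to the Apache backend, so the practical client count is roughly half that.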
Step 3: The Virtual Host (The Proxy Pass)
This is where the magic happens. We create a server block that intercepts traffic. We explicitly tell Nginx to serve static files directly from the disk (bypassing Apache entirely) and forward everything else to port 8080. We must also forward the X-Real-IP and X-Forwarded-For headers; otherwise your Apache logs will show all traffic coming from 127.0.0.1, making your analytics useless and security audits impossible (Apache also has to be told to actually use those headers; see the note after the configuration).
# /etc/nginx/conf.d/mysite.com.conf
server {
    listen 80;
    server_name mysite.com www.mysite.com;

    # Serve Static Content Directly
    location ~* ^/(images|css|js|uploads)/ {
        root /var/www/html/mysite;
        expires 30d;
        access_log off;
    }

    # Pass Dynamic Content to Apache
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_redirect off;

        # Headers to pass real client IP to backend
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # Buffer settings to handle heavy payloads
        client_max_body_size 10m;
        client_body_buffer_size 128k;
        proxy_connect_timeout 90;
        proxy_send_timeout 90;
        proxy_read_timeout 90;
        proxy_buffer_size 4k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
    }
}
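One caveat: forwarding X-Real-IP is only half the job. Apache keeps writing 127.0.0.1 to its access log until you either install mod_rpaf or switch the log format to the forwarded header. A minimal sketch of the LogFormat route (the format name combined_proxy is just an example):
# /etc/httpd/conf/httpd.conf
# Log the client address forwarded by Nginx instead of 127.0.0.1
LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined_proxy
CustomLog logs/access_log combined_proxy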
Infrastructure Matters: The "CoolVDS" Factor
Software optimization can only take you so far. If your underlying host is overselling CPUs or putting you on a congested network, your epoll settings won't save you. In a shared environment (OpenVZ), "noisy neighbors" can steal your CPU cycles, causing jitter even if your Nginx config is perfect. This is why professional deployments in the Nordic region demand Xen virtualization. With Xen, you get dedicated resources and true hardware isolation.
Furthermore, latency is the silent killer of user experience. If your target audience is in Oslo or the greater European region, hosting on servers physically located near the NIX (Norwegian Internet Exchange) reduces the Round Trip Time (RTT) significantly compared to budget hosting in the US. At CoolVDS, we utilize enterprise-grade 15k RPM SAS RAID-10 arrays and pure Xen virtualization. We don't believe in "burst" RAM; you get what you pay for, allocated 100% of the time. This stability is crucial when you are fine-tuning timeouts in milliseconds.
Legal Considerations in 2010
For Norwegian businesses, adhering to the Personopplysningsloven (Personal Data Act) is mandatory. Hosting your data within national borders simplifies compliance significantly compared to navigating the complex Safe Harbor agreements required when hosting in the US. By keeping your Nginx proxy and backend database on local soil, you satisfy the Datatilsynet requirements while gaining the latency advantage.
Final Verification
Before you restart services, test your configuration. A syntax error in nginx.conf will bring down your site.
# Check config syntax
service nginx configtest
# If OK, restart services
service httpd restart
service nginx start
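A couple of quick smoke tests will confirm the split is actually working (the image path below is hypothetical, so point it at a file that exists on your site; ab ships in the httpd-tools package):
# Port 80 should now answer with Server: nginx
curl -I http://mysite.com/
# A static file should carry the 30-day Expires header set by Nginx
curl -I http://mysite.com/images/logo.png
# Rough load test: 1000 requests, 100 concurrent
ab -n 1000 -c 100 http://mysite.com/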
You have now multiplied your server's capacity for concurrent connections several times over, because static requests and slow clients never touch an Apache child again. The days of Apache crashing because 300 people decided to visit your site at once are over. However, as your traffic scales, you will eventually hit the I/O bottleneck of rotating platters. When that day comes, you'll need the raw throughput of enterprise storage arrays found in our high-performance tier.
Ready to benchmark the difference? Deploy a CoolVDS Xen instance today and see how low your load average can go.