Scaling Past the C10k Barrier: High-Performance Nginx Reverse Proxy Guide
If you are still serving static images directly through Apache's prefork MPM, you are essentially burning RAM for fun. It is 2010. We have moved past the era where throwing more hardware at a problem is the only solution. The C10k problem—handling ten thousand concurrent connections—is real, and if your site hits the front page of Digg or Slashdot tomorrow, a standard LAMP stack will crumble.
I recently audited a high-traffic forum hosted in Oslo. They were running a standard CentOS 5 setup with Apache handling everything. During peak hours, top showed the load average climbing to 20+. The server wasn't CPU bound; it was swapping to death because every tiny request for a .gif or .css file spawned a heavy Apache child process consuming 20MB of memory. The fix wasn't buying a bigger server. The fix was Nginx.
The Architecture: Nginx in Front, Apache Behind
We are not getting rid of Apache. It is still the king of compatibility for PHP and .htaccess rules. Instead, we place Nginx on port 80 as a reverse proxy. It handles the heavy lifting of connection management and static files, while forwarding only dynamic PHP requests to Apache running on port 8080.
This setup works because Nginx uses an event-driven, asynchronous architecture (using the epoll system call on Linux), unlike Apache's blocking process-based model.
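The event model is tunable at the top of nginx.conf. A minimal sketch — the worker count and connection cap below are assumptions; match worker_processes to your CPU core count:

```nginx
# Top of nginx.conf - one worker per CPU core is a sane starting point
worker_processes  2;

events {
    use epoll;                # the event mechanism described above
    worker_connections 4096;  # per-worker connection ceiling
}
```

With two workers and 4096 connections each, a single small VPS can hold over 8000 simultaneous connections — the C10k territory Apache's prefork model cannot reach without gigabytes of RAM.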
Step 1: Installing Nginx on RHEL/CentOS 5
Nginx isn't in the default Yum repositories yet. You will need to add the EPEL repo or compile from source. For production, I prefer compiling so I can strip out unused modules and enable the stub_status module for monitoring. Make sure the build dependencies are installed first: yum install gcc pcre-devel zlib-devel (PCRE is required for regex locations, zlib for gzip).
wget http://nginx.org/download/nginx-0.8.34.tar.gz
tar -zxvf nginx-0.8.34.tar.gz
cd nginx-0.8.34
./configure --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --pid-path=/var/run/nginx.pid --with-http_stub_status_module
make && make install
Step 2: Configuration for Reverse Proxy
Edit /etc/nginx/nginx.conf. We need to define an upstream block for our backend (Apache) and configure the proxy headers so Apache knows the real IP address of the visitor, not the local proxy IP.
http {
    include       mime.types;
    default_type  application/octet-stream;

    sendfile           on;
    tcp_nopush         on;
    keepalive_timeout  65;

    # Define the backend
    upstream backend_apache {
        server 127.0.0.1:8080;
    }

    server {
        listen       80;
        server_name  example.com www.example.com;

        # Serve static files directly - bypassing Apache completely
        location ~* \.(jpg|jpeg|gif|png|css|js|ico|xml)$ {
            root     /var/www/html;
            expires  30d;
        }

        # Pass everything else to Apache
        location / {
            proxy_pass      http://backend_apache;
            proxy_redirect  off;

            # Critical for logs and PHP scripts to see the real user IP
            proxy_set_header  Host             $host;
            proxy_set_header  X-Real-IP        $remote_addr;
            proxy_set_header  X-Forwarded-For  $proxy_add_x_forwarded_for;

            client_max_body_size     10m;
            client_body_buffer_size  128k;

            proxy_connect_timeout  90;
            proxy_send_timeout     90;
            proxy_read_timeout     90;

            proxy_buffer_size           4k;
            proxy_buffers               4 32k;
            proxy_busy_buffers_size     64k;
            proxy_temp_file_write_size  64k;
        }
    }
}
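Since we compiled with --with-http_stub_status_module, it is worth exposing a monitoring endpoint while we are in here. A sketch to drop inside the server block — the /nginx_status path is my own convention, name it whatever you like, but do restrict access:

```nginx
# Inside the server block: live connection counters for monitoring scripts
location /nginx_status {
    stub_status  on;
    access_log   off;
    allow        127.0.0.1;  # only local monitoring tools
    deny         all;
}
```

A curl to http://127.0.0.1/nginx_status then reports active connections, accepts, and requests handled — handy for graphing with Munin or Cacti.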
Note: On the Apache side, make sure you install mod_rpaf so that Apache logs the correct IP addresses from the X-Forwarded-For header.
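On the Apache side that means two changes in httpd.conf: moving the listener off port 80 and loading mod_rpaf. A sketch — the module filename and path depend on how you built mod_rpaf, so treat those as assumptions:

```apache
# httpd.conf - move Apache behind the proxy and trust its headers
Listen 127.0.0.1:8080

LoadModule rpaf_module modules/mod_rpaf-2.0.so  # path varies by build
RPAFenable       On
RPAFsethostname  On
RPAFproxy_ips    127.0.0.1
RPAFheader       X-Forwarded-For
```

Binding Apache to 127.0.0.1 rather than all interfaces also means nobody can bypass the proxy and hit port 8080 from the outside.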
Hardware Matters: Virtualization and I/O
Software optimization can only go so far. If your VPS provider is overselling their nodes using OpenVZ, you will suffer from "noisy neighbor" syndrome. When another user on the node runs a backup script, your I/O wait times spike, and your Nginx buffers fill up, causing lag regardless of your config.
Pro Tip: Always check your disk I/O latency. In a reverse proxy setup, you are writing logs and temporary buffer files constantly. If iostat shows high await times, move providers.
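A quick way to check, sketched below. On a live box you would run iostat -x 5 3 directly (from the sysstat package); here I parse a hypothetical sample line, and note that the await column position can shift between sysstat versions — on the sysstat 7.x shipped with CentOS 5 it is field 10 of the extended device report:

```shell
# Hypothetical sample device line from `iostat -x`; in practice run:
#   iostat -x 5 3
sample="sda 0.02 3.41 0.45 1.32 12.40 37.80 28.3 0.05 25.71 1.90 0.34"

# Field 10 is await (average wait in ms) on sysstat 7.x / CentOS 5
await=$(echo "$sample" | awk '{print $10}')
echo "await=${await}ms"
```

Sustained await values above 20-30ms on an idle-looking box are the classic signature of an oversold node.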
This is why for mission-critical deployments in the Nordic region, I rely on CoolVDS. Unlike budget hosts that jam hundreds of containers onto a single drive, CoolVDS uses Xen virtualization with strict resource isolation. Their storage backend is built on high-speed SAS RAID-10 arrays, which is absolute overkill for static files but necessary when your database starts growing.
Data Privacy and Latency
For those of us operating in Norway, latency to the NIX (Norwegian Internet Exchange) is a metric we watch closely. Hosting your proxy in a US datacenter while your customers are in Oslo adds 100ms+ of unnecessary latency. Furthermore, with the Personopplysningsloven (Personal Data Act) strictly enforced by Datatilsynet, keeping your server logs and customer data physically within Norwegian borders is the safest legal strategy.
CoolVDS operates out of premium datacenters in Oslo, ensuring your ping times are in the single digits for local users and your data governance is compliant with local regulations.
Summary
By shifting static file serving to Nginx, you can often drop your RAM usage by 40-60%. Apache is left doing what it does best: processing dynamic PHP code. This setup extends the life of your current hardware and prepares you for traffic spikes.
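To put a number on that claim for your own box, sum the resident set size of Apache's children before and after the switch. A sketch using hypothetical ps output — live, you would pipe ps -ylC httpd straight into the awk (field 8 of the -ly format is RSS in KB):

```shell
# Hypothetical `ps -ylC httpd` output; field 8 is RSS in KB
ps_output="S 48 2231 2210 0 80 0 20480 4321 - ? 00:00:01 httpd
S 48 2232 2210 0 80 0 20992 4322 - ? 00:00:02 httpd"

echo "$ps_output" | awk '{sum += $8} END {printf "%.1f MB\n", sum/1024}'
```

Run it at peak load before the change and again after; the difference is the RAM you have handed back to MySQL and the page cache.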
Don't wait for your server to crash during a traffic surge. Reconfigure your stack today, and ensure your underlying infrastructure can handle the I/O. If you need a rock-solid foundation for this setup, spin up a CoolVDS Xen instance and see the stability difference for yourself.