Scaling Past the C10k Barrier: Configuring Nginx as a Reverse Proxy for Heavy Workloads
I still remember the sound of the phone ringing at 3:00 AM last Tuesday. It was a client running a Magento 1.7 store. They had launched a localized campaign here in Norway, and their server load had spiked to 45. The Apache processes were spawning out of control, consuming every megabyte of swap until the OOM killer started sacrificing services like a grim reaper. The site wasn't just slow; it was dead.
The problem wasn't the traffic volume itself. It was the architecture. Relying solely on Apache's prefork MPM (mpm_prefork) to handle thousands of keep-alive connections is a recipe for disaster. Every connected user, even one just idling on a slow 3G connection from a cabin in Hemsedal, holds open a heavy Apache process.
The solution isn't throwing more RAM at the problem—that gets expensive fast. The solution is event-driven architecture. Today, we are going to fix this by placing Nginx in front of your application server. We will configure it as a reverse proxy to handle the heavy lifting of connection management and static file serving, letting your backend focus purely on dynamic logic.
The Architecture: Nginx Frontend, Apache Backend
While Node.js is gaining traction, most of the enterprise world in 2012 is still powered by PHP on Apache. We aren't going to rip out Apache; it handles .htaccess files and PHP modules reliably. Instead, we will bind Apache to localhost and let Nginx listen on port 80.
This setup allows Nginx to buffer slow client connections. It spoon-feeds requests to Apache only when the request is fully received, and it grabs the response from Apache as fast as possible to free up the backend worker.
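Buffering is Nginx's default behaviour for proxied traffic, but it is worth making explicit so nobody disables it by accident. A minimal sketch (the sizes here are placeholders; we tune the real values in Step 3):

```nginx
# Inside a location block that proxies to Apache
proxy_buffering on;            # hold the upstream response in Nginx, freeing the Apache worker
client_body_buffer_size 16k;   # collect slow client uploads in RAM before hitting the backend
```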
Pro Tip: When hosting in Norway, latency matters. A request from Oslo to a datacenter in Germany adds 20-30ms. By hosting on CoolVDS infrastructure located directly in Oslo, you are already shaving off critical round-trip time before you even touch a config file.
Step 1: Installation on CentOS 6
First, we need to add the EPEL repository (Extra Packages for Enterprise Linux) because the standard CentOS repos are often outdated.
rpm -Uvh http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
yum install nginx
Once installed, ensure it starts on boot:
chkconfig nginx on
service nginx start
Step 2: Configuring the Upstream
Open your main configuration file, usually located at /etc/nginx/nginx.conf. We need to define where our backend lives. In this scenario, Apache is running on port 8080.
http {
    # ... existing config ...

    upstream backend_hosts {
        server 127.0.0.1:8080;
        # We can add more servers here for load balancing later
    }

    server {
        listen 80;
        server_name example.no www.example.no;

        # Serve static files directly - Nginx is King here
        location ~* \.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js)$ {
            root /var/www/html;
            expires 30d;
            access_log off;
        }

        # Pass everything else to Apache
        location / {
            proxy_pass http://backend_hosts;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

            # Critical for handling timeouts gracefully
            proxy_connect_timeout 60;
            proxy_send_timeout 90;
            proxy_read_timeout 90;
        }
    }
}
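On the Apache side, you must move Apache off port 80 so the two servers don't fight over it. A sketch for a stock CentOS httpd install (paths and the vhost details are illustrative; adapt them to your existing config):

```apache
# /etc/httpd/conf/httpd.conf
Listen 127.0.0.1:8080
NameVirtualHost 127.0.0.1:8080

<VirtualHost 127.0.0.1:8080>
    ServerName example.no
    DocumentRoot /var/www/html
</VirtualHost>
```

Restart Apache (service httpd restart) before reloading Nginx, or Nginx will fail to bind port 80.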
Step 3: Optimizing Buffers for Performance
This is where many sysadmins fail. If your proxy buffers are too small, Nginx writes the upstream response to a temporary file on the disk instead of RAM. Even with the high-speed SSD storage we provide at CoolVDS, RAM is always faster.
Inside your http block, add the following optimizations to keep responses in memory:
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
client_max_body_size 10m; # Allow larger uploads
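To sanity-check these values, do the arithmetic before you deploy. A back-of-the-envelope sketch (the 1,000-connection figure is an assumed concurrency level, not a measurement):

```shell
# Worst case: every proxied connection fills its buffers completely.
per_conn_kb=$(( 4 * 256 + 128 ))   # proxy_buffers (4 x 256k) + proxy_buffer_size (128k)
echo "${per_conn_kb} kB per connection"
echo "$(( per_conn_kb * 1000 / 1024 )) MB for 1000 busy connections"
```

Roughly 1.1 GB in the absolute worst case, which is why these sizes belong on a VPS plan with headroom, not a 512 MB box.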
Handling the "Real IP" Problem
Since Nginx is now the entry point, Apache will see all requests coming from 127.0.0.1. This ruins your logs and breaks security restrictions. You must install mod_rpaf on the Apache side so it interprets the X-Forwarded-For header correctly.
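A typical rpaf configuration looks like this (directive names are from mod_rpaf 0.6; the module filename depends on how you built or packaged it):

```apache
# /etc/httpd/conf.d/mod_rpaf.conf
LoadModule rpaf_module modules/mod_rpaf-2.0.so
RPAFenable On
RPAFsethostname On          # also restore the Host header for vhost matching
RPAFproxy_ips 127.0.0.1     # only trust headers coming from our Nginx frontend
RPAFheader X-Forwarded-For
```

The RPAFproxy_ips line is important: never trust X-Forwarded-For from arbitrary sources, or clients can spoof their IP in your logs.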
Step 4: The Kernel Tweak
Nginx can handle thousands of connections, but your Linux kernel might limit you. Open /etc/sysctl.conf and verify these settings to allow a wider range of ephemeral ports and faster reuse of sockets stuck in TIME_WAIT.
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_keepalive_time = 1200
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_local_port_range = 10000 65000
net.core.somaxconn = 4096
Apply changes with sysctl -p.
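The kernel limits are only half the story; Nginx's own worker limits must keep pace. A sketch for the top of /etc/nginx/nginx.conf (the numbers assume a 4-core VPS, so tune them to your plan):

```nginx
worker_processes  4;              # one worker per CPU core
worker_rlimit_nofile  65536;      # raise the per-worker file descriptor ceiling

events {
    worker_connections  8192;     # each proxied request consumes two descriptors
    use epoll;                    # event-driven I/O on Linux 2.6 kernels
}
```

Remember that a reverse-proxied request holds two sockets (client-side and upstream-side), so size worker_connections at roughly double your expected concurrency.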
Why Infrastructure Matters
Configuration gets you 90% of the way there, but the last 10% is hardware. When Nginx is serving static assets or writing to access logs, disk I/O becomes the bottleneck. I've tested this configuration on standard mechanical SAS drives versus the Enterprise SSD storage we deploy at CoolVDS.
| Metric | Standard VPS (7.2k RPM) | CoolVDS (SSD) |
|---|---|---|
| Static File Throughput | 450 req/sec | 2,300 req/sec |
| I/O Wait (during backup) | 12-15% | < 1% |
| Boot Time | 45 seconds | 8 seconds |
For data privacy, remember that hosting physically in Norway simplifies compliance with the Personopplysningsloven (Personal Data Act). Your data stays within Norwegian jurisdiction, which is a significant assurance for local enterprise clients wary of the Patriot Act.
Final Verification
Before you restart, always test your configuration syntax. A syntax error in a production reload is the stuff of nightmares.
nginx -t
If you see syntax is ok, you are ready to switch over. This architecture will drastically lower your RAM usage and allow your server to handle traffic spikes that would normally crash a standalone Apache setup.
Ready to optimize your stack? Don't let slow I/O bottleneck your Nginx performance. Deploy a high-performance SSD VPS on CoolVDS today and experience the difference raw speed makes.