Nginx Reverse Proxy: The Only Way to Scale LAMP in 2012
If I see one more httpd process spawning a 40MB child just to serve a 2KB CSS file, I am going to pull the plug on the rack myself. It is inefficient, it is reckless, and quite frankly, it is the reason your server hits a load average of 20.0 the moment a marketing email goes out.
We all know the standard LAMP stack is reliable. Apache's mod_php is battle-tested. But Apache is a preforking beast. It creates a new process (or thread) for every connection. When you have 500 users on a slow 3G connection keeping slots open, Apache runs out of RAM, hits swap, and your site dies. The solution isn't "buy more RAM." The solution is architecture.
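Before touching anything, it is worth measuring the damage yourself. The commands below list Apache children sorted by resident memory; the exact numbers depend on which modules you have loaded, so treat the 40MB figure as typical rather than universal.
# List Apache children sorted by resident memory (RSS, in KB)
ps -C httpd -o pid,rss,cmd --sort=-rss | head
# On Debian/Ubuntu the binary is called apache2:
ps -C apache2 -o pid,rss,cmd --sort=-rss | head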
Enter Nginx. By placing Nginx in front of Apache as a reverse proxy, we handle thousands of concurrent connections with a tiny memory footprint, forwarding only the heavy PHP requests to Apache. This is how the big players do it, and it is how you should be doing it on your CoolVDS instances.
The Architecture: Nginx + Apache
Here is the logic: Nginx sits on port 80. It handles all the "boring" work—SSL handshakes, static files (images, JS, CSS), and gzip compression. It serves these instantly because it uses an asynchronous, event-driven architecture. It doesn't care if the client is on a slow connection.
When a request comes in for a PHP file, Nginx proxies it to Apache (running on port 8080 or localhost). Apache processes the logic, talks to MySQL, returns the HTML to Nginx, and goes back to sleep. You get the power of Apache .htaccess compatibility with the raw speed of Nginx.
Step 1: The Setup (CentOS 6 / Ubuntu 10.04)
First, we need to install Nginx. If you are on CentOS 6 (standard enterprise choice), you likely need the EPEL repository because standard repositories are always ancient.
# CentOS 6
rpm -Uvh http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-5.noarch.rpm
yum install nginx
# Ubuntu 10.04 / 11.10
apt-get update
apt-get install nginx
Once installed, do not start it yet. We need to move Apache off port 80 first. Edit /etc/httpd/conf/httpd.conf (or /etc/apache2/ports.conf on Debian/Ubuntu) and change the Listen directive to:
Listen 127.0.0.1:8080
If you use name-based virtual hosts, also update the NameVirtualHost directive and any <VirtualHost *:80> blocks to the new port. Restart Apache. Now port 80 is free for the speed demon.
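To verify it worked, check what is actually listening:
# httpd should be bound to 127.0.0.1:8080 and nothing should own port 80 yet
netstat -tlnp | grep LISTEN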
Step 2: Configuring Nginx as a Reverse Proxy
This is where most people get it wrong. They just forward everything. We want to be surgical. Open your text editor (I use vi because I prefer efficiency) and create a new virtual host configuration: /etc/nginx/conf.d/yourdomain.conf on CentOS, or a file under /etc/nginx/sites-available/ (symlinked into sites-enabled) on Ubuntu.
Below is a production-ready config. Note the proxy_set_header directives. Without these, Apache will think every request is coming from 127.0.0.1 and your access logs (and security plugins) will be useless.
server {
    listen 80;
    server_name www.your-domain.no;
    root /var/www/html;
    index index.php index.html;

    # 1. Serve static files directly (Nginx is 10x faster at this)
    location ~* \.(jpg|jpeg|gif|png|css|js|ico|xml)$ {
        access_log off;
        log_not_found off;
        expires 360d;
    }

    # 2. Deny access to hidden files (.htaccess, .git)
    location ~ /\. {
        access_log off;
        log_not_found off;
        deny all;
    }

    # 3. Pass everything else (PHP and other dynamic requests) to Apache
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Timeouts are critical to prevent hung processes
        proxy_connect_timeout 60;
        proxy_send_timeout 90;
        proxy_read_timeout 90;
    }
}
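Before pointing traffic at it, check the syntax and start Nginx, then request a static file to confirm Nginx (and not Apache) is the one answering. The logo.png path is just an example; use any static file that actually exists in your document root.
# Syntax-check the config, start Nginx, and inspect the response headers
nginx -t
service nginx start
curl -I http://www.your-domain.no/logo.png
# The Server: header should report nginx, and Expires should be roughly a year out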
Pro Tip: On the Apache side, install mod_rpaf (Reverse Proxy Add Forward). This module reads the forwarded header from Nginx (X-Forwarded-For by default, X-Real-IP if you tell it to) and sets the real client IP as Apache's remote address. If you skip this, every hit in Apache's logs appears to come from 127.0.0.1, and your fail2ban jail monitoring those logs will never ban the actual attackers.
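A minimal mod_rpaf configuration looks like the sketch below. The file path and module filename vary between distributions and mod_rpaf versions, so treat this as a starting point rather than gospel.
# /etc/httpd/conf.d/mod_rpaf.conf (path and .so name depend on your distro/build)
LoadModule rpaf_module modules/mod_rpaf-2.0.so
RPAFenable On
RPAFsethostname On
# Only trust forwarded headers that arrive from our own Nginx
RPAFproxy_ips 127.0.0.1
# Match the header Nginx is actually sending
RPAFheader X-Real-IP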
Step 3: Performance Tuning (The "CoolVDS" Factor)
Configuration is software, but performance is physics. Nginx buffers responses from Apache, and anything too large to fit in the in-memory buffers spills over to temporary files on disk. If your disk I/O is slow, those writes stall and request latency climbs with them.
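You can control how much of each response stays in RAM before Nginx spills to disk. The numbers below are illustrative for a modest VPS, not tuned values for your workload; they go inside the proxied location block.
# Keep more of the upstream response in memory (values are illustrative)
proxy_buffer_size 8k;            # buffer for the response headers
proxy_buffers 16 16k;            # 16 buffers of 16k each for the response body
proxy_busy_buffers_size 32k;     # portion that may be busy sending to the client
proxy_max_temp_file_size 64m;    # cap the on-disk spillover per request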
This is why hardware matters. Most VPS providers in Norway are still spinning 7.2k RPM SATA drives in crowded RAID arrays. When Nginx starts writing temporary buffer files during a traffic spike, iowait shoots up and every request stuck behind the disk waits with it.
At CoolVDS, we utilize pure SSD storage arrays. In 2012, this is still a premium luxury for many, but for us, it is a standard requirement. The random I/O performance of SSDs means Nginx can flush buffers instantly. If you are serving a high-traffic Magento store or a vBulletin forum, the difference between SATA and SSD is the difference between a loaded page and a timeout.
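To confirm whether the disk really is the bottleneck during a spike, watch the extended I/O statistics for a few minutes. iostat ships in the sysstat package on both CentOS and Ubuntu.
# Report extended disk stats every 5 seconds; sustained high await and %util
# values mean requests are queuing behind the disk
iostat -x 5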
Advanced: Proxy Caching
If you want to go nuclear on performance, turn on Nginx caching. This saves the output from Apache to a file and serves it to the next visitor without even waking up Apache.
# Add this to nginx.conf inside http block
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m max_size=1g inactive=60m;
# Add this to your server block location /
proxy_cache my_cache;
proxy_cache_valid 200 302 10m;
proxy_cache_valid 404 1m;
Be careful with this on dynamic sites. You don't want to cache a logged-in user's shopping cart. You'll need to add proxy_no_cache logic based on cookies.
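Here is a minimal sketch of that cookie check, assuming a PHP session cookie named PHPSESSID and a WordPress-style login cookie; inspect what your application actually sets and adjust the names before relying on it. It goes in the same location block as the proxy_cache directives.
# Skip the cache for anyone who looks logged in (cookie names are assumptions)
set $skip_cache 0;
if ($http_cookie ~* "PHPSESSID|wordpress_logged_in") {
    set $skip_cache 1;
}
proxy_cache_bypass $skip_cache;
proxy_no_cache $skip_cache;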
Local Considerations: Latency and Law
For those of us hosting in the Nordics, latency to the NIX (Norwegian Internet Exchange) in Oslo is paramount. Keeping your server physically close to your user base reduces RTT (Round Trip Time). When using a Reverse Proxy, you are adding a tiny bit of processing overhead. You compensate for this by ensuring your VPS has a high-quality uplink.
Furthermore, under the Personal Data Act (Personopplysningsloven), you are responsible for the security of logs containing IP addresses. By centralizing your entry point at Nginx, you have a single point to secure. You can easily strip logs or enforce strict access controls using allow/deny directives directly in the Nginx config, satisfying Datatilsynet requirements for access control better than scattering .htaccess files everywhere.
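For example, an admin area can be locked down to known addresses in one place. The 192.0.2.0/24 range below is a documentation placeholder; substitute your own office or VPN addresses.
# Only listed addresses may reach the admin area; everyone else gets a 403
location /admin/ {
    allow 192.0.2.0/24;
    deny all;
    proxy_pass http://127.0.0.1:8080;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}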
Summary
Stop letting Apache manage client connections. It is bad at it. Let Nginx be the bouncer at the door, and let Apache do the cooking in the kitchen. This setup reduces RAM usage, increases concurrency, and stabilizes your server load.
However, software optimization can only take you so far. If your underlying storage is thrashing, your config doesn't matter. Don't build a Ferrari engine on a bicycle frame.
Ready to see how this config flies on real hardware? Spin up a CoolVDS SSD instance today. We offer the low latency and high IOPS your architecture deserves.