Nginx Reverse Proxy: Stop Apache From Eating Your RAM and Crashing Your VPS
Let’s be honest: the traditional LAMP stack is leaking money. If you are still serving static assets (images, CSS, JS) directly through Apache httpd with the prefork MPM, you are essentially setting money on fire. Every time a user requests a 4KB favicon, Apache dedicates an entire heavyweight process, 20MB+ of RAM, to that one connection, and the process stays tied up until the last byte is delivered. Do this a hundred times a second, and your server starts swapping. Once you hit swap, it’s game over.
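You can watch this happen on a live box. A quick sketch that totals the resident memory of all Apache workers (the process name is httpd on CentOS; substitute apache2 on Debian/Ubuntu):

```shell
# Sum the resident set size (RSS) of every Apache process, in MB.
# "httpd" is the CentOS process name; on Ubuntu use "apache2" instead.
ps -C httpd -o rss= | awk '{ sum += $1 } END { printf "%d Apache processes using %.1f MB RSS\n", NR, sum/1024 }'
```

Run it during a traffic spike and compare the total against your physical RAM; if the two numbers are in the same ballpark, you are one marketing push away from swap.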
I saw this happen last week with a client running a Magento store targeting the Norwegian market. They had a perfectly decent dedicated server, but during a small marketing push, the load average spiked to 40. The CPUs weren't calculating physics; they were just context-switching Apache processes waiting for I/O. The solution wasn't throwing more hardware at it—it was architecture.
Enter Nginx. By placing Nginx 1.2.1 as a reverse proxy in front of Apache (or replacing Apache entirely with PHP-FPM), we can handle thousands of concurrent connections with a memory footprint that would barely register on a graph. This guide details exactly how to configure this on a CentOS 6 or Ubuntu 12.04 LTS system to maximize throughput.
The Architecture: The "Waiters and Cooks" Analogy
Think of Apache as a highly skilled chef (the Cook) who is great at processing complex PHP logic but terrible at walking out to the table to hand someone a glass of water. Nginx is the fleet-footed Waiter.
In a reverse proxy setup:
- Nginx (Port 80): Handles all incoming connections. It serves static files (jpg, css, html) instantly from memory or the disk cache, and it absorbs network latency and Slowloris-style slow-client attacks.
- Apache (Port 8080): Only receives requests for dynamic content (PHP, Python). It does the heavy lifting, generates the page, and hands it back to Nginx.
This offloading means Apache processes are only alive for the milliseconds it takes to parse PHP, rather than the seconds it takes a mobile client on a 3G network to download the response.
Step 1: Installing Nginx (The Right Way)
Don't just yum install nginx from the default repositories; they are often outdated. We want the stable 1.2.x branch. On CentOS 6, add the official repository by dropping this into /etc/yum.repos.d/nginx.repo:
```ini
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/6/$basearch/
gpgcheck=0
enabled=1
```
Then install it and set it to start on boot:

```shell
yum install nginx
chkconfig nginx on
```
Step 2: Configuring the Proxy Pass
We need to tell Nginx to listen on port 80 and forward specific traffic to Apache running on 127.0.0.1:8080. Here is a production-ready server block. Note the proxy_buffer directives; these are critical. With adequate buffering, Nginx slurps Apache's response into memory quickly and frees the Apache worker immediately, then drip-feeds the data to the slow client on its own. Without it, the Apache process stays tied up for the duration of the transfer, negating the benefit of the proxy.
```nginx
server {
    listen 80;
    server_name example.com www.example.com;

    root  /var/www/vhosts/example.com/httpdocs;
    index index.php index.html;

    # 1. Serve static assets directly, bypassing Apache entirely
    location ~* \.(jpg|jpeg|gif|png|css|js|ico|xml)$ {
        expires 30d;
        access_log off;
        log_not_found off;
        add_header Pragma public;
        add_header Cache-Control "public";
    }

    # 2. Everything else (PHP and other dynamic requests) goes to Apache
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # Upstream timeouts
        proxy_connect_timeout 90;
        proxy_send_timeout    90;
        proxy_read_timeout    90;

        # Response buffering: absorb Apache's output fast, then
        # drip-feed it to slow clients while Apache moves on
        proxy_buffer_size 4k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
    }
}
```
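One prerequisite the server block above assumes: Apache must actually be listening on 127.0.0.1:8080 instead of port 80. A minimal sketch of the change, assuming a stock CentOS httpd.conf (on Ubuntu the equivalent directive lives in /etc/apache2/ports.conf):

```apache
# /etc/httpd/conf/httpd.conf
# Bind Apache to localhost only; Nginx now owns the public port 80.
Listen 127.0.0.1:8080
```

Apply the changes in order: restart Apache first, then run nginx -t to validate the new Nginx config before starting Nginx, so port 80 is never left dark.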
The "Real IP" Problem
When you switch this on, Apache will see all traffic coming from 127.0.0.1. This breaks your access logs and security plugins. You must install mod_rpaf on the Apache side and point it at the header Nginx sets, so Apache substitutes the real client IP before logging or access checks run.
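Here is a sketch of that mod_rpaf configuration, assuming the module is already installed (yum install mod_rpaf from a third-party repo on CentOS, or apt-get install libapache2-mod-rpaf on Ubuntu; the .so path varies by package). By default mod_rpaf reads X-Forwarded-For, so RPAFheader redirects it to the X-Real-IP header our Nginx config sets:

```apache
# /etc/httpd/conf.d/rpaf.conf  (module path varies by distro/package)
LoadModule rpaf_module modules/mod_rpaf-2.0.so
RPAFenable On
RPAFsethostname On
# Only trust proxy headers arriving from Nginx itself
RPAFproxy_ips 127.0.0.1
RPAFheader X-Real-IP
```

After an Apache restart, your access logs and .htaccess IP rules behave exactly as they did before the proxy went in.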
Step 3: Gzip Compression (Free Bandwidth)
Bandwidth is expensive, but CPU cycles on modern Virtual Dedicated Servers (VDS) are relatively cheap. Enabling Gzip compression in nginx.conf typically reduces payload size by around 70% for text-based assets.
```nginx
http {
    gzip on;
    gzip_disable "msie6";
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
}
```
Pro Tip: Don't set gzip_comp_level to 9. The CPU cost to compress that last 1% isn't worth it. Level 6 is the sweet spot between latency and compression ratio.
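You can sanity-check that claim locally. A rough sketch using the command-line gzip tool (the same DEFLATE algorithm Nginx uses) on a repetitive text payload:

```shell
# Generate ~90KB of text, then compare compressed sizes at levels 6 and 9.
yes "The quick brown fox jumps over the lazy dog." | head -n 2000 > payload.txt
wc -c < payload.txt               # original size
gzip -c -6 payload.txt | wc -c    # level 6
gzip -c -9 payload.txt | wc -c    # level 9: barely smaller, noticeably slower
```

On real HTML and CSS the size gap between levels 6 and 9 is usually a fraction of a percent, while the CPU time per request climbs steeply.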
Why Infrastructure Matters: The CoolVDS Difference
Configuration is half the battle. The other half is the metal underneath. You can optimize Nginx all day, but if your host is putting you on a crowded node with spinning rust (HDD) and high I/O wait, you will still lag.
At CoolVDS, we don't oversell. We use KVM virtualization, which means your RAM is your RAM. Unlike OpenVZ containers where a neighbor can steal your resources, our KVM instances provide true kernel isolation.
Storage I/O: SSD vs 15k SAS
Most Norwegian hosts are still running 15k RPM SAS drives in RAID 10. That was fine in 2010. Today, we are deploying pure SSD arrays. Look at the difference in random read operations:
| Drive Type | IOPS (Random Read) | Latency |
|---|---|---|
| Standard SAS (15k RPM) | ~180 - 200 | 5-10ms |
| CoolVDS SSD | 50,000+ | < 0.1ms |
When Nginx is serving static files from the disk cache, that IOPS difference is what makes your site load instantly versus "fast enough."
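If you want to check what your own host is actually giving you, here is a crude sketch with dd. Note this measures synchronous sequential writes, not random IOPS; use a dedicated tool such as fio or ioping for numbers comparable to the table above:

```shell
# Write 4MB in 4KB blocks and force it to disk before dd reports timing.
# On SSD this completes near-instantly; on a contended HDD node it crawls.
dd if=/dev/zero of=ddtest.bin bs=4k count=1000 conv=fdatasync
rm -f ddtest.bin
```

Run it a few times at different hours; wildly varying results are a telltale sign of noisy neighbors on an oversold node.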
Local Latency and Data Privacy
For those of us operating in Norway, latency to the NIX (Norwegian Internet Exchange) in Oslo is paramount. Hosting your servers in Germany or the US adds 30-100ms of round-trip time. CoolVDS infrastructure is peered directly in Oslo.
Furthermore, with the Datatilsynet (Data Inspectorate) tightening enforcement on the Personal Data Act (Personopplysningsloven), knowing exactly where your data physically resides is becoming a legal necessity, not just a preference. Keeping your data within Norwegian borders simplifies compliance significantly.
Summary
Implementing Nginx as a reverse proxy is the single most effective upgrade you can make to a LAMP stack server in 2012. It separates the concerns of connection handling from application logic, allowing your server to breathe under load.
Ready to test this setup? Don't risk your production environment. Spin up a KVM instance on CoolVDS today. With our SSD storage and low-latency network, you'll see what your code is actually capable of.