Stop Letting Apache Kill Your RAM: High-Performance Nginx Reverse Proxy Guide
I still see it every day. A perfectly good dual-core VPS brought to its knees because some developer decided to serve static assets through Apache's heavy mod_php workers. It is 2012, folks. We know better. If you are serving .jpg or .css files with a process that consumes 40MB of RAM per thread, you are essentially setting money on fire.
In the Norwegian hosting market, where bandwidth is pristine but latency to the continent can still bite you if you aren't careful, efficiency is the only metric that matters. I recently migrated a high-traffic news portal in Oslo from a pure Apache setup to an Nginx reverse proxy architecture. The result? Load average dropped from 8.0 to 0.4. Here is exactly how we did it, and how you can stop waking up at 3 AM to restart httpd.
The Architecture: Why Nginx Wins
Apache is a beast. It is fantastic for dynamic content, .htaccess flexibility, and compatibility. But it uses a thread-per-connection (or process-per-connection) model. When a user on a slow mobile 3G connection in Tromsø downloads a 2MB image, that Apache thread stays open for the entire duration, eating RAM.
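The arithmetic behind that complaint is worth spelling out. A quick sketch using the 40MB-per-worker figure from above; the connection count is an illustrative assumption, not a measurement:

```shell
#!/bin/sh
# Back-of-the-envelope RAM cost of Apache's worker-per-connection model.
# 40MB per worker is the figure cited above; 256 concurrent slow clients
# is an illustrative assumption.
workers=256
mb_per_worker=40
awk -v w="$workers" -v m="$mb_per_worker" \
    'BEGIN { printf "Apache: ~%d MB RSS to hold %d slow connections open\n", w * m, w }'
# Nginx holds the same 256 sockets inside a handful of event-loop
# workers, typically in the tens of MB total.
```

That is roughly 10GB of RAM just to babysit slow downloads, which is why a 2GB VPS falls over long before MaxClients is ever reached.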
Nginx is event-driven. It can handle thousands of concurrent connections with a tiny memory footprint. By placing Nginx in front (port 80) and Apache behind it (port 8080), Nginx handles the slow clients and static files, while Apache only deals with the heavy lifting of PHP generation. This is the buffering advantage.
Step 1: The Foundation (CentOS 6 / Debian 6)
First, we need the Nginx repositories. If you are on CentOS 6 (standard for enterprise), don't use the outdated default yum repos. Add the official source by creating /etc/yum.repos.d/nginx.repo:
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/6/$basearch/
gpgcheck=0
enabled=1
Install it. Don't bother compiling from source unless you need a specific module like http_stub_status_module that your package build left out (the official packages usually include it these days).
yum install nginx
chkconfig nginx on
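Speaking of http_stub_status_module: if your package includes it, wiring up a status endpoint takes four lines. This is a sketch; the /nginx_status path is my choice, and you should keep the allow list as tight as shown:

```nginx
location /nginx_status {
    stub_status on;      # reports active connections, accepts, requests
    access_log  off;
    allow 127.0.0.1;     # poll it locally with curl or a munin plugin
    deny  all;
}
```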
Step 2: Configuring the Reverse Proxy
We are going to tell Nginx to listen on port 80. Then, we define a proxy_pass to send dynamic requests to Apache. The magic happens in the proxy_* directives.
Edit /etc/nginx/conf.d/default.conf (or your specific vhost):
server {
listen 80;
server_name example.no www.example.no;
# 1. Serve Static Files Directly (Skip Apache)
location ~* \.(jpg|jpeg|gif|png|css|js|ico|xml)$ {
root /var/www/html;
expires 30d;
access_log off;
}
# 2. Pass Everything Else to Apache
location / {
proxy_pass http://127.0.0.1:8080;
proxy_redirect off;
# Essential Headers for Real IP propagation
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# Buffer Settings (Critical for preventing disk I/O thrashing)
client_body_buffer_size 128k;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffer_size 4k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
}
}
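One step this vhost silently assumes: Apache must vacate port 80 and listen on 8080 instead. A minimal sketch for the Apache side (the paths and the example.no vhost are illustrative; Debian splits this across ports.conf and sites-available):

```apache
# /etc/httpd/conf/httpd.conf -- bind to loopback only, so clients
# cannot bypass the Nginx proxy by hitting port 8080 directly.
Listen 127.0.0.1:8080
NameVirtualHost 127.0.0.1:8080

<VirtualHost 127.0.0.1:8080>
    ServerName   example.no
    DocumentRoot /var/www/html
</VirtualHost>
```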
Pro Tip: If you don't forward the X-Real-IP header, your Apache logs will show all traffic coming from 127.0.0.1. This breaks your analytics and security scripts like Fail2Ban. Make sure you install mod_rpaf on the Apache side to translate these headers back into real client IPs.
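A minimal mod_rpaf configuration might look like the following (directive names follow the classic mod_rpaf 0.6 syntax; the module filename varies by distro build, so treat it as an assumption and check yours):

```apache
# /etc/httpd/conf.d/mod_rpaf.conf
LoadModule rpaf_module modules/mod_rpaf-2.0.so
RPAFenable On
RPAFsethostname On          # restore the Host header Apache logs and uses
RPAFproxy_ips 127.0.0.1     # only trust headers coming from our own Nginx
RPAFheader X-Forwarded-For  # header to read the real client IP from
```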
Step 3: Optimizing for Hardware
Software configuration is only half the battle. The underlying virtualization technology dictates your ceiling. In 2012, many providers are still pushing OpenVZ containers where resources are oversold. If your neighbor gets hit by a DDoS, your Nginx buffers starve, and you go down.
This is why at CoolVDS, we rely strictly on KVM (Kernel-based Virtual Machine). It provides true hardware isolation. When we allocate RAM to your instance, it is locked to your kernel. Furthermore, we use enterprise-grade SSDs. Nginx relies heavily on file descriptors and reading static assets from disk. On a standard 7200 RPM SATA drive, high concurrency creates an I/O wait bottleneck. On our SSD arrays, the seek time is virtually zero.
Tuning `nginx.conf` for SSD and Multi-Core
Open your main config at /etc/nginx/nginx.conf and adjust the worker processes to match your CPU cores. If you are on a CoolVDS 4-Core plan, set this explicitly.
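If you are not sure how many cores the hypervisor actually gives you, ask the kernel:

```shell
#!/bin/sh
# Count logical CPUs and emit the matching worker_processes directive.
cores=$(grep -c '^processor' /proc/cpuinfo)
echo "worker_processes ${cores};"
```

On a 4-core plan this prints worker_processes 4; paste the result straight into nginx.conf.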
user nginx;
worker_processes 4; # Match your CPU Cores
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
use epoll; # Essential for Linux 2.6+
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Performance Tweaks
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
# Gzip is CPU intensive but saves bandwidth
gzip on;
gzip_disable "msie6";
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
}
Compliance and Data Location
Operating in Norway brings specific responsibilities. The Data Inspectorate (Datatilsynet) is strict about where personal data resides. By hosting on a VPS physically located in Oslo, you satisfy the requirements of the Personal Data Act regarding jurisdiction. Furthermore, latency to NIX (Norwegian Internet Exchange) is often under 2ms from our data centers.
Don't route your Norwegian traffic through a datacenter in Frankfurt just to save fifty kroner. The latency penalty on SSL handshakes alone makes the site feel sluggish.
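The penalty is easy to estimate: before the first byte of HTTPS arrives, you pay roughly three round trips (one for TCP, two for the full SSL handshake, with no session resumption). The RTT values below are illustrative assumptions, not measurements:

```shell
#!/bin/sh
# ~3 RTTs of setup latency per fresh HTTPS connection (TCP handshake
# plus a full SSL handshake). RTT figures are illustrative assumptions:
# ~2ms Oslo to NIX vs ~35ms Oslo to Frankfurt.
for rtt_ms in 2 35; do
    awk -v r="$rtt_ms" \
        'BEGIN { printf "RTT %2d ms => ~%3d ms before first byte\n", r, 3 * r }'
done
```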
Final Thoughts
Nginx isn't just a trend; it is the necessary evolution of the web stack. By decoupling the serving of static assets from the processing of dynamic code, you stabilize your infrastructure.
But remember: a tuned engine needs a solid chassis. Running this stack on oversold, spinning-rust storage is a waste of time. You need consistent I/O performance.
Ready to drop your load average? Deploy a KVM instance on CoolVDS today. Our SSD-backed storage is ready for your high-concurrency workloads.