Nginx Reverse Proxy: Crushing Latency and Optimizing Scale
Let’s be honest: if you are still serving static assets directly through Apache in 2012, you are doing it wrong. I recently watched a perfectly good Magento deployment in Oslo crumble because the SysAdmin insisted on using mod_php for everything. The moment a marketing email went out, the server’s RAM vanished, swapped to disk, and the load average spiked to 50. The site didn't just slow down; it died.
The solution wasn't to buy a bigger server. It was to fix the architecture. By placing Nginx in front of Apache, we dropped the memory footprint by 60% and stabilized the load. But software configuration is only half the battle; if your underlying virtualization is fighting for I/O on a crowded node, no amount of config tweaking will save you.
The Architecture: Why Nginx?
Apache is a beast. It’s powerful, but with the prefork MPM, every connection ties up a dedicated process and a chunk of RAM. Nginx uses an event-driven, asynchronous architecture: it can handle thousands of concurrent connections with a tiny memory footprint. In this setup, Nginx handles the heavy lifting—SSL termination, gzip compression, and serving static files (images, CSS, JS)—while Apache does what it does best: processing dynamic PHP or Python code.
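If you want to see the gap on your own box, here is a quick sketch for comparing per-process memory. It assumes Debian-style process names (apache2); on CentOS the Apache processes are called httpd.
# Sum the resident memory (RSS, in kB) of every Apache worker vs. every Nginx worker
ps -C apache2 -o rss= | awk '{sum+=$1} END {print "apache2 total:", sum, "kB"}'
ps -C nginx   -o rss= | awk '{sum+=$1} END {print "nginx total:  ", sum, "kB"}'
Run it before and after a traffic spike and the prefork model's appetite becomes obvious.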
Pro Tip: When hosting in Norway, latency matters. Routing traffic through Frankfurt to serve a customer in Bergen adds unnecessary milliseconds. Keep your data resident in Oslo. Not only does this reduce latency via the NIX (Norwegian Internet Exchange), but it also simplifies compliance with the Personopplysningsloven (Personal Data Act) by keeping data within national borders.
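You can sanity-check the latency argument yourself from a machine in Norway; the hostnames below are placeholders, so substitute your own Oslo and Frankfurt endpoints.
# Round-trip times to an Oslo-hosted box vs. a Frankfurt-hosted one
ping -c 10 oslo-node.example.no
ping -c 10 frankfurt-node.example.com
# mtr shows hop-by-hop where the extra milliseconds come from
mtr --report --report-cycles 10 frankfurt-node.example.com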
Step 1: Installation and Basic Prep
Assuming you are running CentOS 6 or Debian 6 (Squeeze), you’ll need the EPEL repository or dotdeb. Don't rely on the stale default repositories if you want the features of Nginx 1.0.x.
# On CentOS 6
rpm -Uvh http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-5.noarch.rpm
yum install nginx
# On Debian 6
echo "deb http://packages.dotdeb.org squeeze all" >> /etc/apt/sources.list
wget http://www.dotdeb.org/dotdeb.gpg && cat dotdeb.gpg | apt-key add -
apt-get update && apt-get install nginx
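Before touching the config, it is worth confirming you actually got the newer build and that Nginx will come back after a reboot. These are the standard init commands for each distro:
nginx -v                      # should report 1.0.x, not the stale distro package
chkconfig nginx on            # CentOS 6
update-rc.d nginx defaults    # Debian 6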
Step 2: The "Battle-Hardened" Configuration
Out of the box, Nginx is conservative. We need to tune it for the modern web. Open /etc/nginx/nginx.conf: we are going to enable epoll, maximize open file descriptors, and tune the buffers. (On CentOS, set the user directive to nginx instead of www-data.)
user www-data;
worker_processes 4; # Set this to the number of CPU cores you have on your CoolVDS instance
pid /var/run/nginx.pid;

events {
    worker_connections 2048;
    multi_accept on;
    use epoll;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 15;
    types_hash_max_size 2048;

    # Buffer sizes - crucial for POST submissions
    client_body_buffer_size 10K;
    client_header_buffer_size 1k;
    client_max_body_size 8m;
    large_client_header_buffers 2 1k;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Logs
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    # Gzip Settings
    gzip on;
    gzip_disable "msie6";
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
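One thing the snippet above does not show: if you push worker_connections much higher, the workers can run into the default open-file limit. The usual fix is a single directive in the main context of nginx.conf; the value below is illustrative.
# Main context of nginx.conf, same level as worker_processes (not inside http {})
worker_rlimit_nofile 65535;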
Step 3: Configuring the Proxy Pass
Now, let's configure the virtual host to forward traffic to the backend. In this scenario, Apache is running on port 8080. We need to make sure the visitor's real IP address is passed to Apache; otherwise your logs will show 127.0.0.1 for every request, which makes security auditing impossible.
Create a file at /etc/nginx/conf.d/proxy.conf to keep things clean:
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffers 32 4k;
Now, configure your site in /etc/nginx/sites-available/default:
server {
    listen 80;
    server_name example.no www.example.no;

    # Serve static files directly (note the escaped dot before the extension list)
    location ~* \.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js)$ {
        root /var/www/html;
        expires 30d;
        access_log off;
    }

    # Pass everything else to Apache
    location / {
        proxy_pass http://127.0.0.1:8080;
        include /etc/nginx/conf.d/proxy.conf;
    }
}
Restart Nginx: service nginx restart.
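A couple of sanity checks help at this point; the URLs below are placeholders, and on CentOS the Apache log lives under /var/log/httpd/ instead. Also remember that Apache itself will keep logging 127.0.0.1 unless you install mod_rpaf or switch its LogFormat to use %{X-Forwarded-For}i.
nginx -t && service nginx restart           # only restart if the config parses cleanly
curl -I http://example.no/logo.png          # should carry the Expires header from the static block
curl -I http://example.no/index.php         # should be proxied through to Apache on :8080
tail /var/log/apache2/access.log            # confirm dynamic hits arrive at the backend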
The Hardware Bottleneck: Why SSDs Matter
You can optimize your software stack until you are blue in the face, but you cannot code your way out of poor I/O performance. In 2012, many VPS providers are still overselling spinning hard drives (HDD) on OpenVZ nodes. This creates the "noisy neighbor" effect. If another user on the node decides to compile a kernel, your database queries will hang.
This is where CoolVDS takes a different approach. We use KVM virtualization for true hardware isolation—your RAM is yours, and your CPU cycles are guaranteed. More importantly, we are one of the few providers in Europe deploying Pure SSD RAID-10 storage arrays. While standard SAS drives push 150-200 IOPS, our SSD arrays are pushing tens of thousands.
Comparison: Spinning Rust vs. CoolVDS SSD
| Metric | Standard HDD VPS | CoolVDS SSD VPS |
|---|---|---|
| Random Read IOPS | ~120 | ~25,000+ |
| Boot Time | 45 seconds | 8 seconds |
| MySQL Import (500MB) | 180 seconds | 22 seconds |
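Numbers like these are easy to verify yourself. fio is usually a quick yum or apt-get install away and does a reasonable 4K random-read test; the job below is just a sketch, so adjust size and runtime to your disk.
# 4K random reads with direct I/O so the page cache doesn't flatter the result
fio --name=randread --rw=randread --bs=4k --size=1G \
    --direct=1 --ioengine=libaio --iodepth=32 --runtime=60 --time_based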
Security and DDoS Protection
Exposing a server to the web is like painting a target on your back. Nginx provides a first layer of defense. You can mitigate simple DDoS attacks using the limit_req_zone directive to throttle aggressive bots.
http {
    limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;
    ...
    server {
        location /login.php {
            limit_req zone=one burst=5;
        }
    }
}
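To confirm the limit actually bites, hit the protected URL with ApacheBench; once the burst of 5 is used up, the extra requests come back as 503s and show up in the error log. The URL below is a placeholder.
ab -n 20 -c 10 http://example.no/login.php        # expect a pile of non-2xx responses
grep "limiting requests" /var/log/nginx/error.log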
However, for volumetric attacks, software firewalls (iptables) aren't enough. You need upstream filtering. CoolVDS includes hardware-level DDoS protection at our edge routers in Oslo, scrubbing bad traffic before it even hits your eth0 interface.
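That said, iptables is still worth having for the low-grade floods that never saturate the pipe. A minimal sketch follows; the thresholds are illustrative, so tune them against your real traffic before dropping anything.
# Drop clients holding more than 50 concurrent connections to port 80
iptables -A INPUT -p tcp --dport 80 -m connlimit --connlimit-above 50 -j DROP
# Throttle sources opening more than 20 new connections per second
iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --set
iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --update --seconds 1 --hitcount 20 -j DROP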
Conclusion
The days of monolithic Apache servers are ending. By decoupling static serving from dynamic processing, you gain stability and speed. But remember, the software is only as fast as the disk it runs on. Don't let slow I/O kill your SEO rankings or frustrate your Norwegian users.
Ready to see the difference SSDs make? Deploy a test instance on CoolVDS in under 55 seconds and benchmark it yourself.