The Apache Monolith is Killing Your I/O
It is 2011. We need to stop pretending that spawning a 25MB Apache process just to serve a 2KB favicon is an acceptable use of resources. If you are running a high-traffic site targeting users in Oslo or Bergen, relying solely on the traditional LAMP stack is a recipe for server thrashing.
I recently audited a Magento installation for a client in Stavanger. Their quad-core server was hitting a load average of 20.0 during peak hours. The culprit? Hundreds of Apache workers tied up serving static images to slow mobile clients (3G connections are notoriously high-latency). The kernel was swapping memory to disk so aggressively the hard drives sounded like a coffee grinder.
The solution wasn't adding more RAM. It was putting Nginx in front of Apache.
Why Nginx 1.0 Changes the Game
Nginx finally hit version 1.0.0 earlier this month (April 12, 2011). It is production-ready. Unlike Apache's thread/process-per-connection model, Nginx uses an asynchronous, event-driven architecture. It can handle 10,000 concurrent connections with a trivial memory footprint.
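That event model is surfaced through a handful of worker directives. A minimal sketch for a small VPS follows; the values are illustrative starting points, not tuned numbers for your workload:

```nginx
# nginx.conf (top level) - worker tuning, illustrative values
worker_processes  2;             # roughly one per CPU core

events {
    worker_connections  4096;    # per-worker connection ceiling
    use epoll;                   # the efficient event method on Linux 2.6
}
```

Two workers at 4096 connections each comfortably covers the "10,000 concurrent connections" scenario, and each connection costs kilobytes, not a 25MB process.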
In a CoolVDS environment, where we provide strictly dedicated resources via KVM or high-grade OpenVZ, efficient resource usage translates directly to speed. You don't need a larger plan; you need a smarter architecture.
The Architecture: Nginx + Apache Proxy
We aren't ditching Apache entirely (yet). .htaccess compatibility is still vital for many PHP applications. Instead, we put Nginx in front of it as a reverse proxy.

- Nginx (Port 80): Handles incoming connections, serves static files (jpg, css, js) directly from the disk/cache.
- Apache (Port 8080): Only handles dynamic requests (PHP, Python) forwarded by Nginx.
This setup, often called "Nginx termination," shields heavy Apache processes from the "Slowloris" effect of slow clients.
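Because Nginx holds those slow connections so cheaply, you can also enforce tight client-side timeouts so stalled clients get dropped early instead of lingering. These go in the http block; the values below are illustrative starting points, not recommendations for every site:

```nginx
# http block: drop stalled clients early (illustrative values)
client_header_timeout 10;
client_body_timeout   10;
send_timeout          10;
keepalive_timeout     15;
```

A Slowloris-style client that trickles one header byte at a time now gets cut off in seconds, and no Apache worker ever sees it.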
Configuration Strategy
Here is the reference configuration we use on CentOS 5.6 and Ubuntu 10.04 LTS instances at CoolVDS. This assumes you have compiled Nginx from source or used the EPEL repositories.
1. The Proxy Setup
Open /etc/nginx/nginx.conf. We need to ensure headers are passed correctly so Apache logs the real visitor IP, not 127.0.0.1.
server {
    listen 80;
    server_name example.no www.example.no;

    # Serve static files directly - NO Apache involvement
    location ~* \.(jpg|jpeg|gif|png|css|js|ico|xml)$ {
        access_log off;
        log_not_found off;
        expires 30d;
        root /var/www/vhosts/example.no/httpdocs;
    }

    # Pass dynamic content to Apache
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # Crucial for preventing timeouts on complex PHP scripts
        proxy_connect_timeout 60;
        proxy_send_timeout 90;
        proxy_read_timeout 90;
    }
}
2. Install mod_rpaf on Apache
On the backend, Apache needs to know it's behind a proxy. Install mod_rpaf (Reverse Proxy Add Forward). Without this, your access logs will be useless, and IP-based blocking won't work.
# On Debian/Ubuntu
apt-get install libapache2-mod-rpaf
# Configuration inside apache2.conf
RPAFenable On
RPAFsethostname On
RPAFproxy_ips 127.0.0.1
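None of this works until Apache actually vacates port 80. On Debian/Ubuntu the listening port lives in /etc/apache2/ports.conf (on CentOS it is the Listen directive in /etc/httpd/conf/httpd.conf); a minimal sketch, binding Apache to the loopback so only Nginx can reach it:

```apache
# /etc/apache2/ports.conf - move Apache behind the proxy
NameVirtualHost 127.0.0.1:8080
Listen 127.0.0.1:8080
```

Restart Apache before starting Nginx, or Nginx will fail to bind because port 80 is still taken.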
Pro Tip: If you are serving users across Norway, latency matters. Our data center connects directly to the NIX (Norwegian Internet Exchange). Using Nginx's gzip_static on; directive ensures you aren't wasting CPU cycles compressing the same CSS file every request. Pre-compress your assets.
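Pre-compressing is a one-liner you can run from cron or your deploy script. A sketch, using a demo document root (the DOCROOT path and sample file are illustrative; point it at your real httpdocs directory):

```shell
# Pre-compress CSS/JS so gzip_static can serve the .gz copies directly.
# DOCROOT here is a throwaway demo path - substitute your real docroot.
DOCROOT="${DOCROOT:-/tmp/gzip-demo}"
mkdir -p "$DOCROOT"
echo "body { margin: 0; }" > "$DOCROOT/style.css"   # demo asset

# gzip -c keeps the original file and writes a sibling .gz
find "$DOCROOT" -type f \( -name '*.css' -o -name '*.js' \) |
while read -r f; do
    gzip -9 -c "$f" > "$f.gz"
done

ls "$DOCROOT"
```

With the .gz copies on disk, gzip_static on; in the matching location block lets Nginx send them verbatim instead of compressing on every request. Note the module is not built by default; compile with --with-http_gzip_static_module.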
Tuning Buffers for High Load
Default Nginx settings are too conservative for modern traffic spikes. If you see "upstream sent too big header" errors in your error log, your buffers are too small.
Add this to your http block:
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
This ensures that Nginx can buffer the full response from Apache before sending it to the client. This frees up the Apache worker almost instantly, allowing it to handle the next request, while Nginx handles the slow network transmission to the user.
The Hardware Reality: Why SSDs Matter
Nginx is incredibly fast, but it is still bound by disk I/O when serving static assets or writing logs. Most VPS providers in Europe still run on 7.2k RPM SATA drives. When traffic spikes, the disk queue creates a bottleneck.
At CoolVDS, we have begun deploying Enterprise SSD storage arrays. The difference in random read performance is not 20%; it is closer to 2000%. For a reverse proxy cache, mechanical seek times are the enemy. If you are serious about performance, ensure your hosting platform isn't running on antiquated spinning rust.
Compliance Note: Datatilsynet
Operating in Norway requires strict adherence to the Personal Data Act. When configuring Nginx logs (`access_log`), be mindful of what you store. If you are logging IP addresses, you are processing personal data. Ensure your server security is tight—Nginx allows you to hide version numbers (`server_tokens off;`) to make reconnaissance harder for attackers.
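Both points are two directives in the http block. The log format name below is my own invention and the fields are a deliberately minimal example; keep whatever your analytics genuinely need, but notice it omits $remote_addr entirely:

```nginx
# http block: hide the version banner, trim what the logs retain
server_tokens off;

# "minimal" is an illustrative format name - no client IP is stored
log_format minimal '$time_local "$request" $status $body_bytes_sent';
access_log /var/log/nginx/access.log minimal;
```

If you do need IPs for abuse handling, log them, but define a retention period and rotate the logs accordingly.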
Final Thoughts
Apache is a great application server, but a terrible frontend for the modern web. By offloading connection handling to Nginx 1.0, you effectively double the capacity of your server without spending an extra krone on hardware upgrades.
However, software tuning can only fix so much. If your underlying infrastructure suffers from "noisy neighbors" or high latency to the Nordic backbone, your config tweaks are wasted. Check your ping times. If they aren't single digits to Oslo, it's time to move.
Ready to test Nginx 1.0? Deploy a high-performance VPS Norway instance on CoolVDS today and see the difference low-latency infrastructure makes.