Stop Letting MaxClients Kill Your Server
It’s 3:00 AM. Your monitoring system is screaming. Your swap is thrashing because Apache just spawned its 150th child process, each one eating 40MB of RAM just to serve a 2KB favicon. If this sounds like your Tuesday night, you are doing it wrong.
The traditional LAMP stack is robust, but it is heavy. In 2010, hardware is getting faster, but the "Digg Effect" or a link from massive Norwegian news sites like VG.no can still melt a standard VPS instantly. The solution isn't just throwing more money at RAM; it's architectural. It's time to put Nginx in front of Apache.
The Architecture: Why Reverse Proxy?
Apache is fantastic at processing PHP. It is terrible at holding connections open for slow clients: with the prefork MPM, every idle Keep-Alive connection ties up an entire child process and its memory. Nginx, on the other hand, is an event-based beast; a single worker handles thousands of concurrent connections in a few megabytes of RAM.
By setting up Nginx as a reverse proxy, Nginx handles the heavy lifting of client connections, serves static files (images, CSS, JS) instantly, and only passes the dynamic PHP requests to Apache on the backend. This is the setup we use to stabilize high-traffic sites on CoolVDS Xen instances.
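The arithmetic behind the 3:00 AM scenario is simple: with the prefork MPM, worst-case memory is MaxClients times the size of one child. The values below are illustrative, not distribution defaults:

# httpd.conf (prefork MPM) - illustrative numbers only
<IfModule prefork.c>
    MaxClients       150    # 150 children x ~40MB each = ~6GB worst case
    KeepAlive        On
    KeepAliveTimeout 15     # an idle browser pins a whole child for 15 seconds
</IfModule>
# On a 1-2GB VPS that worst case lands straight in swap.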
The Setup
We assume you are running CentOS 5.5 or Ubuntu 10.04 Lucid Lynx. You'll keep Apache running but move it to port 8080.
1. Reconfigure Apache (httpd.conf/ports.conf)
Change the listen port:
Listen 127.0.0.1:8080
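While you are editing the Apache side, it pays to shrink it now that Nginx owns the client connections. A rough sketch, with values you should adjust to your own RAM (the vhost names are placeholders):

# httpd.conf on CentOS (ports.conf + sites-available on Ubuntu)
NameVirtualHost 127.0.0.1:8080
<VirtualHost 127.0.0.1:8080>
    ServerName   example.no
    DocumentRoot /var/www/html
</VirtualHost>

KeepAlive  Off    # Nginx handles keep-alive to the browser now
MaxClients 30     # only PHP requests reach Apache, so far fewer workers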
2. Install Nginx
If you are on CentOS, you might need the EPEL repository or build from source (recommended for the latest 0.8.x stable branch).
yum install nginx
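On Ubuntu 10.04 the package is in the standard repositories, so apt-get works out of the box. Whichever route you take, sanity-check the config and make sure Nginx starts on boot (commands assume the stock packages and init scripts):

# Ubuntu / Debian
apt-get install nginx

# Both distros: test the config, start, enable at boot
nginx -t
service nginx start       # or /etc/init.d/nginx start
chkconfig nginx on        # CentOS; on Ubuntu: update-rc.d nginx defaults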
The Configuration
Here is a battle-tested server block, tuned for a standard VPS environment. Drop it inside the http { } context of nginx.conf (or an included vhost file); it handles the proxy pass logic.
server {
    listen      80;
    server_name example.no www.example.no;

    # Serve static files directly - huge RAM saver
    location ~* \.(jpg|jpeg|gif|png|css|js|ico|xml)$ {
        access_log    off;
        log_not_found off;
        expires       30d;
        root          /var/www/html;
    }

    # Pass everything else to Apache
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host            $host;
        proxy_set_header X-Real-IP       $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # Timeouts to prevent hanging connections
        proxy_connect_timeout 90;
        proxy_send_timeout    90;
        proxy_read_timeout    90;
    }
}
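After a reload, verify that both paths behave as expected. The file names below are placeholders for something that actually exists on your site:

nginx -t && service nginx reload

# Static file: served straight by Nginx, with a far-future Expires header
curl -I http://example.no/logo.png

# Dynamic request: proxied through to Apache on port 8080
curl -I http://example.no/index.php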
Pro Tip: Don't forget to install mod_rpaf on Apache. Without it, Apache will see all traffic coming from 127.0.0.1 (localhost) instead of the real user IP. This will break your geo-targeting and log analysis.
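A minimal rpaf.conf sketch, assuming the stock mod_rpaf 0.6 directive names and that Nginx on localhost is the only proxy in front of Apache:

# /etc/httpd/conf.d/rpaf.conf (mod_rpaf 0.6)
LoadModule rpaf_module modules/mod_rpaf-2.0.so
RPAFenable      On
RPAFsethostname On            # restore the Host header for vhost matching
RPAFproxy_ips   127.0.0.1     # only trust our local Nginx
RPAFheader      X-Forwarded-For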
Latency and The Norwegian Context
Why does this matter for a server in Oslo? Latency. When a user connects from Tromsø or Bergen, the round-trip time (RTT) affects how snappy the site feels. TCP handshakes take time.
Nginx also copes with slow links far more gracefully than Apache: it buffers the response from the backend, so Apache's worker is freed the moment the page is generated while Nginx spoon-feeds the bytes over the slow last mile. Furthermore, if you are hosting sensitive customer data, you need to be aware of the Personal Data Act (Personopplysningsloven). Keeping data within Norwegian borders isn't just about speed; it's about compliance with the Datatilsynet guidelines. Hosting on US-based clouds (like early AWS zones) can introduce legal grey areas regarding Safe Harbor.
| Feature | Apache Alone | Nginx Reverse Proxy |
|---|---|---|
| Memory Footprint | High (Prefork model) | Low (Event model) |
| Static Files | Slow, blocks threads | Instant, non-blocking |
| Concurrent Users | Struggles > 200 | Handles > 5,000 easily |
Hardware Matters: The CoolVDS Standard
Software optimization can only go so far. If your underlying storage is slow, your database will lock up regardless of your web server config. Many VPS providers oversell their nodes using OpenVZ, meaning your "guaranteed" RAM isn't actually there when your neighbor gets a traffic spike.
At CoolVDS, we rely on Xen virtualization. This offers true hardware isolation. We also utilize high-performance 15k RPM SAS RAID-10 arrays. While SSDs are starting to appear in consumer laptops, they aren't reliable enough for enterprise server write-cycles yet. Our SAS arrays provide the low-latency I/O needed for database-heavy workloads without the risk of early flash memory burnout.
Don't let I/O wait times kill your SEO rankings. Configure Nginx correctly, and ensure your foundation is solid.
Next Steps
Ready to harden your stack? Deploy a CentOS 5 instance on CoolVDS today. We offer direct peering at NIX (Norwegian Internet Exchange) for the lowest possible latency to your customers.