Stop Letting Apache MaxClients Kill Your Server
It is 3:00 AM. You receive a frantic call. Your client's e-commerce site—running on a bloated Magento installation or a heavy vBulletin forum—has hit the front page of a major news outlet or Digg. The server is unresponsive. You SSH in, run top, and see the horror: Apache is spawning hundreds of child processes, each consuming 40MB of RAM, until the machine hits swap and dies a slow, painful death.
This is the classic C10k problem. And in late 2009, throwing more hardware at it is not the answer. Architecture is.
If you are still serving static assets (images, CSS, JS) through Apache's heavy prefork MPM, you are doing it wrong. The solution is placing Nginx in front of Apache. Nginx handles the connections and static files with an event-driven architecture, while Apache sits safely behind it, doing what it does best: processing heavy PHP scripts.
The Architecture: Nginx + mod_rpaf
In this setup, Nginx listens on port 80. It serves all static content directly from disk (using the sendfile syscall). For dynamic requests (PHP), it proxies the traffic to Apache, which we move to port 8080.
Why does this matter for your hosting choice? Because Nginx is lightweight, but it requires stable I/O to be effective. At CoolVDS, we utilize high-performance RAID-10 SAS arrays and Xen virtualization. Unlike OpenVZ, Xen guarantees your RAM is actually yours. When Nginx needs buffers, they are there. No "failcnt" errors in /proc/user_beancounters.
1. Install Nginx (The EPEL Way)
Don't compile from source unless you need specific modules. On CentOS 5, use the EPEL repository to get a stable 0.7.x release.
# rpm -Uvh http://download.fedora.redhat.com/pub/epel/5/i386/epel-release-5-3.noarch.rpm
# yum install nginx
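Once the package is in, confirm the version and wire Nginx into the boot sequence. This assumes the standard init script shipped with the EPEL package on CentOS 5:

```shell
# Confirm the installed version (EPEL should give you a 0.7.x build)
nginx -v

# Start Nginx and make sure it comes back after a reboot
service nginx start
chkconfig nginx on
chkconfig --list nginx
```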
2. Configure Nginx as a Reverse Proxy
Edit /etc/nginx/nginx.conf. We need to define the proxy parameters to ensure headers are passed correctly. If you don't do this, Apache will think all traffic is coming from 127.0.0.1, breaking your log analysis and IP-based security.
server {
    listen 80;
    server_name example.no www.example.no;

    # Serve static files directly (note the escaped dot in the regex --
    # an unescaped dot matches any character and can swallow URLs you
    # meant to proxy)
    location ~* \.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js)$ {
        root /var/www/html;
        expires 30d;
    }

    # Pass everything else to Apache
    location / {
        proxy_pass http://127.0.0.1:8080/;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        client_max_body_size 10m;
        client_body_buffer_size 128k;

        proxy_connect_timeout 90;
        proxy_send_timeout 90;
        proxy_read_timeout 90;

        proxy_buffer_size 4k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
    }
}
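A typo in nginx.conf takes down every site behind the proxy, so always validate before reloading:

```shell
# Test the configuration without touching the running process
nginx -t

# If the syntax check passes, apply the change gracefully
service nginx reload
```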
3. Reconfigure Apache
You must change Apache's Listen port. Edit /etc/httpd/conf/httpd.conf:
#Listen 80
Listen 127.0.0.1:8080
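Order matters during the switchover: Apache has to release port 80 before Nginx can bind to it. A sketch of the sequence, assuming the standard CentOS init scripts:

```shell
# Restart Apache so it gives up port 80 and binds to loopback:8080
service httpd restart

# Confirm Apache is now listening only on 127.0.0.1:8080
netstat -tlnp | grep 8080

# Now Nginx can take over port 80
service nginx start
```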
Crucial Step: Install mod_rpaf. Without this Apache module, your access logs will show the proxy IP (127.0.0.1) for every visitor, rendering stats useless and effectively blinding any IP-blocking scripts you have.
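After building the module (a packaged RPM or a quick apxs compile both work), a minimal configuration looks like this. The directive names below are from mod_rpaf 0.6; check the README that ships with your build:

```apache
# /etc/httpd/conf.d/mod_rpaf.conf -- restore the real client IP
LoadModule rpaf_module modules/mod_rpaf-2.0.so
RPAFenable On
# Rewrite the hostname from the proxied Host header
RPAFsethostname On
# Only trust the X-Forwarded-For header when it comes from Nginx
RPAFproxy_ips 127.0.0.1
RPAFheader X-Forwarded-For
```

With this in place, Apache's access logs and any IP-based .htaccess rules see the visitor's address again instead of 127.0.0.1.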
The Latency Factor: Why Location Matters
You can optimize your config files all day, but you cannot defeat the speed of light. If your target audience is in Oslo, Bergen, or Trondheim, hosting your server in Texas is negligence.
Pro Tip: Data privacy is tightening. With the Personopplysningsloven (Personal Data Act) enforced by Datatilsynet, keeping customer data within Norwegian borders is not just about millisecond latency—it is about compliance and trust.
When you ping a server hosted on the US West Coast from Oslo, you are looking at 140ms+. Hosted on CoolVDS in our Oslo datacenter, peered directly at NIX (Norwegian Internet Exchange), that drops to <10ms. For a PHP application making multiple database queries per page load, that latency compounds.
War Story: The "VG" Spike
Last month, we migrated a client running a busy community portal from a generic shared host to a CoolVDS Xen VPS. They were crashing daily at peak hours (19:00 - 21:00). Their Apache MaxClients was set to 150, consuming 4GB of RAM. The server was swapping hard.
We didn't add RAM. We implemented the Nginx proxy above.
The result: RAM usage dropped to 600MB. The CPU load average went from 15.0 to 0.8. The site handled a traffic spike from a link on a major Norwegian tabloid without a single dropped connection.
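With Nginx absorbing the slow clients and static requests, Apache no longer needs 150 children sitting around. The prefork values below are illustrative, not the client's exact config; tune against your own memory-per-child number from ps or top:

```apache
# /etc/httpd/conf/httpd.conf -- prefork MPM, illustrative starting point
# ~20 children x ~30MB per child keeps Apache around the 600MB mark
<IfModule prefork.c>
    StartServers          5
    MinSpareServers       5
    MaxSpareServers      10
    MaxClients           20
    MaxRequestsPerChild 2000
</IfModule>
```

The counterintuitive part is that lowering MaxClients raises throughput: requests queue briefly in Nginx instead of pushing the box into swap, where every request gets slower.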
Final Thoughts
Nginx 0.7 is stable, production-ready, and arguably necessary for the modern web. Don't wait for your server to crash during the Christmas rush. Evaluate your architecture now.
If you need a sandbox to test this configuration, spin up a CoolVDS instance. We offer pure Xen virtualization and enterprise-grade hardware that doesn't choke when you start tuning your buffers.