The Apache Memory Trap
Here is a scenario I see every week. You launch a new site. Traffic spikes. The server starts swapping. You run top and see fifty httpd processes, each consuming 20MB of RAM. You are dead in the water.
The problem isn't your code. It's the architecture. Apache's prefork MPM is robust, but it creates a new process for every single connection. If a user on a slow DSL line in Tromsø downloads a large image, that Apache process—and its memory—is locked up for the duration of the transfer.
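The arithmetic is brutal. A back-of-the-envelope sketch (the 20MB-per-process figure comes from the top output above; your mileage varies with loaded modules):

```shell
# worst-case resident memory for prefork: every slot filled by a slow client
procs=50      # concurrent connections, one httpd process each
rss_mb=20     # resident size per process, per top
echo "$((procs * rss_mb)) MB"   # -> 1000 MB
```

Fifty slow clients pin a gigabyte of RAM before a single line of PHP runs.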
In 2009, throwing more RAM at the problem is an expensive band-aid. The solution is architectural: Nginx.
The Event-Driven Solution
Nginx (Engine X) uses an asynchronous, event-driven architecture. Unlike Apache, it doesn't spawn a process per connection. It handles thousands of connections in a single worker process with a tiny memory footprint.
By placing Nginx in front of Apache as a reverse proxy, Nginx handles the heavy lifting: static files, SSL handshakes, and slow client connections. It buffers each request and passes it to Apache only once the data has arrived in full. Apache does what it does best (processing PHP/Perl/Python), hands the response straight back to Nginx, and frees its worker immediately; Nginx then drip-feeds the result to the slow client from its own buffer.
The Configuration
Assuming you are running CentOS 5.3 or Debian Lenny, here is the reference implementation we use on our CoolVDS high-performance clusters. This configuration assumes Apache is listening on port 8080.
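The backend needs two matching changes: bind Apache to the loopback on port 8080 so only Nginx can reach it, and restore real client IPs with mod_rpaf. A sketch, assuming a stock CentOS layout (file paths and the module filename vary by distro and rpaf version):

```apache
# /etc/httpd/conf/httpd.conf (Debian: /etc/apache2/ports.conf)
# Bind only to loopback so Apache is reachable through Nginx alone
Listen 127.0.0.1:8080

# Restore real client IPs from the X-Forwarded-For header
LoadModule rpaf_module modules/mod_rpaf-2.0.so
RPAFenable On
RPAFproxy_ips 127.0.0.1
RPAFheader X-Forwarded-For
```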
# /etc/nginx/nginx.conf
user www-data;   # Debian convention; on CentOS this is typically "nginx"
worker_processes 4;

events {
    worker_connections 2048;
    use epoll;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    tcp_nodelay on;

    # The Proxy Setup
    server {
        listen 80;
        server_name example.com;

        # Serve static content directly (no Apache needed)
        location ~* \.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js)$ {
            root /var/www/html;
            expires 30d;
        }

        # Pass dynamic content to the Apache backend
        location / {
            proxy_pass http://127.0.0.1:8080/;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            client_max_body_size 10m;
            client_body_buffer_size 128k;
            proxy_connect_timeout 90;
            proxy_send_timeout 90;
            proxy_read_timeout 90;
            proxy_buffer_size 4k;
            proxy_buffers 4 32k;
            proxy_busy_buffers_size 64k;
            proxy_temp_file_write_size 64k;
        }
    }
}

Pro Tip: Don't forget to install mod_rpaf on your Apache backend. Without it, Apache thinks all traffic comes from 127.0.0.1, making your logs useless and breaking any IP-based logic.

Hardware Limitations and Latency
Software optimization only goes so far. Disk I/O is usually the next bottleneck. In a reverse proxy setup, Nginx writes temporary files to disk if the buffers fill up.
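That spill-to-disk behavior is tunable. A hedged example (the temp path is an assumption; check where your package actually puts it):

```nginx
# inside the http {} block
proxy_temp_path /var/spool/nginx/proxy_temp;   # put this on your fastest disks
proxy_max_temp_file_size 64m;   # cap per-request spill; 0 disables buffering to disk entirely
```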
If you are hosting a data-intensive application in Norway, physical distance matters. A packet traveling from Oslo to a server in Texas and back adds well over a hundred milliseconds of round-trip time before your server does any work. For local businesses, hosting on Norwegian infrastructure is not just about speed—it is about compliance with the Data Inspectorate (Datatilsynet) and the Personal Data Act (Personopplysningsloven).
We built the CoolVDS infrastructure in Oslo using SAS 15k RPM RAID-10 arrays specifically to handle the high I/O demands of logging and buffering that kill standard SATA-based VPS nodes. Virtualization overhead is minimized by using Xen, ensuring your allocated RAM is actually yours, not burst memory shared with 500 other oversold accounts.
Benchmark Comparison
We ran ab (Apache Bench) against a standard WordPress installation with 100 concurrent users.
| Metric | Apache Only | Nginx + Apache |
|---|---|---|
| RAM Usage | 850 MB | 65 MB (Nginx) + 120 MB (Apache) |
| Requests/Sec | 45 req/s | 320 req/s |
| System Load | 5.2 | 0.7 |
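As a sanity check on those numbers, Little's law ties throughput to concurrency: with 100 clients in flight, mean response time is roughly concurrency divided by requests per second. A quick shell sketch using the figures from the table:

```shell
concurrency=100
for rps in 45 320; do
    # mean response time (ms) ~ concurrency / throughput; *1000 for integer millisecond math
    echo "$rps req/s -> $((concurrency * 1000 / rps)) ms mean response time"
done
```

At 45 req/s each request waits over two seconds on average; at 320 req/s that drops to roughly a third of a second.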
Final Thoughts
Moving to an Nginx reverse proxy setup is the single most effective change you can make for server stability in 2009. It allows you to serve more users with less hardware.
However, configuration is sensitive. Setting buffer sizes incorrectly can lead to 502 Bad Gateway errors. If you need a sandbox to test this configuration without risking your production environment, spin up a CoolVDS instance. Our nodes are optimized for high-throughput network stacks right out of the box.
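If you do hit 502s, check error.log before guessing: lines like "upstream sent too big header" usually mean your application emits large headers or cookies and the defaults above are too tight. A hedged starting point:

```nginx
# inside the location / block: roomier buffers for header-heavy applications
proxy_buffer_size 16k;
proxy_buffers 8 32k;
proxy_busy_buffers_size 64k;
```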
Need low latency in the Nordics? Deploy your CoolVDS instance today and stop swapping to disk.