
Scaling LAMP: Saving Your Server from the Apache Death Spiral with Nginx

It’s February 2010. The Winter Olympics in Vancouver are generating massive traffic spikes for news portals here in Norway, and I’m seeing a familiar pattern across the industry: servers are melting. Not because the hardware is bad, but because Apache is greedy.

If you are running a standard LAMP stack (Linux, Apache, MySQL, PHP) on a VPS with 512MB or even 1GB of RAM, you are sitting on a ticking time bomb. Apache’s prefork MPM creates a new process for every connection. When a user on a slow mobile connection (EDGE/3G) grabs a file, that heavy Apache process sits there, eating 20MB+ of RAM, waiting for the transfer to finish.

Multiply that by 200 concurrent users. You run out of RAM. The kernel starts swapping to the disk. Your load average hits 50. Your site dies. This is the C10k problem, and throwing money at more RAM isn't the smart fix. Architecture is.
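To see what each Apache child actually costs you, a rough check like the one below averages the resident set size of the running workers. This is only a sketch: the process is named httpd on CentOS and apache2 on Debian, so adjust accordingly.

# Average resident memory per Apache worker (RSS is reported in KB)
ps -o rss= -C httpd | awk '{ sum += $1; n++ } END { if (n) printf "%d workers, avg %.1f MB each\n", n, sum/n/1024 }'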

The Solution: Nginx as the Bouncer

The strategy is simple: put Nginx in front. Nginx is event-driven and asynchronous. It doesn't care if a client takes 10 seconds to download a 50KB image; it holds that connection with a tiny memory footprint and only hands a request to Apache when PHP processing is actually needed. Just as important, Nginx buffers Apache's response: the heavy Apache worker delivers the page to Nginx in milliseconds and is freed, while Nginx spoon-feeds the bytes to the slow client.

I recently migrated a client in Oslo—a high-traffic e-commerce store—from a standalone Apache setup to this Nginx reverse proxy architecture. Their load average dropped from 15.0 to 0.4 overnight. Here is how we did it.

1. The Architecture

We bind Nginx to port 80 (public) and move Apache to port 8080 (local only). Nginx serves all static files (images, CSS, JS) directly from the disk. It only proxies dynamic PHP requests to Apache.
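On the Apache side that means changing the Listen directive so it only answers on loopback. A minimal sketch for Apache 2.2, assuming your vhosts currently listen on *:80 (the file is /etc/apache2/ports.conf on Debian, /etc/httpd/conf/httpd.conf on CentOS):

# Bind Apache to localhost only; Nginx owns the public port 80
Listen 127.0.0.1:8080
NameVirtualHost 127.0.0.1:8080

# ...and update each virtual host to match:
<VirtualHost 127.0.0.1:8080>
    ServerName   example.no
    DocumentRoot /var/www/html
</VirtualHost>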

2. The Configuration

Assuming you are running CentOS 5 or Debian Lenny, grab Nginx 0.7.x stable. Don't touch the 0.8 development branch for production yet unless you like waking up at 3 AM.
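Distribution packages lag behind, so whether you pull Nginx from EPEL, lenny-backports, or a source tarball, confirm what actually got installed before you build on it:

# Print the version, then the compile-time options (any 0.7.x stable release is fine)
nginx -v
nginx -V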

Here is the nginx.conf block that matters. We need to pass the correct headers; otherwise Apache will think every request comes from localhost (127.0.0.1), which ruins your logs and breaks IP-based security.

server {
    listen       80;
    server_name  example.no www.example.no;

    # Serve static files directly - huge performance boost
    location ~* \.(jpg|jpeg|gif|png|css|js|ico|xml)$ {
        root   /var/www/html;
        expires 30d;
    }

    # Pass PHP to Apache
    location / {
        proxy_pass         http://127.0.0.1:8080/;
        proxy_redirect     off;

        # CRITICAL: Pass the real IP to Apache
        proxy_set_header   Host             $host;
        proxy_set_header   X-Real-IP        $remote_addr;
        proxy_set_header   X-Forwarded-For  $proxy_add_x_forwarded_for;
        
        # Timeouts for slow backend scripts
        proxy_connect_timeout      90;
        proxy_send_timeout         90;
        proxy_read_timeout         90;
    }
}
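
Before flipping DNS or firewall rules, make sure the syntax parses and both daemons are listening where you expect. The service names below are assumptions; adjust for your init scripts:

# Validate the config, then reload without dropping connections
nginx -t
/etc/init.d/nginx reload

# Nginx should own :80, Apache should only appear on 127.0.0.1:8080
netstat -tlnp | grep -E ':80 |:8080 '
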
Pro Tip: Install mod_rpaf on your Apache backend. It reads the forwarded client address that Nginx sets (X-Forwarded-For by default; point RPAFheader at X-Real-IP if you prefer) and rewrites REMOTE_ADDR on the backend. Without it, your logs show only 127.0.0.1 and the allow/deny rules in .htaccess will fail.
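A minimal mod_rpaf stanza might look like the following; the module filename and whether it lives in its own conf file depend on how your package installed it, so treat this as a sketch:

# LoadModule path varies by distro/package
LoadModule rpaf_module modules/mod_rpaf-2.0.so

<IfModule mod_rpaf.c>
    RPAFenable       On
    RPAFsethostname  On
    # Only trust the proxy on this box
    RPAFproxy_ips    127.0.0.1
    # Must match the header Nginx sets in the proxy block above
    RPAFheader       X-Forwarded-For
</IfModule>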

Hardware Matters: The I/O Bottleneck

While Nginx solves the memory issue, you still have to read files from the disk. In a virtualized environment, "noisy neighbors" are the enemy. If another VPS on the same physical host is hammering the disk, your static file delivery will stutter.
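A quick way to confirm you are actually I/O bound, rather than CPU or RAM starved, is to watch iowait and per-device utilisation for a minute (iostat ships with the sysstat package):

# High %iowait plus %util near 100 on a device means the disk is the bottleneck
iostat -xk 5 3

# Non-zero si/so columns mean you are already swapping
vmstat 5 3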

This is where virtualization tech makes or breaks you. Many budget hosts use OpenVZ, where resources are oversold and kernel constraints are shared. For production workloads, I only trust Xen.

At CoolVDS, we use Xen HVM virtualization. This ensures strict isolation. Furthermore, we use enterprise-grade RAID 10 SAS storage. It’s not cheap, but when your database is trying to write logs and Nginx is trying to read images simultaneously, that disk I/O throughput is the only thing keeping your latency low.

Norwegian Compliance & Latency

If you are hosting for Norwegian users, physical location matters because you cannot cheat the speed of light. Hosting in the US adds 100ms+ of round-trip latency. Hosting in Germany adds 30-40ms. Hosting in Oslo with peering at NIX (the Norwegian Internet Exchange) keeps you under 10ms.
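These numbers are easy to verify from a machine on a Norwegian access network; round-trip time is paid several times per page load, so it adds up fast (example.no below is just the placeholder domain from the config above):

# Round-trip latency to your server
ping -c 10 www.example.no

# Per-hop latency, useful for spotting routes that detour out of the country
mtr --report --report-cycles 10 www.example.no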

Furthermore, we have the Datatilsynet (Data Inspectorate) to worry about. The Personopplysningsloven (Personal Data Act) is strict about where user data lives. By keeping your logs and database on CoolVDS servers located physically in Norway, you simplify your compliance overhead significantly compared to using Safe Harbor-reliant US hosts.

Summary of Benefits

Feature                  | Apache Only       | Nginx + Apache (CoolVDS)
Memory per Connection    | High (15MB+)      | Low (2MB)
Static File Speed        | Slow (blocking)   | Fast (non-blocking)
High Load Stability      | Crash / swap      | Stable
Storage Backend          | Standard SATA     | RAID 10 SAS / SSD caching

Don't wait for your server to crash during the next traffic spike. Reconfiguring your stack takes an hour, but the stability lasts forever.

Ready to build a bulletproof stack? Deploy a Xen-based instance on CoolVDS today and get direct connectivity to NIX.