Scaling Past C10k: High-Performance Nginx Reverse Proxy Configuration Guide

The Apache Bottleneck: Why Your RAM is Vanishing

If you are still serving static assets through Apache with mod_php loaded, you are actively setting money on fire. I watched a client’s server hit the swap partition last week simply because a Digg effect spike caused Apache to spawn 200 child processes. Each one was consuming 30MB of RAM just to serve a 4KB JPEG file. That is architectural suicide.

We are in 2011. The C10k problem (handling 10,000 concurrent connections) isn't theoretical anymore; it's the baseline for any serious application. While Apache is excellent for dynamic content processing, its thread/process-based model is too heavy for the front line. The solution isn't buying more RAM—it's changing your edge architecture.

Enter Nginx (Engine-X). Unlike Apache, Nginx uses an asynchronous, event-driven architecture. It doesn't spawn a process for every connection. It handles thousands of connections in a single worker process with a constant memory footprint. By placing Nginx in front of your heavy backend (Apache/mod_php, Python, or Ruby on Rails), you create a shield that serves static content instantly and buffers slow client connections.

The Architecture: The "Valkyrie" Setup

In the Nordic hosting market, we often deal with high-traffic media sites. The setup I recommend to every CTO in Oslo is what I call the Valkyrie configuration:

  • Front-end (Port 80): Nginx 1.0.4 (Stable). Handles SSL, static files, and GZIP compression.
  • Back-end (Port 8080 or Unix Socket): Apache or PHP-FPM. Handles the heavy lifting of business logic.
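For this split to work, the backend must vacate port 80. A minimal sketch for Apache on a Debian/Ubuntu layout (the ports.conf path is the Debian convention; adjust for CentOS):

```apache
# /etc/apache2/ports.conf -- move Apache behind the Nginx shield.
# Binding to 127.0.0.1 keeps the backend unreachable from the outside world,
# so only the local Nginx proxy can talk to it.
NameVirtualHost 127.0.0.1:8080
Listen 127.0.0.1:8080
```

Binding to the loopback interface is the important part: if Apache still listens on a public IP, attackers can bypass your proxy entirely.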

This setup allows CoolVDS instances to punch way above their weight class. While our KVM-based virtualization provides dedicated resources, software efficiency determines if you can handle 500 or 5,000 users per second.

Pro Tip: When hosting in Norway, latency to the NIX (Norwegian Internet Exchange) is critical. A lean Nginx proxy can serve a cached hit in under 2ms to a local user if your datacenter peering is solid. This is why location matters as much as CPU cycles.
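If you want Nginx to answer those hits from its own cache instead of touching the backend at all, a minimal proxy_cache sketch looks like this (the zone name, path, and sizes are illustrative, not canonical):

```nginx
# In the http {} block: define a cache area on disk with a 10 MB
# in-memory key zone, capped at 256 MB of cached objects.
proxy_cache_path  /var/cache/nginx  levels=1:2  keys_zone=edge:10m  max_size=256m;

# In the location / {} block that proxies to the backend:
proxy_cache        edge;
proxy_cache_valid  200 302  10m;   # cache successful responses for 10 minutes
proxy_cache_valid  404      1m;    # cache misses briefly to absorb bot storms
```

Even a one-minute cache on hot pages can collapse thousands of backend hits into a handful of origin requests.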

Step 1: Installation and Basic Configuration

This guide assumes a standard CentOS 5 or Ubuntu 10.04 LTS (Lucid Lynx) system. Do not rely on the default repositories; they often carry outdated versions like 0.7.x. On Ubuntu, add the stable PPA; on CentOS, compile from source if you need specific modules.

sudo add-apt-repository ppa:nginx/stable
sudo apt-get update
sudo apt-get install nginx

Once installed, we strip down the default configuration. We want raw speed. Open /etc/nginx/nginx.conf.
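As a starting point, a stripped-down nginx.conf might look like the sketch below. The values are sensible 2011 defaults, not gospel; tune them to your workload:

```nginx
user              www-data;
worker_processes  2;               # match your CPU core count

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    sendfile      on;              # kernel-level file transfer for static assets
    tcp_nopush    on;              # send headers and file start in one packet
    keepalive_timeout  15;

    gzip          on;
    gzip_types    text/plain text/css application/x-javascript text/xml;

    include /etc/nginx/sites-enabled/*;
}
```

Note that gzip_types must be listed explicitly; by default Nginx only compresses text/html.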

Step 2: The Reverse Proxy Config

Here is the war-tested configuration I use for high-load production environments. This goes into your site's server block (e.g., /etc/nginx/sites-available/default).

server {
    listen 80;
    server_name example.no www.example.no;

    # 1. Serving Static Content Directly
    # This bypasses the backend entirely for assets.
    location ~* \.(jpg|jpeg|gif|png|css|js|ico|xml)$ {
        access_log        off;
        log_not_found     off;
        expires           30d;
        root              /var/www/public_html;
    }

    # 2. Proxying Dynamic Requests to Backend
    location / {
        proxy_pass        http://127.0.0.1:8080;
        proxy_redirect    off;

        # 3. Header Forwarding (CRITICAL)
        # Without this, your backend logs will only see 127.0.0.1 as the visitor IP.
        proxy_set_header  Host             $host;
        proxy_set_header  X-Real-IP        $remote_addr;
        proxy_set_header  X-Forwarded-For  $proxy_add_x_forwarded_for;

        # 4. Buffer Optimization
        # Prevents Nginx from buffering the response to disk if it fits in RAM.
        client_max_body_size       10m;
        client_body_buffer_size    128k;
        proxy_connect_timeout      90;
        proxy_send_timeout         90;
        proxy_read_timeout         90;
        proxy_buffers              32 4k;
    }
}

Step 3: Handling the "Slow Client" Attack

One of the hidden benefits of using Nginx on a CoolVDS server is protection against Slowloris attacks. Apache keeps a worker open while a client sends data byte-by-byte. Nginx buffers the entire request before sending it to the backend. This means your expensive Apache threads are only engaged when the request is fully ready to be processed.
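To make the shield even tighter, you can cut off clients that dribble their requests in too slowly. The values below are a starting point, not canonical; aggressive timeouts can hurt legitimate users on poor mobile links:

```nginx
# In the server {} or http {} block: drop clients that stall.
client_header_timeout  10s;   # max time to send the full request headers
client_body_timeout    10s;   # max gap between body reads, not total time
send_timeout           10s;   # max gap between writes to a slow reader
keepalive_timeout      15s;   # do not hold idle connections forever
```

Note that client_body_timeout applies between successive reads, which is exactly the byte-by-byte pattern Slowloris exploits.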

To tune this, ensure your worker_processes matches your CPU cores (check /proc/cpuinfo), and crank up the worker_connections.

worker_processes  2; # Set to number of CPU cores
events {
    worker_connections  1024;
    use epoll; # Essential for Linux 2.6+ kernels used on CoolVDS
}
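A quick way to derive the right worker_processes value on the shell (the grep pattern assumes a standard Linux /proc/cpuinfo):

```shell
# Count logical cores and print the matching Nginx directive.
cores=$(grep -c '^processor' /proc/cpuinfo)
echo "worker_processes  ${cores};"
```

On a dual-core KVM instance this prints "worker_processes  2;", which you can paste straight into nginx.conf.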

Storage I/O: The Hidden Bottleneck

You can tune Nginx all day, but if your disk I/O is thrashing, your server will stall. In 2011, many VPS providers are still overselling slow SATA drives. When Nginx buffers a large request to a temporary file, write speed becomes the bottleneck.

This is where the underlying infrastructure matters. At CoolVDS, we utilize high-performance Enterprise RAID-10 arrays. Unlike OpenVZ containers where "noisy neighbors" can steal your I/O, our KVM virtualization ensures that your disk operations are isolated. If you are logging to disk or caching proxy responses, that seek time is the difference between a 200ms load time and a 2s load time.
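If you would rather keep Nginx out of the disk path entirely for proxied responses, you can cap or disable temp-file spill. The trade-off: responses larger than the in-memory proxy buffers are then passed through synchronously, tying the backend connection up for the duration:

```nginx
# In the location / {} block:
proxy_max_temp_file_size  0;    # never spill proxied responses to disk

# Or allow a limited spill onto a partition you control:
# proxy_max_temp_file_size  10m;
# proxy_temp_path           /var/spool/nginx_temp;
```

On fast storage the default behaviour is usually fine; this knob matters most on oversold SATA.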

Legal Compliance in Norway

A quick note for my Norwegian colleagues: Under the Personal Data Act (Personopplysningsloven), you are responsible for the security of user logs. When configuring Nginx logs (access.log), ensure they are rotated correctly and stored on a partition with strict permissions. If you are proxying traffic from outside the EEA, be mindful of where that data is physically stored. Hosting on CoolVDS servers located physically in Oslo simplifies this compliance headache significantly compared to using US-based cloud giants.
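A minimal logrotate policy that keeps access logs tidy and locked down might look like this. The eight-week retention is an example, not legal advice; check your own retention obligations under the Act:

```
# /etc/logrotate.d/nginx
/var/log/nginx/*.log {
    weekly
    rotate 8
    compress
    delaycompress
    missingok
    create 0640 www-data adm
    sharedscripts
    postrotate
        # USR1 tells Nginx to reopen its log files after rotation
        [ -f /var/run/nginx.pid ] && kill -USR1 `cat /var/run/nginx.pid`
    endscript
}
```

The create 0640 line is what enforces the strict permissions mentioned above; without it, rotated logs may fall back to world-readable defaults.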

Final Thoughts

Switching to Nginx as a reverse proxy is the single most effective upgrade you can make for a LAMP stack in 2011. It lowers RAM usage, increases concurrency, and stabilizes your server under load. But software is only half the equation. You need hardware that doesn't lie to you about resources.

Don't let legacy infrastructure hold back your deployment. Spin up a CoolVDS KVM instance today—backed by solid SSD-cached storage and premium bandwidth—and see what your Nginx config can really do.