
Nginx as a Reverse Proxy: Stop Letting Apache Kill Your Server Load


Your Apache Server Is Choking. Here Is How To Fix It.

I looked at a client's top output yesterday. It was a bloodbath. Load average was hitting 45.0 on a quad-core box. The culprit? Hundreds of Apache processes, each consuming 30MB of RAM, all fighting to serve a 4KB style.css file. This is madness.

If you are still letting Apache handle static files in 2012, you are doing it wrong. The prefork model cannot handle the concurrent connections required by modern web apps. The solution is not throwing more RAM at the problem. The solution is architecture.

Enter Nginx. By placing it in front of your heavy backend (Apache/mod_php, Python, or Ruby), you offload the concurrency management to an event-driven engine designed to handle 10,000 connections without breaking a sweat. Here is how we configure this at CoolVDS for our high-traffic clients in Norway.

The Architecture: Nginx + Proxy Pass

The goal is simple: Nginx sits on port 80. It serves images, CSS, and JS directly from the disk (or memory). It only forwards requests to the backend (running on port 8080 or a Unix socket) when dynamic processing is actually needed.

Here is a battle-tested server block, suitable for CentOS 6 or Debian Squeeze. Drop it in /etc/nginx/conf.d/ (or inside the http block of /etc/nginx/nginx.conf):

server {
    listen 80;
    server_name example.no;
    root /var/www/vhosts/example.no/httpdocs;

    # Serve static files directly. No Apache overhead.
    location ~* \.(jpg|jpeg|gif|png|css|js|ico|xml)$ {
        access_log        off;
        log_not_found     off;
        expires           30d;
    }

    # Pass everything else to the backend
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        
        # Timeouts are critical for preventing hung processes
        proxy_connect_timeout 60;
        proxy_send_timeout 90;
        proxy_read_timeout 90;
    }
}
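For this to work, the backend Apache must give up port 80 and bind to 8080 on loopback only. A minimal sketch of the change for Debian's /etc/apache2/ports.conf (the path and directive names vary between distros; adjust to match your layout):

```apache
# /etc/apache2/ports.conf -- Apache hides behind Nginx on loopback:8080
NameVirtualHost 127.0.0.1:8080
Listen 127.0.0.1:8080
```

One side effect: Apache now only ever sees connections from 127.0.0.1, so install mod_rpaf (or log the X-Forwarded-For header we set above) if you want real client addresses in your access logs.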

Storage Latency: The Silent Killer

You can tune Nginx all day, but if your disk I/O is saturated, your site will hang. This is where most generic VPS providers fail. They put forty customers on a single SATA RAID array. When one neighbor runs a backup, your database locks up.

In high-performance hosting, spindle drives are dead. At CoolVDS, we have moved aggressively to Pure SSD storage arrays. The random read/write speeds of Solid State Drives are essential for databases and high-traffic logging. While others are talking about SAS 15k drives, we are seeing I/O wait times drop to near zero with SSDs.

Pro Tip: Check your disk latency with iostat -x 1 (from the sysstat package). If %util is constantly near 100% while the CPU is idle, you are I/O bound. Move to an SSD-backed VPS immediately.
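If you want to script that check for a cron job, here is a rough sketch. It assumes the extended iostat layout where %util is the last column and device names start with sd/vd/xvd; verify against your own sysstat version before trusting it:

```shell
#!/bin/sh
# ioflag: read `iostat -x` output on stdin and flag devices whose
# %util (last field) exceeds 90 -- a sign you are I/O bound.
# Usage: iostat -x 1 2 | ioflag
ioflag() {
  awk '/^(sd|vd|xvd)/ { if ($NF + 0 > 90) printf "%s is saturated at %.1f%%\n", $1, $NF }'
}
```

Wire the output into your monitoring and you will know you are I/O bound before your customers do.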

The Norwegian Advantage: Latency and Law

If your target audience is in Oslo, Bergen, or Trondheim, why is your server in Texas? Physics is undefeated. The round-trip time (RTT) from Oslo to Dallas is ~140ms. From Oslo to a server peered at NIX (Norwegian Internet Exchange)? Less than 10ms.
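That ~140ms figure is no accident; it sits close to the physical floor. A back-of-the-envelope check (the ~8,000 km Oslo-Dallas path and ~200,000 km/s speed of light in fibre are rounded assumptions):

```shell
# Theoretical minimum RTT: light covers ~200,000 km/s in fibre,
# so an ~8,000 km path costs 2 * 8000 / 200000 seconds per round trip.
awk 'BEGIN { d = 8000; v = 200000; printf "floor: %.0f ms RTT\n", 2 * d / v * 1000 }'
```

Real routes add detours and router queueing on top of that floor, which is how 80ms of physics becomes 140ms of measured latency. No amount of tuning buys that back; only moving the server does.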

Furthermore, we have to talk about compliance. With the Datatilsynet (Data Inspectorate) enforcing strict interpretations of the Personal Data Act (Personopplysningsloven), you need to know exactly where your logs are stored. Hosting outside the EEA brings legal headaches regarding Safe Harbor. Keeping your data on Norwegian soil, or at least within the EEA, simplifies your compliance strategy significantly.

Comparison: Apache Solo vs. Nginx Reverse Proxy

Feature              | Apache Solo                 | Nginx Proxy + Apache
---------------------|-----------------------------|------------------------
Static File Serving  | Slow (process blocking)     | Instant (event-driven)
Memory Usage         | High (30MB+ per connection) | Low (~2MB overhead)
Max Concurrent Users | ~250-500                    | 10,000+
Storage Backend      | Usually HDD                 | CoolVDS SSD

Conclusion

Stop apologizing for downtime. The combination of Nginx for concurrency and CoolVDS's SSD infrastructure for I/O throughput is the standard for 2012. You get faster page loads, better SEO rankings, and you stay on the right side of Norwegian privacy laws.

Ready to fix your load average? Don't let slow I/O kill your project. Deploy a test instance on CoolVDS today and see the difference pure SSD storage makes.
