Scaling Past the C10k Problem: Nginx Reverse Proxy Guide for High-Traffic Norwegian Sites

Stop Letting Apache Kill Your Server's RAM

It is 2010. We have dual-core CPUs in our pockets and fiber rolling out across Oslo, yet I still see sysadmins configuring production servers like it's 2005. If you are running a high-traffic site—whether it's a Magento store or a busy vBulletin forum—and you are letting Apache handle every single incoming connection, you are doing it wrong.

The math is brutal. Apache's prefork MPM creates a new process for every connection. If you have a PHP script that takes 200ms to execute, and you have 500 concurrent users, Apache is going to try to spawn enough children to handle that. Each child consumes 20MB to 50MB of RAM. Do the math. Your swap file starts thrashing, I/O wait spikes, and your server becomes a brick.
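To make that concrete, here is the back-of-the-envelope version. The 30MB figure is an assumed mid-range value, not a measurement — check the resident size of your own mod_php children with ps before trusting it:

```shell
# Rough worst-case RAM math for prefork (assumed figures, not measured):
CONCURRENT=500        # simultaneous connections holding a child each
MB_PER_CHILD=30       # typical mod_php child, middle of the 20-50MB range
echo "$((CONCURRENT * MB_PER_CHILD)) MB of RAM for Apache children alone"
```

That comes out to 15000MB — roughly 15GB — on a box that probably has 2GB or 4GB. That is where the swap thrashing comes from.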

The solution isn't "buy more RAM." The solution is architecture. We need to put Nginx in front.

The Architecture: Nginx + Apache

We aren't ditching Apache completely: .htaccess support and the vast ecosystem of Apache modules are still useful for the backend logic. But we subscribe to the battle-hardened school of thought here: let the specialist do the job.

Nginx ("Engine-X") uses an event-driven, asynchronous architecture. It doesn't spawn a process per connection; a single worker process handles thousands of connections with very little memory. By placing Nginx on port 80 and Apache on port 8080, Nginx serves the static files (images, CSS, JS) instantly and passes only the heavy PHP requests to Apache.

Step 1: The Backend (Apache)

First, we move Apache off the front line. On your Debian or Ubuntu 10.04 LTS system, edit /etc/apache2/ports.conf:

NameVirtualHost *:8080
Listen 8080
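Every virtual host has to match the new port as well, or it will silently stop matching requests. A minimal sketch — the site name and paths here are illustrative, adapt them to your own layout:

```apache
# /etc/apache2/sites-available/example.no (illustrative path)
<VirtualHost *:8080>
    ServerName example.no
    ServerAlias www.example.no
    DocumentRoot /var/www/example.no/public_html
</VirtualHost>
```

Run apache2ctl configtest before restarting; a vhost still declared as *:80 will earn you a "does NOT match" warning and dead sites.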

You also need to install libapache2-mod-rpaf. Without this, Apache will think all traffic is coming from 127.0.0.1 (localhost). You need the real client IPs for your logs and security plugins.
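On Debian/Ubuntu the module is one apt-get install libapache2-mod-rpaf away. The packaged defaults usually work, but verify that the proxy's IP is listed. The snippet below shows the general shape of /etc/apache2/mods-available/rpaf.conf — check it against the file your package actually ships:

```apache
<IfModule mod_rpaf.c>
    RPAFenable On
    RPAFsethostname On
    RPAFproxy_ips 127.0.0.1
</IfModule>
```

With RPAFproxy_ips set to the Nginx address, mod_rpaf rewrites the client IP from the X-Forwarded-For header that we set in the Nginx config below.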

Step 2: The Frontend (Nginx 0.7.x/0.8.x)

Install the stable version of Nginx. In the standard repositories, you might find 0.7.65, which is solid.

Here is the server block that separates the pros from the amateurs. We serve static assets straight from disk and forward everything else to Apache with the headers set correctly.

server {
    listen 80;
    server_name example.no www.example.no;

    # Serve static files directly - NO Apache involvement
    location ~* \.(jpg|jpeg|gif|png|css|js|ico|xml)$ {
        access_log off;
        expires 30d;
        root /var/www/example.no/public_html;
    }

    # Pass everything else to Apache
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        
        client_max_body_size 10m;
        client_body_buffer_size 128k;
        proxy_connect_timeout 90;
        proxy_send_timeout 90;
        proxy_read_timeout 90;
    }
}

Pro Tip: Watch your worker_processes and worker_connections. A good rule of thumb for 2010 hardware is setting worker_processes to the number of CPU cores you have. Set worker_connections to 1024.
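For reference, the corresponding lines at the top of nginx.conf look like this on a quad-core box — adjust worker_processes to your own core count:

```nginx
worker_processes  4;            # one per CPU core

events {
    worker_connections  1024;   # per worker process
}
```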

Hardware Matters: The CoolVDS Difference

Software optimization can only take you so far. If your underlying storage subsystem is garbage, your database is going to lock up regardless of your Nginx config. This is where the hosting market in Norway is currently divided.

Many providers are still overselling cheap SATA drives on OpenVZ containers. That means you are fighting every other customer on the node for I/O operations. If Neighbor A decides to run a massive backup script, your database latency goes through the roof.

At CoolVDS, we use Xen HVM virtualization. This provides true hardware isolation. We don't oversell RAM. More importantly, we are deploying RAID-10 SAS 15k RPM drives and rolling out early Enterprise SSD storage tiers for database-heavy workloads. The difference in random I/O performance is night and day.

Latency and Jurisdiction

For Norwegian businesses, physical location is critical. Routing traffic through Frankfurt or London adds milliseconds of latency that you don't need. CoolVDS servers are located in Oslo, directly peering with NIX (Norwegian Internet Exchange). You get sub-5ms ping times to most Norwegian ISPs.

Furthermore, adhering to the Personal Data Act (Personopplysningsloven) and Datatilsynet guidelines is much simpler when your data physically resides in Norway. Don't risk compliance issues by hosting client data across the Atlantic.

Final Thoughts

Replacing a standalone Apache setup with an Nginx reverse proxy is the single most effective change you can make to lower your Total Cost of Ownership (TCO). You get more concurrent users on the same hardware.

But remember: a fast car needs a smooth road. Don't put a tuned software stack on a choked network.

Ready to see the difference? Deploy a Debian Lenny or Ubuntu Lucid instance on CoolVDS today. Check out our new SSD storage plans if you need raw I/O speed for your MySQL master node.