
Stop Apache Thrashing: High-Performance Nginx Reverse Proxy Setup on CentOS 5

The Apache Memory Leak Nightmare

It’s 2:00 AM. You receive a text from your monitoring system. Your server load is sitting at 35.0. You SSH in, run top, and see the horror: httpd processes are eating 95% of your RAM, pushing the system into swap death.

This is the classic C10k problem. Apache 2.2 with the prefork MPM dedicates an entire process to each connection. If a client on a slow EDGE connection in rural Norway grabs a 2 MB image, that heavy Apache process sits there, blocked, consuming 30 MB of RAM for 45 seconds. It's inefficient. It's expensive.

You don't need a bigger server. You need a smarter architecture. You need Nginx.

The Architecture: Nginx as the Bouncer

We are not replacing Apache today. Your developers love .htaccess files and PHP modules that rely on Apache. We are putting Nginx in front of it.

Nginx is event-driven. It doesn't use a process per connection. It handles thousands of connections in a single loop. In this setup, Nginx handles all the heavy lifting—serving static files (images, CSS, JS) and buffering slow client connections. It only passes the lean, fast PHP requests to Apache on the backend.
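The difference shows up directly in Nginx's own configuration. A minimal sketch of the event model (the worker and connection counts here are illustrative defaults, not tuned values):

```nginx
# /etc/nginx/nginx.conf (top level) - illustrative values
worker_processes  2;            # roughly one per CPU core

events {
    worker_connections  1024;   # each worker juggles up to 1024 connections
                                # in a single event loop - no fork per client
}
```

Two workers at 1,024 connections each can hold 2,048 slow clients in a few megabytes of RAM. The same load under prefork Apache would mean 2,048 resident httpd processes.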

Step 1: Install Nginx (The EPEL Way)

Don't compile from source unless you need specific patches. On CentOS 5, use the EPEL repository or the Dag Wieers repo.

yum install nginx
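If yum can't find the package, EPEL probably isn't enabled yet. A typical sequence on CentOS 5 (the epel-release package version in the URL is an assumption; check the EPEL site for the current one):

```shell
# Enable EPEL on CentOS 5 (release package version is illustrative - verify first)
rpm -Uvh http://download.fedoraproject.org/pub/epel/5/i386/epel-release-5-4.noarch.rpm

# Install nginx and register it to start on boot
yum -y install nginx
chkconfig nginx on
```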

Step 2: Configure the Proxy Pass

We need to tell Nginx to listen on port 80 and forward PHP requests to Apache (which we will move to port 8080). Here is a battle-tested configuration for /etc/nginx/nginx.conf used on CoolVDS production nodes.

server {
    listen       80;
    server_name  www.your-domain.no;

    # Serve static files directly - Massive RAM saver
    location ~* \.(jpg|jpeg|gif|png|css|js|ico|xml)$ {
        root   /var/www/html;
        expires 30d;
    }

    # Pass dynamic content to Apache
    location / {
        proxy_pass         http://127.0.0.1:8080;
        proxy_redirect     off;
        
        # Essential headers for the backend
        proxy_set_header   Host             $host;
        proxy_set_header   X-Real-IP        $remote_addr;
        proxy_set_header   X-Forwarded-For  $proxy_add_x_forwarded_for;

        # Buffer settings to prevent IO blocking
        client_max_body_size       10m;
        client_body_buffer_size    128k;
        proxy_connect_timeout      90;
        proxy_send_timeout         90;
        proxy_read_timeout         90;
        proxy_buffer_size          4k;
        proxy_buffers              4 32k;
        proxy_busy_buffers_size    64k;
        proxy_temp_file_write_size 64k;
    }
}

Pro Tip: On the Apache side, install mod_rpaf. Without it, Apache sees all traffic as coming from 127.0.0.1 (localhost) instead of the real user IP, which breaks your analytics and security logs.
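The Apache side needs two changes: move it off port 80, and teach it to trust the proxy headers. A sketch, assuming mod_rpaf is installed (directive names follow the mod_rpaf 0.6 module; file paths and the module filename are illustrative for a stock CentOS layout):

```apache
# /etc/httpd/conf/httpd.conf - move Apache behind Nginx
Listen 127.0.0.1:8080

# /etc/httpd/conf.d/mod_rpaf.conf - restore the real client IP
LoadModule rpaf_module modules/mod_rpaf-2.0.so
RPAFenable       On
RPAFsethostname  On             # also fixes the Host: header for vhosts
RPAFproxy_ips    127.0.0.1      # only trust headers arriving from Nginx
RPAFheader       X-Forwarded-For
```

Binding Apache to 127.0.0.1:8080 rather than *:8080 has a security bonus: the backend is unreachable from the outside, so nobody can bypass your proxy.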

Why Hardware Isolation Matters

Software optimization can only go so far. I've seen perfectly tuned Nginx configs fail because the host node was oversold. In the VPS market, "burst RAM" is a lie. If your neighbor on the physical node decides to compile a kernel, your disk I/O waits.

This is where architecture counts. At CoolVDS, we use Xen virtualization. Unlike OpenVZ, Xen provides hard memory and swap isolation. When you define a buffer in Nginx, you need to know that RAM actually exists.

Feature   | Generic OpenVZ VPS       | CoolVDS Xen VPS
----------|--------------------------|--------------------------
Memory    | Shared/burst (unstable)  | Dedicated/reserved
Disk I/O  | SATA II (often crowded)  | Enterprise SAS RAID-10
Kernel    | Shared                   | Dedicated (customizable)

Norwegian Latency and Compliance

If your target audience is in Oslo, Bergen, or Trondheim, latency matters. Routing traffic through a datacenter in Texas adds 150ms of lag. That makes your site feel sluggish regardless of your Nginx config. Keep your metal close to your users.

Furthermore, consider the Data Inspectorate (Datatilsynet). While the Personopplysningsloven (Personal Data Act) doesn't strictly forbid hosting abroad, keeping data within the EEA/Norway simplifies compliance significantly for local businesses.

Final Check

Before you restart, test your syntax:

service nginx configtest

If it says OK, flip the switch. Watch your load average drop from 35.0 to 0.5. It's not magic; it's engineering.
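The cutover itself is a short sequence, in this order so that port 80 is never claimed by both daemons (this assumes you've already changed Apache's Listen directive to 8080):

```shell
# Apache must release port 80 before Nginx can bind it
service httpd restart        # now listening on 8080 only
service nginx start

# Sanity check: the front door should now answer as nginx
curl -I http://localhost/    # look for "Server: nginx" in the headers
```

If something goes wrong, rollback is the mirror image: stop nginx, set Apache's Listen back to 80, restart httpd.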

Need a sandbox to test this setup without breaking production? Spin up a CoolVDS instance. We use 15k RPM SAS drives in RAID-10, so your logs write as fast as Nginx can serve them.