Stop Letting Apache Die on the Hill of Concurrency
It happens every time. You throw more RAM at the problem, you tweak MaxClients in your httpd.conf, and yet, when traffic spikes—perhaps due to a holiday sale or a mention on a major news outlet—your server load average skyrockets. The site crawls. Connections time out.
The problem isn't your code; it's your architecture. If you are still exposing Apache directly to the public internet in 2012, you are asking for trouble. Apache's prefork MPM is robust, but it dedicates a full process to every connection. When a user on a slow 3G mobile connection requests a file, that heavy Apache process sits there, blocked, spoon-feeding data at a few kilobytes per second. It's a waste of resources that no amount of hardware scaling can fix efficiently.
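You can see the cost for yourself: each prefork worker is a full httpd process, so simply counting them during a traffic spike tells the story. A quick sketch (it prints 0 on a box where Apache is not running):

```shell
# Count running Apache worker processes.
# Matches both the RHEL ("httpd") and Debian ("apache2") process names;
# prints 0 if Apache is not running at all.
ps -eo comm= | grep -c -E '^(httpd|apache2)$' || true
```

Run it under load and again at idle; on a busy prefork server the difference is often hundreds of multi-megabyte processes, most of them idle-waiting on slow clients.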
The solution is not to abandon Apache—it processes PHP and .htaccess files reliably—but to shield it. You need an event-driven buffer. You need Nginx acting as a reverse proxy.
The Architecture: Nginx as the Bouncer
In this setup, Nginx sits on port 80. It handles all incoming connections using its asynchronous, non-blocking architecture. It serves static files (images, CSS, JS) instantly from disk without bothering the backend. For dynamic requests (PHP/Python), it passes the request to Apache (running on port 8080 or localhost), waits for the response, buffers it, and then disconnects from Apache immediately. Nginx then handles the slow delivery to the client.
This frees up your heavy application threads to do what they do best: process logic, not wait on network latency.
Pro Tip: Network latency is a physical reality, especially when serving clients across Europe. While our CoolVDS data center in Oslo connects directly to the NIX (Norwegian Internet Exchange) for sub-millisecond local routing, you cannot control the client's last mile. Nginx absorbs that latency so your database connections don't have to.
The Implementation: CentOS 6 & Nginx 1.2.x
Let’s assume you are running a standard CentOS 6.3 environment. First, install the EPEL repository if you haven't already, as the standard repo versions are ancient.
```shell
rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
yum install nginx
```
Before we touch the config, turn off the "OS default" mindset. The default configs are meant for small sites, not high-performance nodes.
1. Configuring nginx.conf
Open /etc/nginx/nginx.conf. We need to adjust the worker processes and, crucially, the buffer sizes. If your buffers are too small, Nginx writes to temporary files on disk, killing your I/O throughput.
```nginx
user  nginx;
worker_processes  auto;  # detects CPU cores automatically (nginx >= 1.2.5)
error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
    use epoll;       # essential for Linux 2.6+
    multi_accept on;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    # Performance tuning
    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    # Buffer management - stop writing to disk!
    client_body_buffer_size     128k;
    client_max_body_size        10m;
    client_header_buffer_size   1k;
    large_client_header_buffers 4 8k;
    output_buffers              1 32k;
    postpone_output             1460;

    include /etc/nginx/conf.d/*.conf;
}
```
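As a sanity check on these numbers: the ceiling on open connections is worker_processes × worker_connections, and a proxied request consumes two of them (one to the client, one to the upstream). A back-of-the-envelope calculation for an example 2-core box:

```shell
# Theoretical connection ceiling for the config above.
# 'auto' resolves to the core count; 2 is just an example value here.
workers=2
connections=1024   # worker_connections from the events block

echo $(( workers * connections ))       # total open connections: 2048
echo $(( workers * connections / 2 ))   # ~max simultaneous proxied clients: 1024
```

If you expect more concurrency than that, raise worker_connections first, and check `ulimit -n` for the nginx user while you are at it.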
2. The Proxy Virtual Host
Now, let's configure the actual site in /etc/nginx/conf.d/example.com.conf. We define an upstream block to handle the connection to the backend.
```nginx
upstream backend_apache {
    server 127.0.0.1:8080;
    keepalive 32;  # keep connections open to Apache!
}

server {
    listen      80;
    server_name example.com www.example.com;

    # Serve static assets directly
    location ~* \.(jpg|jpeg|gif|png|css|js|ico|xml)$ {
        root       /var/www/html;
        access_log off;
        expires    30d;
        tcp_nodelay off;
    }

    # Pass dynamic content to Apache
    location / {
        proxy_pass http://backend_apache;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # Buffer settings specific to the proxy
        proxy_buffer_size          8k;
        proxy_buffers              8 32k;
        proxy_busy_buffers_size    64k;
        proxy_temp_file_write_size 64k;

        # Timeouts
        proxy_connect_timeout 60;
        proxy_send_timeout    60;
        proxy_read_timeout    60;
    }
}
```
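Those proxy buffers are allocated per connection, so it pays to know the worst case before you raise them. A rough estimate, assuming every one of the 1024 worker connections is busy proxying at once (in practice only a fraction will be):

```shell
# Worst-case proxy buffer memory for the settings above:
# proxy_buffer_size (8k) + proxy_buffers (8 x 32k) per active upstream response.
per_conn_kb=$(( 8 + 8 * 32 ))   # 264 KB per connection
concurrent=1024                 # worker_connections, fully saturated

echo "$(( per_conn_kb * concurrent / 1024 )) MB"   # ~264 MB worst case
```

On a small VPS that worst case matters; size proxy_buffers against your typical page size, not against the biggest response you can imagine.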
Once configured, test the syntax and restart Nginx via the init script:

```shell
service nginx configtest
service nginx restart
```
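With the service up, a quick smoke test from the server itself confirms Nginx is answering on port 80. The loopback URL is just an example; the snippet prints 000 if nothing is listening yet:

```shell
# Fetch the homepage through the proxy and report only the HTTP status code.
code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 http://127.0.0.1/ 2>/dev/null || true)
echo "${code:-000}"
```

A 200 here means the full chain (Nginx, upstream, Apache) is wired up; a 502 usually means Apache is not yet listening on port 8080.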
The "I/O Wait" Bottleneck
You can have the most optimized Nginx configuration in the world, but if your underlying storage subsystem is slow, log and cache writes stall your workers while the CPU sits idle waiting on the disk. That idle-but-waiting time shows up as iowait in top and vmstat.
In a standard VPS environment where multiple tenants share spinning rust (HDD), a neighbor running a heavy backup script can destroy your site's performance. Nginx relies on non-blocking I/O; if the disk blocks, the worker process stalls.
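You can watch for this directly: on any Linux box, the sixth field of the `cpu` line in /proc/stat is cumulative iowait time. A one-liner to express it as a percentage since boot:

```shell
# /proc/stat "cpu" line: user nice system idle iowait irq softirq ...
# Field $6 is iowait; divide by the sum of all fields for a percentage.
awk '/^cpu /{
    total = 0
    for (i = 2; i <= NF; i++) total += $i
    printf "iowait: %.1f%% of CPU time since boot\n", 100 * $6 / total
}' /proc/stat
```

Anything consistently above a few percent on a web node is a sign the disk, not the CPU, is your bottleneck.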
| Metric | Standard HDD VPS | CoolVDS Enterprise SSD |
|---|---|---|
| Random Read IOPS | ~120 | ~40,000+ |
| Disk Latency | 5-20ms | <0.5ms |
| Nginx Reload Time | 0.5s | Instant |
At CoolVDS, we use KVM virtualization on pure Enterprise SSD arrays. We don't use OpenVZ containerization for high-performance plans because KVM provides a dedicated kernel and strict isolation. When you write to the disk logs on a CoolVDS instance, you aren't fighting for the needle arm of a mechanical drive.
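If you want a rough, vendor-neutral number for your own disk, a synchronous dd run is a crude but honest probe. It writes a small temporary file, and the figures will vary with filesystem and caching, so treat it as a sketch rather than a benchmark:

```shell
# Crude write-latency probe: 100 x 4 KB synchronous writes.
# oflag=dsync forces each block to hit the disk before the next one starts,
# so the reported throughput is dominated by per-write latency.
dd if=/dev/zero of=/tmp/ddtest.bin bs=4k count=100 oflag=dsync 2>&1 | tail -n 1
rm -f /tmp/ddtest.bin
```

On contended spinning-disk VPS plans this often reports well under 1 MB/s; on SSD-backed storage it should be orders of magnitude faster.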
Handling the "X-Forwarded-For" Header
One final "gotcha" when moving to this architecture: your Apache logs will show 127.0.0.1 as the IP for every visitor. This breaks analytics and security tools like Fail2Ban.
To fix this, install mod_rpaf (Reverse Proxy Add Forward) on your Apache backend. It reads the X-Forwarded-For header set by Nginx and restores the real client IP.
On CentOS/RHEL:

```shell
yum install mod_rpaf
```

Then, inside /etc/httpd/conf.d/rpaf.conf:

```apache
RPAFenable      On
RPAFsethostname On
RPAFproxy_ips   127.0.0.1
```
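If installing a module is not an option, you can at least make the access logs truthful with a custom LogFormat that records the X-Forwarded-For header. The log path below is the CentOS default; note this fixes logging only, since unlike mod_rpaf it does not restore REMOTE_ADDR for Fail2Ban or your application code:

```apache
# Log the client IP from X-Forwarded-For instead of the proxy's 127.0.0.1.
LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b" proxied
CustomLog /var/log/httpd/access_log proxied
```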
Compliance and Reliability
Operating out of Norway brings specific advantages regarding data integrity. While legal frameworks like the Personopplysningsloven (Personal Data Act) dictate strict handling of user logs, hosting physically in Oslo ensures that your data remains within a jurisdiction known for stability and clear regulation, unlike the murky waters of some overseas providers.
Don't let a default configuration limit your growth. Switch to the Nginx reverse proxy model today.
Ready to test this configuration? Deploy a CoolVDS SSD instance in 55 seconds and see what true dedicated I/O does for your load times.