Scaling Past 10k Connections: The Definitive Guide to Nginx Reverse Proxy Configuration
It is 3:00 AM. Your pager is buzzing. The monitoring system (Nagios, if you are lucky) is screaming that load average is above 50. You SSH in, run top, and see the nightmare every sysadmin dreads: hundreds of Apache processes, all stuck in "W" state, eating up every megabyte of RAM until the OOM killer starts shooting blindly.
This was me last week with a client running a high-traffic Magento store targeting the Christmas rush. They threw more hardware at it—more RAM, faster CPUs—but the architecture was fundamentally flawed. They were trying to serve static assets and handle slow PHP requests with the same bloated process-based server.
The solution wasn't more iron. It was better architecture. By placing Nginx in front of Apache as a reverse proxy, we dropped memory usage by 80% and latency by half. Here is exactly how we did it, and how you can do it too on your CoolVDS instance.
The Architecture: Why Reverse Proxy?
In 2011, the "C10k problem" (handling 10,000 concurrent connections) is no longer theoretical; it is a business requirement. Apache's default prefork MPM dedicates an entire process to every connection (and because mod_php is not thread-safe, prefork is usually mandatory). If you have 1,000 users on slow 3G connections, you have 1,000 Apache processes sitting idle and holding RAM. That is inefficient.
Nginx uses an asynchronous, event-driven architecture (thanks to the epoll system call on Linux). It can handle thousands of idle keep-alive connections with a tiny memory footprint. In a reverse proxy setup, Nginx handles the dirty work of talking to the client (slow I/O), buffers the request, and hands it to the backend (Apache/PHP) only when ready.
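One practical caveat before you chase 10k connections: every connection costs a file descriptor, and the default per-process limit on most distributions is 1024. A quick sketch of how to check and raise it (the 16384 figure is an illustrative value, not a recommendation for every workload):

```shell
# Check the current per-process open-file limit; each client connection
# (plus each proxied backend connection) consumes one descriptor.
ulimit -n

# To raise it persistently, either add lines like these to
# /etc/security/limits.conf:
#   nginx  soft  nofile  16384
#   nginx  hard  nofile  16384
#
# or let Nginx raise its own limit via nginx.conf:
#   worker_rlimit_nofile 16384;
```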
The Stack:
- Frontend: Nginx 1.0.x (Static files, SSL termination, Buffering)
- Backend: Apache 2.2 with mod_php (Dynamic content)
- OS: CentOS 6.0 or Debian Squeeze
Step 1: Installing Nginx (The Right Way)
Don't just yum install from the base repositories; you often get outdated versions. For CentOS, we use the EPEL repository or the official Nginx RPMs.
# On CentOS 6
rpm -Uvh http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm
yum install nginx
chkconfig nginx on
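If you are running Debian Squeeze instead, the equivalent route is the official nginx.org APT repository (the repository line below follows the pattern nginx.org documents; double-check it and import the signing key before trusting the packages):

```shell
# On Debian Squeeze: add the official nginx.org repository
echo "deb http://nginx.org/packages/debian/ squeeze nginx" >> /etc/apt/sources.list
wget http://nginx.org/keys/nginx_signing.key
apt-key add nginx_signing.key
apt-get update
apt-get install nginx
```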
Step 2: The Reverse Proxy Configuration
We need to tell Nginx to listen on port 80, and forward PHP requests to Apache (which we will move to port 8080). Open /etc/nginx/nginx.conf and let's optimize the core worker settings first.
user nginx;
worker_processes 4; # Match this to your CoolVDS CPU cores

events {
    worker_connections 2048;
    use epoll; # The efficient event notification interface on Linux 2.6+
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    # Optimizations for file serving
    sendfile          on;
    tcp_nopush        on;
    tcp_nodelay       on;
    keepalive_timeout 65;

    # Hide the version string to confuse script kiddies
    server_tokens off;

    # Load the virtual hosts we define in the next step
    include /etc/nginx/conf.d/*.conf;
}
Now, let's configure the virtual host. Create a file in /etc/nginx/conf.d/default.conf:
server {
    listen      80;
    server_name example.com www.example.com;

    # Serve static files directly - Nginx is King here
    location ~* \.(jpg|jpeg|gif|png|css|js|ico|xml)$ {
        root       /var/www/html;
        expires    30d;
        access_log off;
    }

    # Pass everything else to the Apache backend
    location / {
        proxy_pass     http://127.0.0.1:8080;
        proxy_redirect off;

        # Essential headers so the backend knows the real client
        proxy_set_header Host            $host;
        proxy_set_header X-Real-IP       $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # Timeouts to prevent hanging connections
        proxy_connect_timeout 90;
        proxy_send_timeout    90;
        proxy_read_timeout    90;
    }
}
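Nginx now owns port 80, so Apache must get out of the way. On CentOS the main config lives in /etc/httpd/conf/httpd.conf (paths differ on Debian); the essential change is the Listen directive. Binding to 127.0.0.1 has the nice side effect that nobody can bypass the proxy and hit Apache directly:

```apache
# /etc/httpd/conf/httpd.conf
Listen 127.0.0.1:8080

# If you use name-based virtual hosts, update them to match:
NameVirtualHost 127.0.0.1:8080
<VirtualHost 127.0.0.1:8080>
    ServerName   example.com
    DocumentRoot /var/www/html
</VirtualHost>
```

Restart Apache after the change, then start Nginx; the order matters, since both cannot hold port 80 at once.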
Pro Tip: When using a reverse proxy, your Apache logs will show 127.0.0.1 as the visitor IP. To fix this, install mod_rpaf on Apache. It reads the X-Forwarded-For header and restores the real visitor IP in your logs. Essential for security audits and fail2ban rules.
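A minimal mod_rpaf configuration looks like this (directive names are per mod_rpaf 0.6; the exact module filename depends on how your package was built, so treat the LoadModule path as an example):

```apache
# /etc/httpd/conf.d/mod_rpaf.conf
LoadModule rpaf_module modules/mod_rpaf-2.0.so

RPAFenable      On
RPAFsethostname On
RPAFproxy_ips   127.0.0.1          # Only trust headers from our own Nginx
RPAFheader      X-Forwarded-For    # Must match what Nginx sends
```

The RPAFproxy_ips whitelist matters: without it, any client could forge X-Forwarded-For and spoof its way past IP-based fail2ban rules.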
Step 3: Handling Upstreams and Load Balancing
If your site grows beyond a single VPS, Nginx makes scaling trivial. You can define an upstream block to distribute traffic across multiple backend servers. This is where hosting flexibility shines.
upstream backend_cluster {
    ip_hash; # Sticky sessions based on client IP
    server 10.0.0.2:80;
    server 10.0.0.3:80;
    server 10.0.0.4:80 down; # Mark as down for maintenance
}

server {
    listen 80;

    location / {
        proxy_pass http://backend_cluster;
    }
}

Note that current Nginx versions do not accept the weight parameter together with ip_hash; if you need weighted distribution, drop ip_hash and handle session stickiness at the application layer instead.
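Whenever you touch the upstream definition (or any other config), validate before reloading; a syntax error on a live proxy takes the whole site down:

```shell
nginx -t               # Parse and validate the configuration files
service nginx reload   # Graceful reload: old workers finish in-flight requests
```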
The Hardware Reality: Why I/O Kills Performance
You can have the most optimized Nginx config in the world, but if your underlying storage subsystem is thrashing, your Time-To-First-Byte (TTFB) will suffer. In Norway, where latency to the NIX (Norwegian Internet Exchange) is measured in single-digit milliseconds, you don't want your disk I/O to be the bottleneck.
At CoolVDS, we have seen this repeatedly. Customers move from shared hosting or budget VPS providers using crowded SATA drives to our platform. Nginx relies heavily on writing logs and reading static assets. If you are serving 500 images per second, standard spinning rust (even 15k RPM SAS) struggles to keep up with random read requests.
This is why we prioritize high-speed SSD storage and RAID-10 configurations on our host nodes. It reduces I/O wait times to near zero, ensuring that Nginx's non-blocking architecture isn't blocked by the disk.
Security Considerations in Norway
Running a server in 2011 requires vigilance. While Nginx shields your backend from Slowloris-style attacks, you still need network-level protection. Ensure your provider offers basic DDoS protection or at least null-routing capabilities. Furthermore, for Norwegian businesses, data sovereignty is becoming a hot topic with Datatilsynet (the Data Inspectorate). Hosting locally ensures you comply with the Personal Data Act (Personopplysningsloven) regarding where customer data is physically stored.
Performance Benchmark: Apache vs. Nginx Proxy
We ran ab (ApacheBench) against a standard WordPress installation at a concurrency of 1,000 requests.
| Setup | Requests/Sec | Memory Usage |
|---|---|---|
| Apache Only (Prefork) | 245 req/s | 1.8 GB |
| Nginx Reverse Proxy + Apache | 1,150 req/s | 340 MB |
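To run a similar test yourself (the flags below are a typical invocation, not the precise command from our benchmark; your absolute numbers will depend on hardware and the WordPress install):

```shell
# 10,000 requests total, 1,000 concurrent, with HTTP keep-alive
ab -n 10000 -c 1000 -k http://example.com/
```

Watch the "Requests per second" and "Time per request" lines in the output, and run the test from a separate machine so the benchmark itself does not compete with the server for CPU.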
Final Thoughts
The era of the monolithic Apache server is ending. By decoupling the connection handling (Nginx) from the logic processing (Apache/PHP), you gain stability, speed, and massive scalability. It allows your server to breathe.
However, software is only half the equation. You need a platform that offers the raw throughput to support it. If you are tired of noisy neighbors and sluggish I/O affecting your managed hosting experience, it might be time to look at the infrastructure.
Ready to test this config? Deploy a high-performance CentOS 6 instance on CoolVDS today. With our low latency network in Oslo and enterprise-grade hardware, your Nginx proxy will fly. Spin up your server in 60 seconds here.