Scaling Beyond Apache: The Nginx Reverse Proxy Guide for High-Traffic Norwegian Sites
If you are running a high-traffic media site or an e-commerce store in Norway, you have likely hit the dreaded wall: Apache MaxClients reached. You throw more RAM at the problem, increase the limit in httpd.conf, and 24 hours later, the server is swapping itself to death again. It is a vicious cycle.
I saw this just last week with a client hosting a popular Oslo-based forum. Their quad-core server was crawling because every image request spawned a heavy Apache process consuming 25MB of RAM. With 500 concurrent users, that is roughly 12.5 GB of RAM just for Apache workers; the math simply doesn't work.
The solution isn't just "buy a bigger server." The solution is architecture. Specifically, placing Nginx in front of Apache as a reverse proxy.
The Event-Driven Advantage
Apache uses a process-based (or thread-based) model. It requires a dedicated worker for every connection. Nginx is different; it is event-driven and asynchronous. It can handle thousands of concurrent connections with a tiny memory footprint.
Put Nginx in front, and it handles all the slow clients and serves static files (images, CSS, JS) instantly, passing dynamic PHP requests to the heavy Apache backend only when absolutely necessary.
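To make that concrete, here is roughly what the connection-handling side of nginx.conf looks like; the numbers below are illustrative and should be tuned to your own cores and file-descriptor limits:

worker_processes  4;              # roughly one per CPU core on the quad-core box above

events {
    worker_connections  1024;     # each worker juggles up to 1024 connections
    # 4 workers x 1024 connections = ~4096 concurrent clients handled in a few
    # megabytes of RAM, versus ~25MB per connection under Apache prefork
}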
Configuration: The Holy Grail Setup
We are going to assume you are running CentOS 5.x. Here is the battle-tested configuration we use at CoolVDS for our high-performance managed clients.
1. Reconfigure Apache
First, move Apache to port 8080. Edit /etc/httpd/conf/httpd.conf:
Listen 127.0.0.1:8080
We bind to localhost so that external traffic cannot reach Apache directly and bypass Nginx.
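Then restart Apache and double-check that it is only bound to the loopback interface. A quick sanity check (the PID/program column will differ on your box):

service httpd restart
netstat -tlnp | grep :8080
# httpd should show up bound to 127.0.0.1:8080, with nothing listening on 0.0.0.0:8080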
2. The Nginx Proxy Block
Install Nginx from the EPEL repository, then create a virtual host configuration.
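On CentOS 5 that is a one-liner, assuming EPEL is already enabled (package availability and version depend on the repo):

yum install -y nginx
chkconfig nginx on    # start nginx automatically at boot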
The magic lies in the proxy_pass directive and correct header forwarding: without forwarding headers, Apache will think all traffic is coming from 127.0.0.1, breaking your logs and IP-based security.
server {
    listen 80;
    server_name example.no www.example.no;

    # Serve static files directly - FAST
    location ~* \.(jpg|jpeg|gif|png|css|js|ico|xml)$ {
        access_log off;
        log_not_found off;
        expires 30d;
        root /var/www/html;
    }

    # Pass dynamic content to Apache
    location / {
        proxy_pass http://127.0.0.1:8080/;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        client_max_body_size 10m;
        client_body_buffer_size 128k;

        proxy_connect_timeout 90;
        proxy_send_timeout 90;
        proxy_read_timeout 90;

        proxy_buffer_size 4k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
    }
}
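One follow-up on those headers: Apache's stock combined log format records %h, which will now always read 127.0.0.1. A common fix (sketched here; mod_rpaf is another option) is a LogFormat that logs the X-Forwarded-For value Nginx just set:

# /etc/httpd/conf/httpd.conf - log the real client IP forwarded by Nginx
LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" proxied
CustomLog logs/access_log proxied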
Hardware Matters: The Disk I/O Bottleneck
Configuration is only half the battle. When Nginx buffers requests or serves static files, it relies heavily on disk I/O. If your VPS provider puts you on an overloaded node with standard SATA drives, your iowait will skyrocket during backups or peak hours.
This is where infrastructure choice becomes critical for Norwegian businesses that must comply with Personopplysningsloven (the Personal Data Act). You need low latency to NIX (the Norwegian Internet Exchange) and a host that doesn't steal CPU cycles from your VM.
Pro Tip: Check your disk latency. Run ioping (if available) or a simple dd test. If write speeds are under 50MB/s, your database will lock up during traffic spikes regardless of your Nginx config.
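A rough version of that dd test, writing 1 GB and forcing it to disk so the page cache doesn't flatter the result (delete the file afterwards; numbers vary with node load):

dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 conv=fdatasync
rm -f /tmp/ddtest
# dd prints the throughput on its last line; well under 50MB/s means the disk
# (or the node it lives on) is the bottleneck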
The CoolVDS Difference
At CoolVDS, we don't believe in the "noisy neighbor" effect. While many budget hosts oversubscribe their spindle drives, we are rolling out Enterprise SSD storage across our virtualization clusters. For a database-heavy CMS like Joomla or Drupal, the difference between random I/O on a 15k RPM SAS drive versus an SSD is night and day.
Furthermore, because we utilize KVM (Kernel-based Virtual Machine) rather than just container-based virtualization, your kernel resources are guaranteed. This isolation is vital for security and consistent performance.
Summary of Benefits
| Feature | Apache Only | Nginx Reverse Proxy on CoolVDS |
|---|---|---|
| Memory Usage | High (Prefork) | Low (Event-driven) |
| Static Files | Slow, blocking | Instant, non-blocking |
| Concurrent Users | Limited by RAM | Limited by Bandwidth |
Final Thoughts
Switching to an Nginx reverse proxy setup allows you to squeeze 3x to 5x more traffic out of your existing VPS plan. It is the industry standard for a reason. But remember, software can't fix bad hardware.
If you are tired of unexplained slowdowns and want to see what your site feels like on low-latency, SSD-accelerated infrastructure located right here in the region, it is time to upgrade.
Don't let slow I/O kill your rankings. Deploy a high-performance CentOS instance on CoolVDS in 55 seconds.