Stop Praying for Uptime, Architect for It
It starts with a slow page load. Then the SSH session starts lagging. Finally, your monitoring screams red and your client calls you in a panic: their Magento store is down right in the middle of a summer campaign. You log in via the console, run top, and see the horror: a load average of 45.00. Apache has eaten all the RAM, the swap partition is thrashing, and the kernel's OOM killer is shooting processes at random.
I’ve been there. Relying on a single robust server is a gamble you will eventually lose. Hardware fails. PHP scripts hang. Traffic spikes—the dreaded "Slashdot effect"—don't care about your sleep schedule.
The solution isn't just "buying a bigger server." That's vertical scaling, and it has a ceiling. The answer is horizontal scaling using a load balancer. While hardware appliances like the F5 BIG-IP cost as much as a car, the open-source community has given us something better, leaner, and more flexible: HAProxy.
Why HAProxy 1.4?
HAProxy (High Availability Proxy) stands between the internet and your backend web servers. It distributes traffic so no single node gets overwhelmed. Version 1.4 was released earlier this year (Feb 2010), and it’s the production standard we recommend at CoolVDS.
Unlike Apache, whose prefork or threaded model burns heavy memory on every connection, HAProxy uses an event-driven, single-process model. It can handle 10,000 concurrent connections without breaking a sweat while occupying minimal RAM, and it absorbs the per-connection overhead before passing requests on to your heavy backend application servers.
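Don't take my word for it. Hammer a test box with ApacheBench (ab ships with Apache itself on CentOS 5) and watch memory usage in top while it runs — the hostname below is a placeholder, and the concurrency level is just an illustration:

ab -n 10000 -c 500 http://your-test-server/

Run it once against Apache directly and once against HAProxy fronting the same box; the difference in memory footprint tells the story.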
The Setup: Round Robin on CentOS 5
Let’s assume you have two backend web servers running Apache and one load balancer node. We are using CentOS 5.5 for this setup because of its long-term stability.
First, install HAProxy on your load balancer node. It's in the EPEL repository:
rpm -Uvh http://download.fedora.redhat.com/pub/epel/5/i386/epel-release-5-4.noarch.rpm
yum install haproxy
Now, we configure /etc/haproxy/haproxy.cfg. The goal is to distribute traffic equally using the Round Robin algorithm. This simply passes requests to Server A, then Server B, then back to A.
global
    log 127.0.0.1 local0
    maxconn 4096
    user haproxy
    group haproxy
    daemon

defaults
    log global
    mode http
    option httplog
    option dontlognull
    retries 3
    option redispatch
    maxconn 2000
    contimeout 5000
    clitimeout 50000
    srvtimeout 50000

listen webfarm 0.0.0.0:80
    mode http
    stats enable
    stats uri /haproxy?stats
    balance roundrobin
    option httpclose
    option forwardfor
    server web01 192.168.1.10:80 check
    server web02 192.168.1.11:80 check
A few key flags here:
option httpclose: Critical for PHP backends. It closes the connection after each response, freeing up the Apache slot immediately.
option forwardfor: Appends the X-Forwarded-For header so your backend logs see the real client IP, not the load balancer's IP (see the LogFormat snippet below).
check: HAProxy probes port 80 with a TCP connect. If web01 dies, it automatically routes all traffic to web02. Zero downtime.
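One gotcha with option forwardfor: Apache does not log the X-Forwarded-For header out of the box. A minimal tweak to httpd.conf on each backend does it — this assumes you are fine replacing the stock combined format in your access log:

LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" proxied
CustomLog logs/access_log proxied

Without this, every hit in your logs appears to come from the load balancer's internal IP.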
Enable it to start at boot:
chkconfig haproxy on
service haproxy start
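Before pointing DNS at the balancer, verify the rotation actually works. A minimal sketch, assuming the backend IPs from the config above and a marker file dropped on each node (BALANCER_IP is a placeholder for your balancer's address):

echo web01 > /var/www/html/whoami.txt   # run on 192.168.1.10
echo web02 > /var/www/html/whoami.txt   # run on 192.168.1.11
for i in 1 2 3 4; do curl -s http://BALANCER_IP/whoami.txt; done

You should see web01 and web02 alternate. The stats page at http://BALANCER_IP/haproxy?stats shows the same picture, plus health-check status for each server.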
The Infrastructure Reality: Latency and IOPS
Software configuration is only half the battle; the underlying hardware matters just as much. I've seen perfectly configured HAProxy clusters fall over because the VPS provider was overselling physical CPU or disk I/O.
At CoolVDS, we don't play those games. We utilize Xen virtualization. Unlike OpenVZ, which shares a kernel and can suffer from "noisy neighbors," Xen provides strict resource isolation. When you buy 512MB RAM and 2 Cores on CoolVDS, they are yours.
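Not sure what your current provider is running? From inside the guest, a quick (imperfect) heuristic:

ls /proc/xen                # present on Xen guests running a xen kernel
cat /sys/hypervisor/type    # prints "xen" on newer kernels

Neither check is authoritative, but if both come up empty, you are probably not on Xen.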
Pro Tip: For database backends (MySQL), disk speed is the bottleneck. While standard SATA drives are fine for logs, ask support about our RAID-10 15k RPM SAS arrays or our new experimental SSD tiers if you are doing heavy writes.
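You can sanity-check write throughput yourself with a crude sequential test. Crude, because MySQL generates random I/O, not sequential — so treat the number as a smoke test, not a benchmark:

dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 oflag=direct
rm -f /tmp/ddtest

oflag=direct bypasses the page cache so you measure the disk, not RAM. A sustained figure far below what your provider advertises is a red flag.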
Norwegian Compliance and Latency
For those of us operating out of Oslo or serving Nordic clients, location is paramount. Hosting in the US might save you a few kroner, but 150ms of latency kills the user experience. Furthermore, under the Personal Data Act (Personopplysningsloven), keeping sensitive user data within the EEA (or, better yet, within Norway) simplifies your compliance obligations towards Datatilsynet.
CoolVDS servers are peered directly at NIX (Norwegian Internet Exchange). This means if your customer is on Telenor or NextGenTel, the traffic barely leaves the city. Low latency means faster TCP handshakes, which means your site feels instant.
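Measure it yourself from a Norwegian connection — the hostname below is a placeholder for your own server:

ping -c 10 your-server.example.com
traceroute your-server.example.com

Single-digit milliseconds to an Oslo-hosted box is normal from most Norwegian ISPs; if the traceroute shows the route crossing the Atlantic, you know where your 150ms went.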
Next Steps
Don't wait for your site to crash during the next traffic spike. Redundancy is the only insurance that actually works.
Spin up three small Xen instances on CoolVDS today—one balancer, two web nodes—and test this configuration. If you need help tuning your sysctl.conf for high concurrency, our support team is fluent in Linux, not just sales scripts.
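As a taste of what that tuning looks like, here is a conservative starting point for /etc/sysctl.conf on the balancer node — the values are illustrative, not gospel, so test against your own traffic:

# widen the ephemeral port range for connections to the backends
net.ipv4.ip_local_port_range = 1024 65000
# reuse sockets stuck in TIME_WAIT for new outgoing connections
net.ipv4.tcp_tw_reuse = 1
# deeper accept queue for bursts of incoming connections
net.core.somaxconn = 4096

Apply with sysctl -p, watch your graphs, and adjust from there.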