Your Single Server is a Single Point of Failure
It’s 2:00 AM on a Tuesday. Your monitoring system starts screaming. You check top and see load averages climbing past 20. Apache processes are maxing out, swapping is killing your disk I/O, and your site is effectively dead. Why? Because a popular blog just linked to you, or your marketing team launched a campaign without telling IT.
If you are running a mission-critical application on a single VPS, you are gambling with uptime. Hardware fails. Services hang. Traffic spikes happen.
In the enterprise world, big shops throw money at the problem with hardware load balancers like the F5 BIG-IP, which can cost upwards of $20,000. But for the rest of us building on the Linux stack in 2009, there is a better, open-source way: HAProxy.
The Architecture: Decoupling the Front Door
Instead of pointing your DNS A-record directly to your web server, you point it to a load balancer. This server does nothing but shuffle packets. It’s lightweight, incredibly fast, and very stable.
Here is the setup we deployed last week for a media client in Oslo expecting heavy traffic coverage of the election:
- Node 1 (LB): HAProxy on a minimal CentOS 5.3 VPS (CoolVDS Small Instance).
- Node 2 (Web A): Apache 2.2 serving the application.
- Node 3 (Web B): Apache 2.2 (Identical clone of Web A).
If Web A dies, HAProxy fails its health checks within seconds and routes all traffic to Web B. No downtime. No pager-duty wake-up calls.
Configuring HAProxy 1.3 for Stability
Installation on CentOS is straightforward via the EPEL repository, or you can compile from source if you need 1.3.15 or later (recommended):
yum install haproxy
The magic happens in /etc/haproxy/haproxy.cfg. Most defaults are garbage for high traffic. Here is a production-ready config block that prioritizes connection survival:
global
log 127.0.0.1 local0
maxconn 4096
user haproxy
group haproxy
daemon
defaults
log global
mode http
option httplog
option dontlognull
retries 3
option redispatch
maxconn 2000
contimeout 5000
clitimeout 50000
srvtimeout 50000
listen webfarm 0.0.0.0:80
mode http
stats enable
stats auth admin:password   # change this before going live
balance roundrobin
cookie JSESSIONID prefix
option httpclose
option forwardfor
option httpchk HEAD /check.txt HTTP/1.0
server web01 192.168.10.11:80 cookie A check
server web02 192.168.10.12:80 cookie B check
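One gotcha: HAProxy's log 127.0.0.1 local0 line sends log messages over UDP to the local syslog daemon, and a stock CentOS 5 syslogd ignores UDP. A sketch of the two changes needed to actually capture the logs (paths assume the default sysklogd install; adjust if you run syslog-ng):

```
# /etc/sysconfig/syslog -- add -r so syslogd accepts UDP datagrams on port 514
SYSLOGD_OPTIONS="-m 0 -r"

# /etc/syslog.conf -- route HAProxy's facility to its own file
local0.*    -/var/log/haproxy.log
```

Restart syslog afterwards (service syslog restart), or HAProxy's messages vanish silently.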
Key Configuration Breakdown
- balance roundrobin: Distributes requests evenly across the backends. A good fit when your servers have identical specs (like CoolVDS standard instances); assign weights if they differ.
- option httpchk: This is critical. HAProxy sends a HEAD request for check.txt to each web server (create the file first: touch /var/www/html/check.txt on both nodes). If Apache hangs while the server itself stays up, a TCP check might pass, but the HTTP check will fail, correctly removing the node from rotation.
- option forwardfor: Passes the client's true IP address in the X-Forwarded-For header so your Apache logs don't just show the load balancer's IP.
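Those forwarded IPs are useless until Apache logs them. A minimal httpd.conf sketch for Apache 2.2 (the format name combined_xff is my own; call it whatever you like):

```
# Log the client IP from X-Forwarded-For instead of the load balancer's IP
LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined_xff
CustomLog logs/access_log combined_xff
```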
The Hardware Factor: Why "Virtual" Doesn't Mean "Fake"
Virtualization has matured. We aren't stuck with unstable jails anymore. However, a load balancer is sensitive to network latency and jitter. If your host oversubscribes the CPU, HAProxy stalls, and everyone waits.
This is why we standardized on CoolVDS for this architecture. They use Xen paravirtualization (PV), which offers better isolation than OpenVZ. More importantly, their location in Norway ensures low latency to the NIX (Norwegian Internet Exchange). When you are balancing traffic, adding 30ms of latency because your server is in a budget datacenter in Texas kills the user experience.
Pro Tip: Under the Norwegian Personal Data Act (Personopplysningsloven), keeping data within the EEA is often required for compliance. Hosting your load balancer and web nodes on CoolVDS keeps the Datatilsynet happy and your latency low.
Handling the Database
Load balancing web servers is easy. The database is the bottleneck. In this setup, both Web A and Web B talk to a single MySQL backend. To scale that, you need Master-Slave replication, which we will cover in a future article.
For now, ensure your MySQL server is on a separate CoolVDS instance with RAID-10 SAS storage. Do not run MySQL on the same node as a web server if you expect high load. Disk I/O contention will destroy performance faster than CPU exhaustion.
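While you are setting up that dedicated database node, bind MySQL to the private interface so only the web nodes can reach it. A sketch for /etc/my.cnf, assuming a hypothetical 192.168.10.20 on the same private range used above:

```
[mysqld]
bind-address    = 192.168.10.20   # hypothetical private IP; never exposed publicly
max_connections = 200             # head-room for both web nodes' Apache children
```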
Final Verification
Don't assume it works. Test it. Use ApacheBench (ab) from a machine outside the cluster to hammer your VIP (the load balancer's public IP):
ab -n 1000 -c 50 http://your-load-balancer-ip/
While the test runs, shut down Apache on Node 1 (service httpd stop). Watch the HAProxy stats page: web01 should turn RED, yet thanks to retries and option redispatch the ab run should finish with zero (or near-zero) failed requests. That is the power of a cluster.
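If you want harder numbers than ab's summary, a throwaway POSIX shell helper (poll_vip is my own name, not an HAProxy tool) counts how many polls returned HTTP 200 during the drill:

```shell
# Poll a URL a fixed number of times and report how many returned HTTP 200.
poll_vip() {
    url=$1; tries=$2; ok=0; i=0
    while [ "$i" -lt "$tries" ]; do
        # --max-time keeps a dead backend from stalling the whole drill
        code=$(curl -s -o /dev/null --max-time 2 -w '%{http_code}' "$url")
        [ "$code" = "200" ] && ok=$((ok + 1))
        i=$((i + 1))
    done
    echo "$ok/$tries requests returned 200"
}

# Example: poll_vip http://your-load-balancer-ip/ 50
```

Run it in a second terminal while you stop and start Apache on web01; the count tells you exactly how many requests the failover cost.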
Stop relying on luck. Deploy a redundant architecture today. Spin up your load balancer on CoolVDS and sleep through the next traffic spike.