The "Out of Memory" Nightmare
We have all been there. You launch a campaign, or maybe you get lucky and hit the front page of Digg. Traffic spikes. Suddenly, your SSH session lags. Your monitoring alerts stop sending because the mail queue is choked. Then, silence. Your Apache `MaxClients` limit was hit, the server started swapping to disk, and the OOM killer started sacrificing processes like a panicked deity.
If you are running a business-critical application in 2011, relying on a single server is not a strategy; it is a gamble. The solution is horizontal scaling, and the weapon of choice is HAProxy.
While hardware load balancers like the F5 BIG-IP cost more than a luxury car, HAProxy delivers equivalent performance on commodity hardware—or better yet, on a high-performance VPS.
Why HAProxy?
Unlike Apache, which creates a thread or process for every connection (eating RAM for breakfast), HAProxy uses an event-driven, single-process model. It can handle tens of thousands of concurrent connections without exhausting your server's memory. This is why we use it at CoolVDS to route traffic for our own infrastructure.
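You can see the difference on a box running both by comparing resident memory. A rough sketch (assuming a CentOS-style `httpd`; adjust the process name to your setup):

```
# Total resident memory (KB) across all Apache processes
ps -C httpd -o rss= | awk '{sum+=$1} END {print sum " KB in " NR " processes"}'

# HAProxy handles the same connection load in a single process
ps -C haproxy -o rss=,args=
```

On a busy prefork Apache, the first number grows with every concurrent connection; HAProxy's footprint stays nearly flat.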
The Architecture
We are going to move from a single point of failure to a robust cluster:
- Load Balancer: One CoolVDS instance running HAProxy (CentOS 5.5).
- Web Nodes: Two (or more) Apache/Nginx backends serving your PHP/Python app.
- Database: A separate MySQL master (but that is a topic for another article).
Configuration: The "Meat" of the Setup
First, install HAProxy. On RHEL/CentOS 5, you might need the EPEL repository, as the base repo often lags behind.
```
rpm -Uvh http://download.fedora.redhat.com/pub/epel/5/i386/epel-release-5-4.noarch.rpm
yum install haproxy
```

Now, let's look at a production-ready `haproxy.cfg`. This is not the default config; this is tuned for survival.
```
global
    log 127.0.0.1 local0
    maxconn 4096
    user haproxy
    group haproxy
    daemon

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    retries 3
    option  redispatch
    maxconn 2000
    contimeout  5000
    clitimeout  50000
    srvtimeout  50000

listen webfarm 0.0.0.0:80
    mode http
    stats enable
    stats uri /haproxy?stats
    balance roundrobin
    cookie JSESSIONID prefix
    option httpclose
    option forwardfor
    server web01 192.168.1.10:80 cookie A check
    server web02 192.168.1.11:80 cookie B check
```
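Two notes before you reload. The `log 127.0.0.1 local0` line only works if your syslog daemon accepts UDP messages, which the stock CentOS 5 sysklogd does not do by default. And always validate the file before restarting; HAProxy's check mode is cheap insurance. While you are in there, consider locking down the stats page with `stats auth user:password`. A sketch for the stock CentOS 5 layout (adjust paths to your system):

```
# Let syslogd receive UDP (CentOS 5 / sysklogd):
#   in /etc/sysconfig/syslog set: SYSLOGD_OPTIONS="-m 0 -r"
#   then route the facility in /etc/syslog.conf:
#     local0.*    /var/log/haproxy.log
service syslog restart

# Validate the config, then start and enable at boot
haproxy -c -f /etc/haproxy/haproxy.cfg
service haproxy start
chkconfig haproxy on
```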
Pro Tip: Notice `option forwardfor`. Without this, your backend web servers will see the IP address of the load balancer instead of the actual visitor. This inserts the `X-Forwarded-For` header so your analytics logs don't look like all your traffic is coming from localhost.
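The header only helps if your backends actually log it. On Apache, for instance, swap `%h` for the forwarded address in your `LogFormat` (a sketch; the format name `combined_xff` is an arbitrary label):

```
LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined_xff
CustomLog /var/log/httpd/access_log combined_xff
```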
The SSL Elephant in the Room
Here is the hard truth: HAProxy 1.4 does not support native SSL termination. I see developers confused by this constantly. If you need HTTPS (and you should), you have two options in 2011:
- Stunnel: Run `stunnel` on the load balancer to decrypt traffic on port 443 and pass it to HAProxy on localhost (see the sketch after this list).
- Nginx Frontend: Use Nginx for SSL offloading and proxy the cleartext to HAProxy.
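For the record, the stunnel route is only a handful of lines. A minimal `stunnel.conf` sketch (certificate path illustrative; HAProxy assumed on port 80 as configured above):

```
; /etc/stunnel/stunnel.conf
cert = /etc/stunnel/stunnel.pem

[https]
accept  = 443
connect = 127.0.0.1:80
```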
In high-load environments, we prefer the Nginx approach. It handles the SSL handshake efficiently before passing the request to the HAProxy logic.
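A stripped-down sketch of that Nginx frontend (HAProxy assumed on port 80 of the same box; certificate paths are illustrative):

```
server {
    listen 443;
    ssl on;
    ssl_certificate      /etc/nginx/ssl/example.no.crt;
    ssl_certificate_key  /etc/nginx/ssl/example.no.key;

    location / {
        proxy_pass http://127.0.0.1:80;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

With this layout HAProxy sees Nginx as the client, so treat the Nginx-set `X-Forwarded-For` as the authoritative source of the visitor IP in your backend logs.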
The "Norwegian" Context: Latency and Laws
When you are balancing traffic, network latency becomes your new bottleneck. Every request hits your balancer, then the backend, then back. If your VPS provider is overselling bandwidth or routing your Oslo traffic through a datacenter in Frankfurt or Amsterdam, you are adding 40ms to every round trip unnecessarily.
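Before blaming your code, measure the path. A quick check from a machine near your users (hostnames are placeholders):

```
# Round-trip time to the balancer
ping -c 20 lb.example.no

# Where do the packets actually go? Watch for detours via DE/NL hops.
traceroute lb.example.no
```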
This is where CoolVDS differs from the budget hosts. Our infrastructure is peered directly at NIX (Norwegian Internet Exchange). If your target audience is in Norway, your packets stay in Norway. This isn't just about speed; it's about satisfying Datatilsynet. Under the Personal Data Act (Personopplysningsloven), knowing exactly where your data physically resides is paramount for compliance.
Hardware Matters: IOPS or Death
Even the best load balancer config cannot save you if your disk I/O is thrashing. Many VPS providers put you on crowded SATA nodes. When a neighbor starts a backup, your load balancer chokes on logging.
At CoolVDS, we are rolling out RAID10 SAS 15k RPM storage arrays and experimenting with early-generation Enterprise SSD caching. We isolate I/O so your load balancer can write logs and handle state without waiting on a spinning disk.
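If you suspect you are on a crowded node, it is easy to verify. With the `sysstat` package installed (`yum install sysstat`):

```
# Extended device stats every 5 seconds; high await/%util while your own
# traffic is low is the signature of a noisy neighbor.
iostat -x 5
```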
War Story: The Magento Meltdown
Last month, we helped a client migrate a Magento store. They were using plain `roundrobin` balancing. Users were adding items to their cart, refreshing the page, and watching the cart empty. Why? Because the next request hit a different server, one that didn't have the PHP session file.
The Fix: We switched to `cookie JSESSIONID prefix` (as seen in the config above). This ensures sticky sessions: HAProxy prepends a server identifier to the session cookie, so the user stays on `web01` unless that server dies. Uptime was saved, sales recovered.
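One caveat: `JSESSIONID` is a Java servlet convention. For a PHP stack like Magento, prefix the cookie PHP actually sets (a sketch of just the relevant lines):

```
balance roundrobin
cookie PHPSESSID prefix
server web01 192.168.1.10:80 cookie web01 check
server web02 192.168.1.11:80 cookie web02 check
```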
Final Thoughts
Load balancing is not just for Google or Facebook. With tools like HAProxy and solid VPS hosting, you can build a fault-tolerant architecture today. Don't wait for your server to crash to realize you need redundancy.
Need a sandbox to test your cluster? Deploy a CoolVDS instance in Oslo. We offer genuine Xen virtualization (no noisy neighbors stealing your CPU) and the lowest latency to the Norwegian market.