
Surviving the Slashdot Effect: High Availability with HAProxy 1.4 on Linux


Stop Praying for Uptime. Engineer It.

It happens every time. Marketing launches a campaign, a link hits the front page of Digg or a popular Norwegian news site like VG.no, and suddenly your server load spikes to 50. SSH becomes unresponsive. Apache processes deadlock. The site goes dark.

If you are still running a high-traffic e-commerce site or a media portal on a single LAMP stack box, you are gambling with your revenue. In 2010, hardware failure isn't a question of if, but when. And traffic spikes? They are the goal, not the enemy.

I've spent the last week debugging a Magento setup that crumbled under 500 concurrent users. The fix wasn't more RAM. It was architecture. Specifically, putting HAProxy 1.4 in front of a cluster of web nodes.

Why HAProxy?

Hardware load balancers like the F5 BIG-IP are fantastic if you have $20,000 to burn. For the rest of us, there is HAProxy. It is free, open source, and handles tens of thousands of concurrent connections without eating your CPU alive. Unlike Apache, which is process-heavy (especially with mod_php), HAProxy uses a single-threaded, event-driven model.

With the release of version 1.4 earlier this year, we finally got native client-side keep-alive support, making it a viable alternative to Nginx for reverse proxying.
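In practice, that keep-alive support means adding one line to your defaults section. A minimal sketch based on the 1.4 documentation:

```
defaults
    mode http
    # New in 1.4: keep the client-side connection alive while
    # closing the server side after each response. Clients get
    # keep-alive without tying up backend Apache processes.
    option http-server-close
```

This is usually the mode you want in front of Apache: the expensive TCP handshakes happen between browser and HAProxy, not against your web nodes.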

The Architecture

Here is the setup we deploy for clients needing 99.99% uptime:

  • Frontend: 2x CoolVDS KVM instances running HAProxy + Keepalived (for VRRP IP failover).
  • Application Layer: 2+ Apache/PHP web servers.
  • Database: MySQL Master-Slave replication.
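The Keepalived half of that frontend pair deserves a quick illustration. Here is a minimal /etc/keepalived/keepalived.conf sketch for the master node; the interface name, router ID, password, and the 10.0.0.1 floating IP are placeholders for your own values:

```
vrrp_instance VI_1 {
    state MASTER              # the standby node says BACKUP here
    interface eth0            # placeholder: your public interface
    virtual_router_id 51      # must match on both nodes
    priority 101              # standby uses a lower value, e.g. 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass s3cr3t      # placeholder: pick your own
    }
    virtual_ipaddress {
        10.0.0.1              # the floating IP your DNS points at
    }
}
```

If the master dies, the backup stops seeing VRRP advertisements and claims the floating IP within a couple of seconds. Your DNS never has to change.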

Configuration: The Meat and Potatoes

Let's assume you are running Ubuntu 10.04 LTS (Lucid Lynx). First, grab the package:

apt-get install haproxy

One Lucid gotcha: the package ships disabled. Set ENABLED=1 in /etc/default/haproxy, or the init script will silently refuse to start the daemon.

Out of the box, the config is sparse. Here is a battle-tested /etc/haproxy/haproxy.cfg snippet for a standard web cluster. We are using Layer 7 (HTTP) balancing here to allow for sticky sessions, which are crucial for shopping carts. After any change, validate the file with haproxy -c -f /etc/haproxy/haproxy.cfg before reloading.

global
    log 127.0.0.1   local0
    maxconn 4096
    user haproxy
    group haproxy
    daemon

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    retries 3
    option redispatch
    maxconn 2000
    contimeout 5000
    clitimeout 50000
    srvtimeout 50000

frontend http_front
    bind *:80
    default_backend web_cluster

backend web_cluster
    mode http
    balance roundrobin
    option httpchk HEAD /health_check.php HTTP/1.0
    cookie SERVERID insert indirect nocache
    server web01 10.0.0.2:80 check cookie s1
    server web02 10.0.0.3:80 check cookie s2
Pro Tip: Notice the option httpchk line? Don't just check port 80. Create a small PHP script (health_check.php) that tries to connect to the database. If MySQL is down, the web server should report a failure so HAProxy removes it from rotation. A zombie web server is worse than a dead one.
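A minimal health_check.php along those lines might look like this. The hostname and credentials are placeholders, and in practice you would use a dedicated low-privilege account:

```
<?php
// Placeholder host and credentials: use a read-only account.
$link = @mysql_connect('10.0.0.10', 'healthcheck', 'secret');
if ($link === false) {
    // A 503 tells HAProxy's httpchk this node is unhealthy.
    header('HTTP/1.0 503 Service Unavailable');
    echo 'DB DOWN';
} else {
    mysql_close($link);
    header('HTTP/1.0 200 OK');
    echo 'OK';
}
?>
```

HAProxy's httpchk treats any 2xx or 3xx response as healthy, so returning the 503 is enough to pull the node from rotation until the database comes back.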

Latency and Location: The Norwegian Context

You can have the best load balancer config in the world, but if your packets are routing through Frankfurt to get from Oslo to Bergen, your users will feel it. Latency kills conversion rates.

This is where infrastructure choice matters. When we provision nodes on CoolVDS, we are sitting directly on the Norwegian Internet Exchange (NIX). Pings to Telenor or NextGenTel users are often in the single digits.

Furthermore, under the Personopplysningsloven (Personal Data Act), hosting customer data within Norwegian borders simplifies legal compliance significantly compared to hosting in the US. The Datatilsynet is becoming increasingly strict about where sensitive data resides.

Virtualization Matters: OpenVZ vs. KVM

Not all VPSs are created equal. Many budget hosts use OpenVZ, a container technology in which every guest shares the host kernel, which makes it trivially easy to oversell memory and CPU. If a neighbor on the same physical node gets hit with a DDoS, your HAProxy instance will stutter because you have no guaranteed CPU time.

For a load balancer, jitter is unacceptable. This is why we stick to CoolVDS. They use KVM (Kernel-based Virtual Machine) virtualization. It treats your VPS like a real server with its own kernel and dedicated memory allocation. When you are balancing 5,000 requests per second, you need that raw stability.
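You can check what you were actually sold from inside the guest. OpenVZ exposes /proc/user_beancounters inside every container; a KVM guest has no such file. A quick sketch:

```shell
# Report whether this VPS looks like an OpenVZ container.
# OpenVZ exposes /proc/user_beancounters inside every container;
# KVM guests (and bare metal) do not have it.
detect_openvz() {
    proc_dir="${1:-/proc}"   # parameterised so it can be tested offline
    if [ -f "$proc_dir/user_beancounters" ]; then
        echo "openvz"
    else
        echo "not-openvz"
    fi
}

detect_openvz
```

It is a heuristic, not proof, but if that file exists, you are sharing a kernel with your neighbors.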

The Bottom Line

Redundancy is expensive, but downtime is more expensive. By splitting your traffic across two humble web servers behind an HAProxy node, you eliminate the single point of failure that keeps SysAdmins awake at night.

Don't wait for the crash. Spin up a KVM instance on CoolVDS today, install HAProxy, and sleep better knowing your infrastructure can take a punch.
