
Surviving the Slashdot Effect: High-Availability Load Balancing with HAProxy on CentOS 6


It’s 3:00 AM. Your phone buzzes. Nagios is screaming that your primary web server is down. You SSH in, only to find the load average hitting 50.0 and Apache processes stuck in a zombie state. Your marketing team did a great job—too great. You just got featured on a major news site, and your single server architecture just folded like a cheap lawn chair.

I’ve been there. Relying on vertical scaling—just adding more RAM or CPU to a single box—is a dead end. Eventually, you hit a hardware ceiling, or worse, a single point of failure.

The solution isn't to buy a $20,000 Cisco hardware load balancer. The solution is HAProxy. It is free, open-source, and frankly, it handles traffic better than most commercial appliances I've worked with. Today, we are going to look at how to set up a robust load balancer on CentOS 6 to distribute traffic across multiple web nodes. This is the exact setup we use to keep uptime high for our clients at CoolVDS.

Why HAProxy?

HAProxy (High Availability Proxy) 1.4 is the current stable standard for a reason. It is a single-threaded, event-driven, non-blocking engine. In plain English? It can handle tens of thousands of concurrent connections without eating up all your RAM.

Unlike Apache, which creates a new thread or process for every connection (eating memory), HAProxy forwards packets with incredible efficiency. It sits in front of your web servers, takes the hit from the internet, and gently distributes valid requests to your backend servers.

Pro Tip: Don't run your load balancer on the same physical disk as your database. Disk I/O contention will kill your throughput. At CoolVDS, we separate these concerns on the hypervisor level, ensuring your network ops don't fight your storage ops.

The Architecture

We are going to build a simple but powerful cluster:

  • Load Balancer (LB01): Public IP, running HAProxy.
  • Web Node A (WEB01): Internal IP, running Apache/Nginx.
  • Web Node B (WEB02): Internal IP, running Apache/Nginx.

The goal? If WEB01 dies, HAProxy automatically routes everything to WEB02. Your users won't even notice.
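That failover behaviour is easier to reason about with a concrete model. The sketch below is a toy Python illustration of the routing logic, not HAProxy's actual implementation: rotate through the pool, skipping any node whose health check has failed.

```python
# Toy model of round-robin balancing with failover -- an illustration
# of the routing behaviour, not HAProxy's real implementation.
import itertools

healthy = {"web01": True, "web02": True}   # node name -> passed last check?
rotation = itertools.cycle(healthy)        # cycles over the server names

def pick_server():
    """Return the next healthy server; skip nodes that failed their check."""
    for _ in range(len(healthy)):
        candidate = next(rotation)
        if healthy[candidate]:
            return candidate
    raise RuntimeError("no healthy backends left")

assert pick_server() == "web01"
assert pick_server() == "web02"
healthy["web01"] = False            # WEB01 dies mid-traffic...
assert pick_server() == "web02"     # ...requests keep flowing to WEB02
```

The real balancer layers timeouts, weights, and rise/fall counters on top of this, but the core idea is exactly that simple.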

Configuration Breakdown

First, install HAProxy from the EPEL repository on your CentOS 6 VPS (make sure the epel-release package is installed first):

yum install haproxy

Now, let's look at the config file located at /etc/haproxy/haproxy.cfg. The defaults are usually garbage for high traffic, so we are going to strip it down and build it up properly.

1. The Global & Defaults Section

This handles process management and default timeouts (the bare timeout values are in milliseconds).

global
    log 127.0.0.1 local0
    maxconn 4096
    user haproxy
    group haproxy
    daemon

defaults
    log global
    mode http
    option httplog
    option dontlognull
    retries 3
    option redispatch
    maxconn 2000
    timeout connect 5000
    timeout client 50000
    timeout server 50000

2. The Frontend (The Listener)

This is where traffic comes in. We bind it to port 80.

frontend http-in
    bind *:80
    default_backend web_servers

3. The Backend (The Workers)

This is where the magic happens. We define our balancing algorithm and our servers.

backend web_servers
    mode http
    balance roundrobin
    option httpchk HEAD / HTTP/1.1\r\nHost:localhost
    server web01 10.0.0.2:80 check
    server web02 10.0.0.3:80 check

Here is what those flags mean:

  • balance roundrobin: Distributes requests sequentially. First request to web01, second to web02, and so on.
  • option httpchk: This is critical. Instead of only checking that the port accepts connections, HAProxy sends a real HTTP request and verifies that your web server returns a valid response. If the server is "up" (pingable) but Apache is hung, HAProxy detects the failed check and removes the node from the pool.
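The health check itself is just an HTTP request. Here is a rough Python equivalent of what option httpchk does, as a hypothetical helper for illustration (HAProxy additionally applies its own check intervals and rise/fall counters):

```python
import http.client

def node_is_healthy(host, port, timeout=5):
    """Send a HEAD / probe, like option httpchk, and treat 2xx/3xx as up."""
    try:
        conn = http.client.HTTPConnection(host, port, timeout=timeout)
        conn.request("HEAD", "/", headers={"Host": "localhost"})
        status = conn.getresponse().status
        conn.close()
        return 200 <= status < 400
    except OSError:
        return False  # connection refused or timed out -> node is down
```

A hung Apache that accepts TCP connections but never answers would time out here and be marked down, which is exactly the failure mode a plain ping-based check misses.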

Latency Matters: The Norwegian Context

Configuration is only half the battle. The other half is physics. If your load balancer is in Oslo but your web nodes are in a datacenter in Amsterdam, the round-trip time (RTT) between the balancer and the nodes will add significant delay to every single request.
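The arithmetic is unforgiving. Assume a 30 ms RTT between Oslo and Amsterdam (a typical figure, used here purely for illustration) and a handful of round trips per request; the penalty compounds quickly:

```python
# Back-of-the-envelope latency cost of a distant backend.
# All numbers below are illustrative assumptions, not measurements.
rtt_ms = 30          # assumed Oslo <-> Amsterdam round-trip time
round_trips = 3      # rough: TCP handshake + HTTP request/response + ACKs
assets = 20          # requests in a typical page (HTML, CSS, JS, images)

added_per_request = rtt_ms * round_trips        # 90 ms per request
added_per_page = added_per_request * assets     # 1800 ms per page load
print(f"~{added_per_page} ms of avoidable latency per page load")
```

Keep-alive and pipelining soften this somewhat, but the point stands: a sub-millisecond internal RTT makes the whole problem disappear.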

For Norwegian businesses subject to Personopplysningsloven (Personal Data Act), keeping data within national borders is also a massive compliance plus. Datatilsynet is increasingly strict about where customer data lives.

This is where infrastructure choice dictates performance. When you provision VPS instances with CoolVDS, you aren't just getting a virtual slice; you are getting proximity to the NIX (Norwegian Internet Exchange). We ensure that the internal latency between your load balancer and your web nodes is negligible—often under 1ms. You can't configure your way out of bad routing.

Handling "Sticky" Sessions

If you are running an e-commerce site (like Magento or OsCommerce), roundrobin can break shopping carts. If a user adds an item on WEB01, and the next click sends them to WEB02, their session might be lost.

To fix this, inject a cookie to maintain session persistence:

backend web_servers
    mode http
    balance roundrobin
    cookie SERVERID insert indirect nocache
    server web01 10.0.0.2:80 check cookie s1
    server web02 10.0.0.3:80 check cookie s2

Now, HAProxy adds a SERVERID cookie to the user's browser. As long as that cookie exists, they stick to the same server. If that server dies, HAProxy re-routes them to a healthy node automatically.
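In Python terms, the cookie logic looks roughly like this (a simplified model of insert indirect, ignoring the nocache and dead-node re-dispatch details):

```python
# Simplified model of cookie-based session persistence -- illustration only.
import itertools

backends = {"s1": "web01", "s2": "web02"}      # cookie value -> server
rotation = itertools.cycle(backends.items())

def route(request_cookies):
    """Honour an existing SERVERID cookie, else assign one round-robin."""
    sticky = request_cookies.get("SERVERID")
    if sticky in backends:
        return backends[sticky], {}            # stick to the same node
    cookie_value, server = next(rotation)
    return server, {"SERVERID": cookie_value}  # tell the browser to remember

server, set_cookie = route({})        # first visit: assigned a server + cookie
server2, _ = route(set_cookie)        # next click: cookie pins the same server
assert server == server2
```

The shopping cart survives because every subsequent click carries the cookie back, and the balancer honours it until that node actually fails its health check.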

Monitoring Your Cluster

You can't manage what you can't see. HAProxy 1.4 includes a built-in stats page. Add this to your config:

listen stats *:1936
    stats enable
    stats uri /
    stats hide-version
    stats auth admin:supersecretpassword

Visit http://your-ip:1936/ and you get a real-time dashboard showing sessions, downtime, and server health. It's ugly, but it's honest data.

Summary

Moving from a single server to a load-balanced cluster is the most significant step you can take for reliability. It allows you to perform maintenance on one server while the other handles traffic. It lets you sleep at night.

But remember, software is only as fast as the hardware beneath it. Spinning disk drives (HDD) are often the bottleneck in high-concurrency setups. While SSDs are still premium in the enterprise space, moving your database and high-traffic nodes to solid-state storage drastically reduces I/O wait times.

Ready to stop fearing the 3 AM wake-up call? Deploy your load balancer on a network built for stability. Spin up a test instance on CoolVDS today and see the difference low latency makes.
