
Scaling Past the Single Server: High-Availability Load Balancing with HAProxy on CentOS 6

The Myth of the "Unlimited" Single Server

I still see it every week. A promising startup in Oslo buys a massive dedicated server, piles Apache, MySQL, and PHP onto the same metal, and crosses their fingers. They think 32GB of RAM makes them invincible. It doesn't. I have seen servers with 128GB of RAM grind to a halt because a single unoptimized query locked the MyISAM tables, or because Apache processes ate every scrap of CPU during a marketing push.

If you are serious about uptime, horizontal scaling isn't an option; it's a requirement. You need to decouple your traffic ingress from your application logic. In 2012, hardware load balancers like F5 Big-IP or Citrix NetScaler are still the standard for banks, but for the rest of us? They are overkill, overpriced, and inflexible.

Enter HAProxy. It is free, open-source, and frankly, it routes packets faster than most proprietary hardware. I have pushed HAProxy 1.4 to saturation on a dual-core VPS and it didn't even blink. Today, I'm going to show you how to set up a bulletproof load balancing layer that keeps your latency low and your uptime high, specifically tailored for the Nordic infrastructure landscape.

Why HAProxy Beats Nginx for Balancing (Right Now)

I know what you're thinking: "Why not just use the Nginx upstream module?" Nginx is fantastic, and I use it on every web node. But as a pure load balancer, HAProxy 1.4 still has the edge in observability and granular health checking. It lets you watch the status of every backend server in real time via a web interface, and its queue management is superior when backends get saturated.

Pro Tip: Never expose your database directly to the internet. Your architecture should look like this: Public VIP -> HAProxy -> Private Network -> Web Nodes -> Database Cluster. If you are hosting on CoolVDS, utilize the private VLANs to keep backend traffic unmetered and secure from the public interface.
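
As a rough illustration, firewall rules on each web node can enforce that split. This is only a sketch: it assumes eth1 is the private VLAN interface and 10.0.0.1 is the balancer's private address, so adapt it to your own addressing.

# On each web node: accept HTTP only from the load balancer's private IP,
# then drop HTTP from everywhere else (eth1 and 10.0.0.1 are assumptions)
iptables -A INPUT -i eth1 -p tcp --dport 80 -s 10.0.0.1 -j ACCEPT
iptables -A INPUT -p tcp --dport 80 -j DROP
# Persist the rules across reboots on CentOS 6
service iptables save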

Step 1: The Environment & Kernel Tuning

We are deploying this on CentOS 6.3. Before we even touch the application, we need to prep the kernel. Linux defaults are conservative; they assume you are running a desktop, not a packet shovel. If you don't tune your sysctl, you will run out of ephemeral ports or hit connection tracking limits during a DDoS.

Open /etc/sysctl.conf and add the lines below. This widens the ephemeral port range and lets sockets stuck in TIME_WAIT be reused for new outbound connections, which is crucial for a high-traffic balancer.

# /etc/sysctl.conf
# Increase system file descriptor limit
fs.file-max = 100000

# Allow more connections to be handled
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.tcp_max_tw_buckets = 262144

# Reuse sockets in TIME_WAIT state for new connections
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 15

# Ephemeral port range
net.ipv4.ip_local_port_range = 1024 65535

Apply these changes with sysctl -p.
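
If you want proof that the values actually took, read them back after loading the file, for example:

# Reload /etc/sysctl.conf; each key is printed as it is applied
sysctl -p

# Spot-check a couple of the new values
sysctl net.ipv4.ip_local_port_range
sysctl net.ipv4.tcp_tw_reuse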

Step 2: Installing HAProxy 1.4

The standard repositories are often outdated. Ensure you have the EPEL (Extra Packages for Enterprise Linux) repository enabled to get a reasonably recent version.

# Enable EPEL; if epel-release is not offered by your configured repos,
# install the EPEL release RPM for EL6 from a Fedora mirror instead
yum install epel-release
yum install haproxy
# Make sure HAProxy comes back after a reboot
chkconfig haproxy on
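
Once the package is installed, confirm which build you actually got before writing any configuration:

# Show the installed package and HAProxy's version plus build options
rpm -q haproxy
haproxy -vv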

Step 3: The Configuration Strategy

This is where the magic happens. We want HAProxy to terminate incoming HTTP traffic, run the backend health checks, and send each request to whichever server is least busy.

Edit /etc/haproxy/haproxy.cfg. I’m stripping out the defaults for clarity, but here is a production-ready block for a standard web cluster.

global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    # admin stats socket, used by the stats page and for runtime queries
    stats socket /var/lib/haproxy/stats

defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

frontend  main_http_in
    bind *:80
    # ACL example: route static assets to a dedicated backend if needed
    acl url_static       path_beg       -i /static /images /javascript /stylesheets
    acl url_static       path_end       -i .jpg .gif .png .css .js
    # use_backend static_servers if url_static

    default_backend             app_servers

backend app_servers
    mode        http
    balance     leastconn
    # inter 2000: check every 2s; fall 5: mark down after 5 failures; rise 2: back up after 2 passes
    server  web01 10.0.0.2:80 check inter 2000 rise 2 fall 5
    server  web02 10.0.0.3:80 check inter 2000 rise 2 fall 5
    
    # Enable the stats page (password-protect this!)
    stats enable
    stats uri /haproxy?stats
    stats auth admin:SuperSecurePass123
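
Before touching the running service, let HAProxy parse the file itself. And since we enabled the admin socket in the global section, you can pull the same counters the stats page shows straight from the command line; install socat from the repositories if you do not already have it.

# Syntax-check the configuration
haproxy -c -f /etc/haproxy/haproxy.cfg

# Apply it; reload is graceful, the old process finishes its connections
service haproxy reload

# Query the admin stats socket declared in the global section
yum install socat
echo "show stat" | socat unix-connect:/var/lib/haproxy/stats stdio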

Algorithm Choice: Round Robin vs. Leastconn

Most tutorials tell you to use roundrobin. They are wrong. For web applications (PHP/Python/Ruby), requests are not equal. One request might be a 5ms static file hit; the next might be a 500ms report generation. If you use round robin, you risk piling heavy requests onto the same server.

Use leastconn. It routes new traffic to the server with the fewest active connections. This naturally balances the load based on actual server capacity rather than just counting requests.
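
One refinement: if your web nodes are not identical, combine leastconn with per-server weights so the bigger box takes a proportionally larger share. A sketch based on the backend above, assuming web01 has roughly twice the capacity of web02:

backend app_servers
    mode        http
    balance     leastconn
    # weight is relative: web01 gets roughly twice as many new connections as web02
    server  web01 10.0.0.2:80 check inter 2000 rise 2 fall 5 weight 100
    server  web02 10.0.0.3:80 check inter 2000 rise 2 fall 5 weight 50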

Latency and The Nordic Context

If your target audience is in Norway, latency matters. Routing traffic through a data center in Frankfurt or Amsterdam adds 20-30ms to your Round Trip Time (RTT). That doesn't sound like much, but with the TCP handshake and SSL negotiation, it adds up to a sluggish feeling for the end user.

Hosting locally in Norway, or at least on high-quality transit near the NIX (Norwegian Internet Exchange), is critical. When you use CoolVDS, you aren't just getting a virtual slice; you are getting proximity to the backbone. We see ping times to Oslo DSL lines as low as 4-8ms.
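
Don't take anyone's word for it, ours included: measure the round trip from where your users actually sit. The hostname below is just a placeholder for your own balancer's public address.

# Round-trip time from a Norwegian client to your VIP (hostname is a placeholder)
ping -c 10 lb.example.com

# Per-hop view of where the latency is being added
mtr --report --report-cycles 10 lb.example.com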

Furthermore, we have to talk about compliance. The Norwegian Personopplysningsloven (Personal Data Act) and the EU Data Protection Directive are strict. If you are handling sensitive customer data, keeping that data on servers physically located within the EEA (and ideally Norway) simplifies your legal standing with Datatilsynet. Using a US-based cloud provider often introduces Safe Harbor complexities that a local pragmatic CTO would rather avoid.

Storage IO and the Bottleneck

Your load balancer is CPU- and network-bound. Your backend web servers, however, are almost always I/O bound. This is where the hardware underneath your VPS matters.

In 2012, standard hosting is still dominated by 15k RPM SAS drives in RAID 10. They are reliable, but they seek. When your MySQL database gets hit by random reads, those mechanical arms can't keep up.

At CoolVDS, we are aggressive about adopting Solid State technology. While the industry is buzzing about the new NVMe storage specification (PCIe-based flash), mass adoption is still on the horizon. However, we already utilize enterprise-grade SSDs in our host nodes. This reduces I/O latency from milliseconds to microseconds. When your backend servers are on SSD, they clear requests faster, meaning your HAProxy queue stays empty.
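
You can see the difference yourself with a small random-read test on a backend node. A rough sketch using fio (packaged in EPEL); the file name and sizes are arbitrary, and you should point it at the volume your database actually lives on:

# 4K random reads with direct I/O for 30 seconds; compare latency on SAS vs SSD
fio --name=randread --filename=/root/fio.test \
    --rw=randread --bs=4k --size=1G --direct=1 \
    --ioengine=libaio --runtime=30 --time_based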

High Availability via Keepalived

There is a flaw in the setup I described above: The Load Balancer itself is a Single Point of Failure (SPOF). If the HAProxy VPS dies, your site goes dark.

The solution is a Floating IP (VIP) managed by Keepalived using VRRP. You set up two HAProxy nodes: Master and Backup.
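
With EPEL already enabled from earlier, getting Keepalived onto both balancer nodes is the usual yum routine:

yum install keepalived
chkconfig keepalived on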

# /etc/keepalived/keepalived.conf (On Master)
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 101
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.10.50
    }
}

If the Master stops sending VRRP advertisements, the Backup node (priority 100) takes over 192.168.10.50 within a few seconds (roughly three missed advertisement intervals at advert_int 1). Your users never know the difference.
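
The Backup node runs an almost identical keepalived.conf; only state and priority change, while the virtual_router_id, authentication block, and VIP must match the Master exactly:

# /etc/keepalived/keepalived.conf (On Backup)
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.10.50
    }
}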

Conclusion

Building a scalable infrastructure isn't about buying the biggest server; it's about smart architecture. By placing HAProxy in front of your web tier, you gain stability, maintenance flexibility (you can take one web node down without downtime), and better performance.

Don't let legacy rotating rust storage or distant data centers kill your user experience. If you need a sandbox to test this configuration, deploy a CoolVDS instance today. With our pure SSD storage and low-latency network, your haproxy.cfg will finally have the hardware it deserves.