Scaling Past the Limit: High-Availability Load Balancing with HAProxy 1.4
It starts with a slow page load. Then, a timeout. Finally, your Apache error logs fill up with "MaxClients reached" and your phone starts buzzing at 3:00 AM. If you are running a high-traffic site in Norway, whether it's a growing e-commerce store or a busy vBulletin forum, relying on a single server is a gamble you will eventually lose.
Many admins try to solve this by vertically scaling: upgrading to a larger VPS with more RAM. That works, until it doesn't. The real solution is horizontal scaling. Today, I’m going to show you how to sit a robust load balancer in front of your web nodes using HAProxy 1.4. This is the exact setup we use to keep uptime at 99.99% for mission-critical clients.
The Architecture: Why HAProxy?
HAProxy (High Availability Proxy) is the de facto standard for open-source load balancing. It is incredibly stable and can push tens of thousands of concurrent connections without breaking a sweat. Unlike hardware load balancers (an F5 BIG-IP costs more than a nice car), HAProxy runs beautifully on a lean Linux VPS.
In this setup, we will use three CoolVDS instances:
- Load Balancer (LB01): Runs HAProxy. Public facing IP.
- Web Node A (Web01): Runs Apache/Nginx. Private Network IP.
- Web Node B (Web02): Runs Apache/Nginx. Private Network IP.
Pro Tip: Network latency matters. When choosing a provider, ensure they offer a low-latency private backend network. At CoolVDS, our internal switching ensures that traffic between your Load Balancer and Web Nodes stays within the datacenter, keeping latency virtually non-existent. This is crucial for performance.
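You can sanity-check that backend link quality yourself from LB01 with a plain ping to each node's private address (the 10.0.0.x addresses below match the backend definitions used later in this article; substitute your own):

```shell
# RTT over the private network; inside a single datacenter
# this should sit well under a millisecond.
ping -c 5 10.0.0.2
ping -c 5 10.0.0.3
```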
Step 1: Installing HAProxy
Assuming you are on a CentOS 5.5 or 6 box (the standard enterprise choice), let's grab the package. It lives in the EPEL repository; note that the release RPM below is for EL5, so on CentOS 6 install the matching el6 epel-release instead.
rpm -Uvh http://download.fedora.redhat.com/pub/epel/5/i386/epel-release-5-4.noarch.rpm
yum install haproxy
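Once the install finishes, it is worth confirming the binary works and making sure the service survives a reboot. These commands assume the stock init script shipped by the EPEL package:

```shell
# Confirm the installed version (should report 1.4.x)
haproxy -v
# Register HAProxy to start automatically at boot
chkconfig haproxy on
chkconfig --list haproxy
```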
Step 2: The Configuration That Matters
The default config is useless for high-traffic web serving. Open /etc/haproxy/haproxy.cfg. We need to configure it for Layer 7 (HTTP) processing so we can inspect headers if needed.
Here is a battle-tested configuration block:
global
    log 127.0.0.1 local0
    maxconn 4096
    user haproxy
    group haproxy
    daemon

defaults
    log global
    mode http
    option httplog
    option dontlognull
    retries 3
    option redispatch
    maxconn 2000
    contimeout 5000
    clitimeout 50000
    srvtimeout 50000

frontend http_front
    bind *:80
    default_backend web_cluster

backend web_cluster
    balance roundrobin
    option httpclose
    option forwardfor
    cookie SERVERID insert indirect nocache
    server web01 10.0.0.2:80 check cookie s1
    server web02 10.0.0.3:80 check cookie s2
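Before loading any new configuration, let HAProxy parse it in check mode. The -c flag validates the file and exits without touching a running instance, so a typo never takes the balancer down:

```shell
# Parse-only check: exits non-zero and prints the offending
# line if the configuration is broken.
haproxy -c -f /etc/haproxy/haproxy.cfg
```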
Breaking Down the Magic
- balance roundrobin: Distributes requests sequentially. Request 1 goes to Web01, Request 2 to Web02. Simple and effective.
- option forwardfor: This is critical. Without it, your web servers only see the IP of the load balancer. This flag passes the real client IP in the X-Forwarded-For header.
- cookie SERVERID: Enables session stickiness. If a user logs into your shop on Web01, we want them to stay on Web01 so they don't lose their cart.
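With option forwardfor in place, the backends still need to be told to log the forwarded address. A minimal Apache tweak, assuming the stock combined log format, looks like this (the combined_xff nickname is just an example):

```apache
# httpd.conf on Web01/Web02: log the client IP that HAProxy
# forwards instead of the balancer's private address.
LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined_xff
CustomLog logs/access_log combined_xff
```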
Hardware Constraints: The HDD Bottleneck
Even with a perfect load balancer, your database will eventually become the bottleneck. In 2011, disk I/O is the single biggest performance killer. Most providers cram you onto overloaded SATA spinning disks.
For high-load environments, we recommend seeking out hosting that offers SSD storage or high-RPM SAS arrays. The difference in random read/write speeds is night and day. If your database can't read the session table fast enough, the load balancer will just serve 503 errors faster.
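A crude way to gauge what you are actually getting from a host's disks is a sequential throughput test. These numbers say nothing about random I/O, which is what really hurts databases, but they expose an oversubscribed node quickly (adjust the device name for your system):

```shell
# Sequential write, bypassing the page cache via O_DIRECT
dd if=/dev/zero of=/tmp/ddtest bs=1M count=512 oflag=direct
# Buffered sequential read from the disk
hdparm -t /dev/sda
rm -f /tmp/ddtest
```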
Data Sovereignty and The "Datatilsynet" Factor
If you are hosting data for Norwegian citizens, you are dealing with the Personopplysningsloven (Personal Data Act). Hosting outside the EEA or in the US (under the Patriot Act) introduces legal headaches regarding data privacy.
By keeping your infrastructure local, such as on a Norway-based VPS, you simplify compliance with the Data Inspectorate (Datatilsynet). Plus, the latency to the Norwegian Internet Exchange (NIX) in Oslo is unbeatable. Why route your local traffic through Frankfurt?
Testing the Setup
Start HAProxy:
service haproxy start
Now, tail the logs on both web servers. When you refresh your browser, you should see the requests toggling between Web01 and Web02. If one server goes down (simulate this by stopping Apache on Web01), HAProxy detects the failure via the check parameter and instantly redirects traffic to Web02.
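If you prefer the command line to a browser, a short curl loop against the balancer's public IP makes the round-robin behaviour easy to see (203.0.113.10 below is a placeholder; use your own address). Note that curl sends no cookies by default, so the SERVERID stickiness does not kick in and requests should alternate between nodes:

```shell
# Six requests; watch the access logs toggle between Web01 and Web02.
for i in $(seq 1 6); do
    curl -s -o /dev/null -w "%{http_code} %{time_total}s\n" http://203.0.113.10/
done
```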
The Verdict
Complexity is the price of stability. Moving from a single box to a load-balanced cluster requires more management, but the peace of mind is worth it. You eliminate the single point of failure and gain the ability to perform maintenance on one node while the other keeps serving traffic.
Ready to build a cluster that doesn't sleep? Deploy a high-performance instance on CoolVDS today. With our DDoS protection and pure SSD options, your infrastructure will be ready for anything the internet throws at it.