The Single Point of Failure Nightmare
We have all been there. It is 3:00 AM on a Tuesday. Your monitoring scripts are screaming because Apache has locked up. The kernel is killing processes to free up memory (OOM killer), and your client is calling because their e-commerce site is dead. Why? Because you tried to serve 5,000 concurrent connections from a single box. It is time to stop scaling vertically and start scaling horizontally.
In the enterprise world, managers throw $20,000 at an F5 BIG-IP hardware appliance and call it a day. We don't have that luxury, and frankly, we don't need it. Enter HAProxy (High Availability Proxy), the same open-source engine powering Reddit and Stack Overflow right now. If it can handle their traffic, it can handle your Norwegian web shop.
Architecture: The Reverse Proxy Pattern
The concept is simple but powerful. Instead of public traffic hitting your web server directly, it hits the HAProxy load balancer first. HAProxy then distributes these requests across a cluster of backend web servers (nodes).
This gives you two massive advantages:
- Resilience: If Web-Node-01 dies, HAProxy notices within a couple of failed health checks and reroutes traffic to Web-Node-02. Your visitors barely notice.
- Performance: You can offload SSL termination at the edge (HAProxy 1.4 does not speak SSL natively, so stunnel usually sits in front; a minimal sketch follows this list) and keep your backend servers focused purely on generating PHP or serving static assets.
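Here is a rough stunnel sketch for that SSL offload, assuming a combined certificate-plus-key bundle at /etc/stunnel/example.pem and HAProxy listening on port 80 of the same machine (the file names and paths are placeholders, not gospel):
# Write a minimal stunnel service: terminate HTTPS, hand plain HTTP to HAProxy
cat > /etc/stunnel/https.conf <<'EOF'
cert = /etc/stunnel/example.pem
[https]
accept  = 443
connect = 127.0.0.1:80
EOF
# On Debian, set ENABLED=1 in /etc/default/stunnel4 before starting the service
/etc/init.d/stunnel4 start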
The Configuration: HAProxy 1.4
Let's get our hands dirty. Assuming you are running Debian 6 (Squeeze) or CentOS 5.6, install the package first; a quick sketch follows (on CentOS the haproxy package typically comes from the EPEL repository).
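# Debian / Ubuntu
apt-get update && apt-get install haproxy
# CentOS (with EPEL enabled)
yum install haproxy
With the package in place, here is a battle-tested configuration for /etc/haproxy/haproxy.cfg that balances HTTP traffic between two backend servers.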
global
    log 127.0.0.1 local0
    maxconn 4096
    user haproxy
    group haproxy
    daemon
defaults
    log global
    mode http
    option httplog
    option dontlognull
    retries 3
    option redispatch
    maxconn 2000
    contimeout 5000
    clitimeout 50000
    srvtimeout 50000
frontend http_front
    bind *:80
    default_backend web_cluster
backend web_cluster
    mode http
    balance roundrobin
    option forwardfor
    option httpchk HEAD / HTTP/1.1\r\nHost:localhost
    cookie SERVERID insert indirect nocache
    server web01 10.0.0.2:80 cookie A check
    server web02 10.0.0.3:80 cookie B check
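Before reading further, make sure the file parses and the daemon actually comes up. The -c flag only checks syntax; on Debian you may also need ENABLED=1 in /etc/default/haproxy before the init script will start anything.
haproxy -f /etc/haproxy/haproxy.cfg -c
/etc/init.d/haproxy start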
Breaking Down the Config
The balance roundrobin directive is crucial here. It hands requests to the servers in turn. If you run a stateful application (a Magento cart, PHP sessions), notice the cookie SERVERID insert line. It injects a cookie so each user sticks to the same backend for the life of their session. Without it, users get logged out whenever a refresh happens to land on a node that doesn't hold their session data.
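You can watch the stickiness from your desk, assuming each backend serves a small file identifying itself (say, a hostname.txt dropped into each web root) and that balancer.example.com points at the HAProxy box; both names are placeholders:
# No cookie jar: round-robin alternates between web01 and web02
for i in 1 2 3 4; do curl -s http://balancer.example.com/hostname.txt; done
# Save the SERVERID cookie, then replay it: every hit should land on the same node
curl -s -c /tmp/lb.cookies -o /dev/null http://balancer.example.com/
for i in 1 2 3 4; do curl -s -b /tmp/lb.cookies http://balancer.example.com/hostname.txt; done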
Pro Tip: Always set option httpchk. This forces HAProxy to actually request a page (or a dedicated status file) to verify the server is alive, rather than just checking whether the TCP port is open. A stuck Apache process can still listen on port 80 while serving nothing but 500 errors.
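To see the layer-7 check earn its keep, break one backend on purpose and watch HAProxy pull it out of rotation. The log path below is an assumption: it presumes you pointed the local0 facility at a file in your rsyslog configuration.
# On web01: make the check fail (stop Apache, or make / return a 500)
/etc/init.d/apache2 stop    # 'httpd' on CentOS
# On the balancer: after a few failed checks, web01 should be marked DOWN in the log
tail -f /var/log/haproxy.log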
Latency and Local Compliance
Why host this in Norway? Latency and law. If your customers are in Oslo or Bergen, routing traffic through a budget provider in Texas or even Germany adds unnecessary milliseconds. In high-frequency trading or fast-paced e-commerce, 50ms makes a difference in conversion rates. You want your servers on a network that peers directly at NIX (the Norwegian Internet Exchange).
Furthermore, we have the Personal Data Act (Personopplysningsloven) to worry about. Datatilsynet, the Norwegian Data Protection Authority, is becoming increasingly strict about where citizen data lives. Hosting on US-owned infrastructure puts you at the mercy of the Patriot Act. Keeping your data on Norwegian soil, on a platform like CoolVDS, simplifies your compliance strategy significantly.
The Infrastructure Beneath
Software load balancers are CPU efficient, but they demand stability. Many 'cheap' VPS providers oversell their nodes using OpenVZ, meaning your 'guaranteed' RAM isn't actually there when your neighbor decides to compile a kernel. For a load balancer, this jitter is unacceptable.
This is why we architect CoolVDS on strict virtualization tech like Xen. We also run high-performance SSD storage arrays in RAID-10. Spinning SAS disks are still the industry default, but solid-state drives drastically cut I/O wait during log writes and swap activity. When you are pushing 2,000 requests per second, you cannot afford to wait for a drive head to seek.
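If you want to know whether the VPS you already rent suffers from noisy neighbours, watch the steal and I/O wait columns under normal load; sustained double-digit values are a bad sign. This is a rough spot check, not a benchmark.
vmstat 1 10    # 'st' = CPU stolen by the hypervisor, 'wa' = time stuck waiting on disk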
Final Thoughts
Complexity is the enemy of uptime, but redundancy is its best friend. Start small. Deploy two web nodes and one HAProxy load balancer. Test the failover by physically unplugging the network cable (or running ifdown eth0) on one node. If your site stays up, you have done your job.
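While you pull the plug, keep a loop like this running from a third machine (the hostname is a placeholder); a steady stream of 200s with at most a brief blip means the failover works:
while true; do
    curl -s -o /dev/null -w "%{http_code} " http://balancer.example.com/
    sleep 1
done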
Ready to build a cluster that actually survives traffic spikes? Deploy a high-performance instance on CoolVDS today and get root access in under a minute.