The Apache Death Spiral
It starts with a slow page load. Then the connection timeouts begin. Finally, you check top and see load averages climbing past 20.0. Your single Apache server, likely running the memory-hungry prefork MPM to support mod_php, has hit MaxClients. Every new visitor is stuck in a queue, eating up RAM until the OOM killer steps in to murder your MySQL process.
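You can see this death spiral coming with simple arithmetic. A back-of-the-envelope sketch, using illustrative numbers (measure your own child size with something like `ps -ylC apache2 --sort=rss` before trusting any of this):

```shell
# Rough MaxClients sizing for prefork + mod_php.
# Both numbers below are assumptions for illustration, not measurements.
APACHE_RAM_MB=1024   # RAM left over for Apache after MySQL and the OS
CHILD_RSS_MB=25      # typical resident size of one mod_php prefork child
echo "Safe MaxClients is roughly $(( APACHE_RAM_MB / CHILD_RSS_MB ))"
```

If your MaxClients is set higher than that quotient, you are betting your database's memory on never getting a traffic spike.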
If you are running a serious setup in 2009, relying on a single box is negligence. You don't need expensive F5 BigIP hardware. You need HAProxy.
Why Nginx Isn't Enough
Many of you are switching to Nginx for static files. That's smart. But for pure Layer 4 TCP proxying and Layer 7 HTTP load balancing, HAProxy (High Availability Proxy) is the surgeon's scalpel. It strips connection overhead before requests ever hit your heavy application servers, and it handles tens of thousands of concurrent connections without eating your CPU.
But here is the catch: HAProxy is only as good as the network stack it runs on.
Configuration: The Round Robin Defense
Let's assume you have two backend web servers (10.0.0.2 and 10.0.0.3). Here is a battle-tested configuration for HAProxy 1.3. This goes in /etc/haproxy/haproxy.cfg on your CoolVDS load balancer node.
global
    log 127.0.0.1 local0
    maxconn 4096
    user haproxy
    group haproxy
    daemon

defaults
    log global
    mode http
    option httplog
    option dontlognull
    retries 3
    option redispatch
    maxconn 2000
    contimeout 5000
    clitimeout 50000
    srvtimeout 50000

listen webfarm 0.0.0.0:80
    mode http
    stats enable
    stats auth admin:password
    balance roundrobin
    option httpclose
    option forwardfor
    server web1 10.0.0.2:80 check inter 2000 rise 2 fall 5
    server web2 10.0.0.3:80 check inter 2000 rise 2 fall 5
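Before restarting, always ask HAProxy to parse-check the file; a typo in a single server line can take the whole farm offline. (The init script path below assumes a Debian/Ubuntu layout; adjust for your distribution.)

```shell
haproxy -c -f /etc/haproxy/haproxy.cfg   # syntax check only; exits non-zero on error
/etc/init.d/haproxy restart              # then restart to pick up the new config
```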
Pro Tip: The option httpclose is vital here. It forces HAProxy to close the connection to the client after the response, preventing idle keep-alive connections from saturating your backend Apache slots.
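One side effect of proxying: your backend Apache logs will now show the load balancer's IP as the client for every request. Since option forwardfor injects the real client address into an X-Forwarded-For header, you can log that instead. A sketch for the backend httpd.conf (the format name "proxied" is arbitrary):

```apache
LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b" proxied
CustomLog /var/log/apache2/access.log proxied
```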
The "Noisy Neighbor" Problem in Virtualization
Load balancing is network-interrupt heavy. If you try to run this on a budget container (like OpenVZ), you are sharing the kernel's network stack with every other customer on that physical host. If their site gets DDoS'd, your packet processing latency spikes. You can't tune /proc/sys/net/ipv4/tcp_tw_reuse because you don't own the kernel.
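On a dedicated kernel, you can actually apply those tunings. A sketch of /etc/sysctl.conf entries commonly used on busy load balancers — treat the values as starting points, not gospel, and load them with sysctl -p:

```
# Reuse sockets stuck in TIME_WAIT -- a proxy churns through thousands
# of short-lived connections per second.
net.ipv4.tcp_tw_reuse = 1
# Widen the local port range so outbound connections to backends
# do not exhaust source ports.
net.ipv4.ip_local_port_range = 1024 65535
# Deeper accept queue to absorb connection bursts.
net.core.somaxconn = 4096
```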
This is why we built CoolVDS on Xen hypervisors. When you provision a CoolVDS instance, you get a dedicated kernel and reserved RAM. Your TCP stack is yours alone. We don't oversell resources. If you allocate 512MB RAM, it is physically locked to your VM.
Latency: Oslo vs. The World
Physics hasn't changed. Distance equals latency. If your target market is Norway, hosting in Texas or even Germany adds 30-150ms of round-trip time (RTT).
For a dynamic PHP application doing multiple database calls, that latency compounds. Hosting at our facility connected to NIX (Norwegian Internet Exchange) ensures your packets stay local. We see pings as low as 2-4ms within the Oslo area. This makes your application feel instantaneous, regardless of how heavy your backend code is.
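The compounding is easy to quantify. A sketch with illustrative numbers — five sequential round trips per page view (DB queries, redirects, asset fetches) at two hypothetical RTTs:

```shell
# Numbers are illustrative, not benchmarks from our network.
CALLS=5
for RTT_MS in 120 3; do
    echo "${RTT_MS} ms RTT x ${CALLS} calls = $(( RTT_MS * CALLS )) ms added"
done
```

Over half a second of dead air at transatlantic latency, versus 15 ms locally. No amount of PHP optimization buys that back.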
Data Sovereignty (Personopplysningsloven)
Legal compliance is becoming a headache for CTOs. Under the Norwegian Personal Data Act (Personopplysningsloven 2000) and EU Directive 95/46/EC, you are responsible for where your user data lives. Storing sensitive customer data on US servers (subject to the US Patriot Act) is a risk many Norwegian companies can no longer take.
Keep your data on Norwegian soil. It simplifies your legal standing and keeps the Datatilsynet (Data Inspectorate) happy.
Next Steps
Stop waiting for the server crash that wakes you up at 3 AM. Split your database and web tiers onto separate servers, and put an HAProxy load balancer in front.
Need a testbed? Deploy a Xen-based CoolVDS instance with pure RAID-10 storage today. It takes less than 2 minutes to get root access.