Escaping the AWS Trap: Building a Bulletproof Hybrid Infrastructure in Norway
Let’s be honest for a second. The "Cloud" marketing machine is out of control. If I hear one more CIO talk about how putting everything on Amazon EC2 will magically solve their uptime problems, I might just run `rm -rf /` on their production server. We all remember the Easter 2011 outage. We saw the EBS failures in Northern Virginia. The cloud isn't magic; it's just someone else's computer, and usually it's a computer oversubscribed by 300%.
For those of us managing critical infrastructure in Norway—whether it's for banking, oil services, or media—latency and sovereignty matter. You can't afford a 140ms round-trip to US-East-1 when your users are sitting in Oslo, waiting for a database query to return. And you certainly can't ignore the Personopplysningsloven (Personal Data Act) by blindly dumping citizen data onto servers where the Patriot Act applies.
The solution isn't to abandon the cloud, but to stop treating it like a religion. The solution is a Hybrid Strategy: solid, bare-metal-performance Virtual Dedicated Servers (VDS) for your core I/O heavy workloads, and public cloud for burstable scalability.
The Architecture: Iron + Elasticity
In a recent project for a Norwegian e-commerce platform, we faced a classic dilemma: predictable high traffic during the day, massive spikes during sales, and a strict requirement to keep customer data within national borders. The "All-Cloud" approach was too slow (disk I/O on standard cloud instances is abysmal unless you pay a premium) and legally gray.
We built a hybrid topology:
- Core (The Brain): Two high-performance CoolVDS KVM instances located in Oslo. These host the Master Database (MySQL) and the primary application logic. Why? Because raw disk I/O on a KVM slice backed by RAID-10 SSDs smokes virtualized network storage every day of the week.
- Scale (The Muscle): A fleet of small cloud instances (using an API-driven provider) that spin up only when load averages hit 4.0.
- The Gatekeeper: HAProxy sitting in front of everything.
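The "load average hits 4.0" trigger can be as simple as a cron job on the core node. Here is a minimal sketch; `spin_up_cloud_node` is a placeholder for whatever API client your cloud provider ships, not a real command:

```shell
#!/bin/sh
# Read the 1-minute load average from the kernel
LOAD=$(cut -d' ' -f1 /proc/loadavg)
THRESHOLD="4.0"

# Shell [ ] can't compare floats, so let awk do it (prints 1 when load > threshold)
if [ "$(awk -v l="$LOAD" -v t="$THRESHOLD" 'BEGIN { print (l > t) }')" -eq 1 ]; then
    echo "Load $LOAD exceeds $THRESHOLD - booting an overflow node"
    # Placeholder: call your provider's API here, e.g.:
    # /usr/local/bin/spin_up_cloud_node
else
    echo "Load $LOAD is within limits"
fi
```

Run it every minute from cron; add hysteresis (e.g. only tear nodes down after load stays below 2.0 for ten minutes) before trusting it in production.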
1. The Load Balancer: HAProxy is King
Forget complex hardware load balancers. HAProxy 1.4 is robust, free, and handles thousands of concurrent connections on a single core. We use it to route traffic primarily to our high-speed CoolVDS instances and bleed over to the cloud only when necessary.
Here is the battle-tested configuration we used to prioritize the local hardware:
```
global
    log 127.0.0.1 local0
    maxconn 4096
    user haproxy
    group haproxy
    daemon

defaults
    log global
    mode http
    option httplog
    option dontlognull
    retries 3
    option redispatch
    timeout connect 5000
    timeout client 50000
    timeout server 50000

frontend http_front
    bind *:80
    acl is_static path_end .jpg .gif .png .css .js
    use_backend static_cluster if is_static
    default_backend app_cluster

backend static_cluster
    mode http
    balance roundrobin
    # e.g. an nginx instance on the core node serving the static files
    server static-oslo-01 10.0.0.5:8080 check

backend app_cluster
    mode http
    balance roundrobin
    option httpchk HEAD /health_check.php HTTP/1.0
    # Primary nodes: CoolVDS Oslo (weight 100 ensures they take most traffic)
    server core-oslo-01 10.0.0.5:80 weight 100 check inter 2000 rise 2 fall 3
    server core-oslo-02 10.0.0.6:80 weight 100 check inter 2000 rise 2 fall 3
    # Overflow cloud node (weight 10 keeps its share of the rotation small)
    server cloud-backup-01 192.168.1.10:80 weight 10 check
```
By using the `weight` parameter, we keep the cloud node's share of the rotation small: with weights of 100 against 10, over 90% of requests hit the metal, where we have guaranteed I/O performance. The cloud nodes sit nearly idle (and cheap) until the primary nodes are saturated or fail their health checks. Note that HAProxy's separate `backup` flag is stricter: a server marked `backup` receives no traffic at all until every non-backup server is down, which is not what you want for overflow capacity.
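One operational habit worth adopting: always syntax-check the config before reloading, because a typo in haproxy.cfg can take your frontend down with it. Assuming the Debian default path and init script:

```shell
# -c checks the config without touching the running process;
# reload only happens if the check passes
haproxy -c -f /etc/haproxy/haproxy.cfg \
    && /etc/init.d/haproxy reload \
    || echo "Config invalid - reload aborted"
```

The `&& ... || ...` chain means a broken config prints the warning instead of bouncing the daemon.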
2. Database Performance: The I/O Bottleneck
Public cloud providers often throttle IOPS. If you are running a Magento or heavy WordPress site, your database is your bottleneck. This is where the "Noisy Neighbor" effect kills you. If the VM next to you decides to compile a kernel, your checkout page times out.
We solve this by keeping the MySQL Master on a dedicated-resource VDS. We use KVM (Kernel-based Virtual Machine) technology, which CoolVDS standardizes on. Unlike OpenVZ, KVM provides true hardware virtualization. If I am allocated 4GB of RAM, it is my RAM. It cannot be oversold.
Pro Tip: On your MySQL server, ensure your `innodb_buffer_pool_size` is set to 70-80% of your available RAM. If you are running on SSDs (which you should be in 2013), change the flush method to avoid double buffering.
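To put a number on that 70-80% rule, you can derive the value straight from `/proc/meminfo` rather than guessing. A quick sketch:

```shell
# Total RAM in kB, as reported by the kernel
TOTAL_KB=$(awk '/^MemTotal:/ { print $2 }' /proc/meminfo)

# 75% of RAM, rounded down to whole megabytes
POOL_MB=$(( TOTAL_KB * 75 / 100 / 1024 ))

echo "innodb_buffer_pool_size = ${POOL_MB}M"
```

On a 4 GB instance this lands around 3 GB. On a box that also runs the application, budget less: the app and the OS page cache need headroom too.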
Add this to your /etc/mysql/my.cnf:
```ini
[mysqld]
# ~75% of RAM for the buffer pool (here: a 4 GB instance)
innodb_buffer_pool_size = 3G

# Optimization for SSD storage: bypass the OS page cache
innodb_flush_method = O_DIRECT
innodb_io_capacity = 2000
innodb_read_io_threads = 8
innodb_write_io_threads = 8

# Replication
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
binlog_format = mixed
```
We then replicate data asynchronously to a slave node in a different datacenter (e.g., Amsterdam or London) for disaster recovery. But the writes always happen locally in Oslo. This keeps the Norwegian Data Inspectorate (Datatilsynet) happy because the primary data residence is clearly defined.
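Asynchronous replication only protects you if the slave actually keeps up, so watch `Seconds_Behind_Master` on the DR node. A cron-friendly sketch (credentials assumed to live in `~/.my.cnf`; the alert address is a placeholder):

```shell
# Pull the replication lag out of SHOW SLAVE STATUS
LAG=$(mysql -e 'SHOW SLAVE STATUS\G' | awk '/Seconds_Behind_Master/ { print $2 }')

# NULL means the SQL thread has stopped - treat that as an alert, too
if [ "$LAG" = "NULL" ] || [ "${LAG:-0}" -gt 10 ]; then
    echo "Replication problem on slave: lag=$LAG" \
        | mail -s "MySQL slave alert" ops@example.com
fi
```

Ten seconds is an arbitrary threshold; tune it to how much data you can afford to lose in a failover.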
3. Keeping Files in Sync: Lsyncd
The biggest pain in hybrid setups is shared storage. NFS is a single point of failure and slow over WAN. GlusterFS is great but complex to maintain for smaller teams. In 2013, the pragmatic choice for web content is lsyncd (Live Syncing Daemon). It watches your local directory for changes and triggers rsync instantly.
Install it on your master node:
```shell
apt-get install lsyncd
```
Configure /etc/lsyncd/lsyncd.conf.lua to mirror your web root to the backup node:
```lua
settings {
    logfile    = "/var/log/lsyncd/lsyncd.log",
    statusFile = "/var/log/lsyncd/lsyncd.status"
}

sync {
    default.rsync,
    source = "/var/www/html/",
    target = "root@192.168.1.10:/var/www/html/",
    rsync  = {
        compress = true,
        archive  = true,
        verbose  = true,
        rsh      = "/usr/bin/ssh -p 22 -o StrictHostKeyChecking=no"
    }
}
```
This provides near real-time replication. If your Oslo server goes dark (power failure, network cut), your load balancer fails over to the cloud node, which already has the latest static files and a replicated database ready to be promoted.
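Trust, but verify: an rsync dry run tells you whether the mirror is actually current. The host and paths below match the lsyncd config; a non-empty item list usually means lsyncd has fallen behind:

```shell
# -a archive, -n dry run, -i itemize: list what WOULD be transferred
DRIFT=$(rsync -ani /var/www/html/ root@192.168.1.10:/var/www/html/ | wc -l)

if [ "$DRIFT" -gt 0 ]; then
    echo "$DRIFT paths out of sync - check lsyncd"
else
    echo "Mirror is current"
fi
```

Drop this into cron alongside the replication-lag check and you have end-to-end confidence in the failover target.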
The Verdict: Latency Wins
We ran a `ping` test from a standard Telenor fiber connection in Oslo.
- AWS (Dublin): 35-45 ms
- CoolVDS (Oslo): 2-5 ms
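The measurement is nothing exotic, so run it yourself against your own endpoints (the hostname below is a placeholder):

```shell
# 20 probes, then extract the average from ping's rtt summary line
ping -c 20 your-server.example.com \
    | awk -F'/' '/^rtt/ { print "avg:", $5, "ms" }'
```

Run it from the network your users actually sit on, not from another datacenter, or the number is meaningless.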
In the world of high-frequency trading or just impatient e-commerce shoppers, 40ms is an eternity. By anchoring your infrastructure on high-speed, local KVM instances and using the cloud only for what it's good at (overflow traffic), you lower your TCO and increase your stability.
Don't build your house on rented land alone. Own your core. If you need a rig that gives you direct access to SSD RAID-10 without the virtualization tax, check the benchmarks.
Ready to stop guessing about your I/O performance? Deploy a KVM instance on CoolVDS in Oslo today and see what single-digit latency feels like.