The Myth of the Single Cloud Provider
It is becoming increasingly dangerous to put all your eggs in one basket. Whether you are relying solely on Amazon EC2 or a legacy dedicated server provider, you are exposing your infrastructure to a single point of failure—not just technical, but legal and financial. As a CTO, I look at the Total Cost of Ownership (TCO) and risk mitigation. In 2013, with the US Patriot Act casting long shadows over data sovereignty, moving sensitive Norwegian customer data entirely to US-owned infrastructure is a compliance nightmare waiting to happen.
We need a middle ground. A strategy that leverages the elasticity of massive public clouds for burst traffic while keeping critical data and core processing on high-performance, legally compliant hardware right here in Norway. This is the Hybrid Cloud approach.
The Latency Penalty: Oslo vs. Virginia
Physics is stubborn. Round-trip time (RTT) from Oslo to AWS US-East averages 90-110ms. For a static site, maybe that is acceptable. For a high-transaction Magento store or a financial application? It is sluggish. By anchoring your primary application logic on a local provider like CoolVDS, you drop that latency to under 10ms via the Norwegian Internet Exchange (NIX). The difference in database lock times and page load speed is palpable.
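Do not take my word for it; measure from your own office connection. A quick sketch using `ping` and `curl` (the hostnames are placeholders for a US-East endpoint and an Oslo-hosted node):
#!/bin/bash
# Compare round-trip time and HTTP time-to-first-byte against two endpoints.
for host in app-us-east.example.com app-oslo.example.no; do
    echo "== ${host} =="
    # ICMP round trip: print only the min/avg/max summary line
    ping -c 10 "${host}" | tail -n 1
    # Time to first byte for an actual HTTP request
    curl -o /dev/null -s -w 'connect: %{time_connect}s  ttfb: %{time_starttransfer}s\n' "http://${host}/"
done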
Architecture: The "Anchor and Burst" Model
Here is the setup we deployed for a recent media client facing traffic spikes during the 2013 election coverage:
- Primary (The Anchor): Two CoolVDS KVM instances (CentOS 6.3) located in Oslo. These handle writes and core processing.
- Secondary (The Burst): Commodity cloud instances used only when load exceeds capacity or for static asset offloading.
- Traffic Director: HAProxy acting as the gatekeeper.
1. The Traffic Director (HAProxy)
We use HAProxy 1.4 to route traffic. The configuration prioritizes the local hardware because it performs better (dedicated resources vs. noisy cloud neighbors). The `backup` directive keeps the remote cloud nodes out of the rotation until both local nodes fail their health checks; for a planned burst (say, election night) we drop the `backup` flag on the remote servers and let the weights spread the load.
global
    log 127.0.0.1 local0
    maxconn 4096
    user haproxy
    group haproxy
    daemon
defaults
    mode http
    log global
    option httplog
    option dontlognull
    retries 3
    timeout connect 5000
    timeout client 50000
    timeout server 50000
frontend http-in
    bind *:80
    default_backend app_pool
backend app_pool
    balance roundrobin
    option httpchk GET /health_check.php
    # CoolVDS Local Instances (Primary - High Weight)
    server local_node_1 10.10.0.5:80 check weight 100
    server local_node_2 10.10.0.6:80 check weight 100
    # Remote Cloud Instances (backup - receives traffic only if both local nodes fail their health checks)
    server remote_cloud_1 203.0.113.10:80 check backup
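One habit worth keeping: validate the file before you reload, so a typo never takes down the gatekeeper. Assuming the stock init script from the CentOS 6 EPEL package and the default config path, that is a one-liner:
# Syntax-check the new config; only reload the running proxy if the check passes
haproxy -c -f /etc/haproxy/haproxy.cfg && service haproxy reload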
2. Data Sovereignty and Database Replication
Datatilsynet (the Norwegian Data Protection Authority) is very clear about where personal data lives. Under the Personal Data Act, you are responsible for your users' privacy. To satisfy this, we keep the Master Database on CoolVDS in Norway. This ensures writes happen legally within the jurisdiction. We then replicate to the cloud for read scaling or disaster recovery, stripping sensitive fields before replication where necessary.
We use standard MySQL 5.5 Master-Slave replication. Do not rely on simplistic cloud database services if you need granular control over the binary log formats.
Master Configuration (my.cnf on CoolVDS):
[mysqld]
server-id = 1
log-bin = /var/log/mysql/mysql-bin.log
binlog-do-db = production_db
# Ensure durability on SSDs
innodb_flush_log_at_trx_commit = 1
sync_binlog = 1
innodb_buffer_pool_size = 4G
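Before the slave can pull anything, the master needs a replication account and a consistent snapshot to seed from. A minimal sketch; the `repl` user, password and dump path are placeholders, and the `127.0.0.1` grant works because the slave will connect through the SSH tunnel described in the Pro Tip below:
# Create the replication account (the slave appears as 127.0.0.1 via the SSH tunnel)
mysql -u root -p -e "GRANT REPLICATION SLAVE ON *.* TO 'repl'@'127.0.0.1' IDENTIFIED BY 'change-me'; FLUSH PRIVILEGES;"
# Consistent InnoDB snapshot; --master-data=2 records the binlog file and position in the dump header
mysqldump -u root -p --single-transaction --master-data=2 --databases production_db > /root/production_db.sql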
Slave Configuration (Remote Cloud):
[mysqld]
server-id = 2
relay-log = /var/log/mysql/mysql-relay-bin.log
read_only = 1
Pro Tip: When setting up replication across the public internet, always wrap the connection in an SSH tunnel or use SSL replication. Never leave port 3306 open to the world. We use `autossh` to keep a persistent tunnel between the CoolVDS node and the external cloud, as sketched below.
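On the cloud slave, that looks roughly like this. The `tunnel` user, the hostname, local port 3307 and the binlog coordinates are illustrative; pull the real coordinates from the header of the dump taken with `--master-data=2`:
# Persistent tunnel: local port 3307 on the slave forwards to MySQL (3306) on the master
autossh -M 0 -f -N \
  -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
  -L 3307:127.0.0.1:3306 tunnel@anchor.example.no
# Seed the slave from the dump, then point it at the tunnel endpoint and start replicating
mysql -u root -p < /root/production_db.sql
mysql -u root -p -e "CHANGE MASTER TO
  MASTER_HOST='127.0.0.1', MASTER_PORT=3307,
  MASTER_USER='repl', MASTER_PASSWORD='change-me',
  MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=107;
  START SLAVE;"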
3. The Storage Dilemma: SSD vs. Spindle
In 2013, many providers are still pushing SAS 15k RPM drives as "high performance." They are not. If your database is I/O bound, mechanical drives are the bottleneck. We benchmarked a standard 15k SAS setup against the SSD storage arrays used by CoolVDS. The difference in IOPS (Input/Output Operations Per Second) is approximately 100x.
| Storage Type | Random Read IOPS | Write Latency |
|---|---|---|
| 7.2k SATA | ~80 | High (>10ms) |
| 15k SAS | ~180 | Moderate (~5ms) |
| CoolVDS SSD | ~20,000+ | Negligible (<0.1ms) |
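You can reproduce rough numbers yourself with `fio` (available from EPEL on CentOS 6). A minimal 4k random-read job looks like this; the target directory is a placeholder, and the results are indicative rather than absolute:
# Direct I/O bypasses the page cache so you measure the disk, not RAM
fio --name=randread --directory=/mnt/benchtest \
    --rw=randread --bs=4k --direct=1 --ioengine=libaio \
    --size=1G --runtime=60 --time_based --group_reporting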
Automating the Failover
Manual intervention is slow. We use a simple shell script combined with cron and `curl` to check the status of our local nodes. If the local anchor goes dark, we can swing the A-records through our DNS provider's API, though HAProxy's health checks already fail over far faster than any DNS TTL can propagate.
Here is a snippet of a check script we deploy via Puppet:
#!/bin/bash
# Probe the local anchor node; --max-time keeps curl from hanging if the node is wedged
HTTP_STATUS=$(curl -o /dev/null --silent --head --max-time 5 --write-out '%{http_code}' http://10.10.0.5/health_check.php)
if [ "$HTTP_STATUS" != "200" ]; then
echo "Local Node Down! Triggering alert to SysAdmin..."
# In a real scenario, this would trigger an SMS gateway API
logger -p local0.crit "CRITICAL: Local anchor node is unresponsive."
fi
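We drop the script into cron so it runs every minute; the script name and paths below are placeholders for whatever your Puppet manifest deploys. Remember that HAProxy's own health checks handle the actual traffic failover, so this is purely for alerting:
# /etc/cron.d/check_anchor: probe the local anchor every minute as root
* * * * * root /usr/local/bin/check_anchor.sh >> /var/log/check_anchor.log 2>&1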
Why KVM Matters for Hybrid Setups
Many budget hosts use OpenVZ. With OpenVZ, you share the kernel. If a neighbor gets DDoS'd, your kernel tables fill up, and your site dies. For a hybrid architecture to be stable, the local node must act like a dedicated server. CoolVDS uses KVM (Kernel-based Virtual Machine), which provides true hardware virtualization. You can load your own kernel modules, set your own `sysctl` parameters, and guarantee that your allocated RAM is actually yours.
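To make this concrete, here is the kind of tuning you simply cannot do from inside most OpenVZ containers. The values below are generic starting points, not recommendations for your workload:
# Run as root on the KVM node: append the overrides and apply them.
cat >> /etc/sysctl.conf <<'EOF'
net.ipv4.tcp_max_syn_backlog = 4096
net.core.somaxconn = 1024
# nf_conntrack keys need the conntrack module loaded (it usually is once iptables rules are active)
net.netfilter.nf_conntrack_max = 262144
fs.file-max = 262144
EOF
sysctl -p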
Conclusion
You don't have to choose between the flexibility of the cloud and the performance of local metal. By placing your database and primary application logic on CoolVDS in Norway, you satisfy Datatilsynet requirements and get single-digit-millisecond latency for local users. Use the big US clouds for what they are good at: storing petabytes of cat pictures on S3 and absorbing massive traffic spikes.
Ready to anchor your infrastructure? Stop fighting I/O wait times. Deploy a KVM SSD instance on CoolVDS today and see what legitimate hardware speed feels like.