The Multi-Cloud Myth: A CTO’s Guide to Hybrid Infrastructure in 2019
It is January 2019. If you attended any tech conference in Oslo or Stockholm last year, you likely heard the phrase "Multi-Cloud" repeated until it lost all meaning. The promise is seductive: infinite redundancy, zero vendor lock-in, and the ability to arbitrage costs between AWS, Azure, and Google Cloud.
Here is the uncomfortable reality: For most Nordic enterprises, a pure multi-hyperscaler strategy is a financial trap. It multiplies your complexity, creates data gravity issues, and destroys your margins with egress fees.
As a CTO focusing on the Norwegian market, I value two things: Total Cost of Ownership (TCO) and Data Sovereignty. The introduction of GDPR last year changed the game, and the US CLOUD Act, passed in 2018, has further complicated the use of American providers for sensitive Norwegian data. The pragmatic move in 2019 isn't "All-in on AWS"—it's a hybrid Core-and-Edge architecture.
The "Core & Burst" Architecture
The most cost-efficient infrastructure model available right now involves placing your steady-state workloads on predictable, high-performance bare-metal or KVM-based VPS, and using public clouds strictly for bursting or specific proprietary APIs (like BigQuery or Lambda).
Pro Tip: Do not underestimate the latency penalty. Round-trip time (RTT) from Oslo to AWS Frankfurt is typically 25-35ms. From a user in Oslo to a CoolVDS instance in Oslo? It is often under 3ms. For database transactions, that physics adds up to visible user friction.
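Measure it rather than trusting conference slides. A quick sketch using standard tooling; the hostnames below are placeholders for your own test endpoints:

# RTT from a probe in Oslo to each candidate location (placeholder hostnames)
ping -c 20 core.osl.example.net            # local CoolVDS instance
ping -c 20 gw.eu-central-1.example.net     # AWS Frankfurt test instance
# mtr shows hop by hop where the latency is added
mtr --report --report-cycles 20 gw.eu-central-1.example.net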
Step 1: The Base Layer (Data Sovereignty)
Your database and core application logic should reside where your legal liability is lowest and your I/O performance is highest per krone. In Norway, that means a local datacenter. We use CoolVDS instances for this layer because they offer pure KVM virtualization. Unlike OpenVZ containers (which suffer from noisy neighbors), KVM gives us guaranteed resource isolation—critical for meeting SLA requirements.
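You can verify what you are actually running on in seconds; this assumes a distribution with systemd, such as Ubuntu 18.04:

# Reports "kvm" on a KVM guest; container platforms report lxc, openvz, etc.
systemd-detect-virt
# Cross-check: a full hypervisor shows up in the CPU information
lscpu | grep -i -E 'hypervisor|virtualization'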
For a typical Magento or heavy-duty SaaS stack, we need high IOPS without the "Provisioned IOPS" tax Amazon charges. Here is a standard fio benchmark we run on a CoolVDS NVMe instance to verify raw random-write performance:
# --direct=1 bypasses the page cache so the numbers reflect the NVMe device, not RAM
fio --name=random-write --ioengine=libaio --rw=randwrite --bs=4k --numjobs=1 \
    --size=4g --iodepth=128 --runtime=60 --time_based --direct=1 --end_fsync=1
On standard cloud block storage, you might see 3,000 IOPS before you hit a paywall. On local NVMe storage, we routinely see numbers significantly higher, ensuring the database isn't the bottleneck.
Step 2: Connecting the Clouds (Terraform v0.11 Implementation)
To make this work, we treat the different providers as a single mesh. We use Terraform to provision resources. Since Terraform 0.12 is still in pre-release as of this writing, we will stick to the stable 0.11 syntax.
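To keep anyone on the team from accidentally running the codebase against the 0.12 pre-releases, pin the version explicitly. A minimal sketch, assuming AWS credentials come from the usual environment variables and Frankfurt (eu-central-1) as the region:

# versions.tf -- lock the tooling to the 0.11 series
terraform {
  required_version = "~> 0.11"
}

# Frankfurt is the closest hyperscaler region to Oslo
provider "aws" {
  region = "eu-central-1"
}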
We need a secure tunnel between our CoolVDS Core in Oslo and an AWS VPC in Frankfurt. While WireGuard is promising, it's not yet audit-ready for enterprise production. We will use StrongSwan (IPsec) for a rock-solid site-to-site VPN.
Terraform Config for the Gateway
resource "aws_instance" "vpn_gateway" {
ami = "ami-0ac05733838eabc06" # Ubuntu 18.04 LTS
instance_type = "t2.micro"
tags {
Name = "prod-vpn-gateway"
}
# 0.11 syntax requires variables to be interpolated
subnet_id = "${aws_subnet.main.id}"
provisioner "remote-exec" {
inline = [
"sudo apt-get update",
"sudo apt-get install -y strongswan"
]
}
}
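The gateway also needs the IPsec ports open before the tunnel will negotiate. A sketch of the matching security group in 0.11 syntax; the VPC reference and the CoolVDS public IP (185.x.x.x) are placeholders you would replace with your own:

resource "aws_security_group" "vpn" {
  name   = "prod-vpn-ipsec"
  vpc_id = "${aws_vpc.main.id}"

  # IKE from the CoolVDS gateway only
  ingress {
    from_port   = 500
    to_port     = 500
    protocol    = "udp"
    cidr_blocks = ["185.x.x.x/32"]
  }

  # NAT-T
  ingress {
    from_port   = 4500
    to_port     = 4500
    protocol    = "udp"
    cidr_blocks = ["185.x.x.x/32"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

Attach it to the instance via vpc_security_group_ids. If you run plain ESP rather than NAT-T, you also need an ingress rule for IP protocol 50.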
Configuring the Tunnel (ipsec.conf)
On your CoolVDS instance (The Core), the configuration ensures that traffic destined for the AWS subnet (e.g., 10.0.1.0/24) goes through the tunnel, while local traffic stays in Norway. This keeps your sensitive customer data on Norwegian soil, satisfying Datatilsynet requirements, while allowing the app to offload image processing to S3 or Lambda.
# /etc/ipsec.conf on CoolVDS Node
config setup
    charondebug="ike 1, knl 1, cfg 0"
    uniqueids=no

conn oslo-to-frankfurt
    type=tunnel
    auto=start
    keyexchange=ikev2
    authby=secret
    # CoolVDS (Local)
    left=%defaultroute
    leftid=185.x.x.x
    leftsubnet=192.168.10.0/24
    # AWS (Remote)
    right=52.x.x.x
    rightsubnet=10.0.1.0/24
    # Prefer SHA-256 and a 2048-bit DH group; SHA-1/modp1024 are considered weak in 2019
    ike=aes256-sha256-modp2048!
    esp=aes256-sha256!
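Because the connection uses authby=secret, both gateways need a matching pre-shared key in /etc/ipsec.secrets. The addresses mirror the leftid/right values above, and the key itself is a placeholder:

# /etc/ipsec.secrets on both gateways -- keep this file mode 600
185.x.x.x 52.x.x.x : PSK "replace-with-a-long-random-string"

Finally, each gateway has to be allowed to forward packets, and strongSwan will tell you whether the tunnel actually established:

sudo sysctl -w net.ipv4.ip_forward=1   # make it permanent in /etc/sysctl.conf
sudo ipsec restart
sudo ipsec statusall | grep oslo-to-frankfurt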
Step 3: Database Optimization for Hybrid Latency
If you are splitting your app (web servers in the cloud, database on CoolVDS), you must tune for the latency mentioned earlier. In practice, though, the best design is often Active-Passive, or using the cloud only for static assets and stateless compute.
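For the Active-Passive variant, a plain asynchronous MySQL replica in the cloud makes a decent warm standby or reporting node. A minimal sketch of the replica-side settings, assuming MySQL 5.7 and binary logging already enabled on the CoolVDS primary:

# /etc/mysql/conf.d/replica.cnf on the AWS-side standby
[mysqld]
server-id = 2         # must differ from the primary's server-id
relay-log = relay-bin
read_only = 1         # keep every application write on the CoolVDS primary

Point the replica at the primary with CHANGE MASTER TO, using the tunnel-side address so replication traffic never leaves the encrypted path.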
If you must run queries across the tunnel, TCP optimization is mandatory. In your sysctl.conf, increase the window size to allow for higher throughput over the slightly higher latency link:
# /etc/sysctl.conf optimization for WAN links
# Raise the maximum socket buffer sizes to 16 MB
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
# TCP buffer sizes: min / default / max (bytes)
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
# Stay on the kernel default congestion control
net.ipv4.tcp_congestion_control = cubic
Apply this with sysctl -p. Note that we are sticking to CUBIC congestion control; while BBR is gaining traction in recent Linux kernels, CUBIC remains the most predictable standard for mixed production workloads in 2019.
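Before and after the change, you can check what the kernel is actually using, and whether BBR is even available on your kernel:

# Active algorithm and the list the kernel currently offers
sysctl net.ipv4.tcp_congestion_control
sysctl net.ipv4.tcp_available_congestion_control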
The Cost Breakdown: Why "Cloud-Native" is Expensive
Let’s look at the numbers. A "General Purpose" instance on a major cloud provider with 4 vCPUs and 16GB RAM can cost upwards of $150/month once you factor in bandwidth and storage. A comparable setup on CoolVDS is a fraction of that cost, and crucially, bandwidth is often more generous.
| Feature | Hyperscaler (Frankfurt) | CoolVDS (Oslo) |
|---|---|---|
| Latency to Oslo User | ~25ms | ~2ms |
| Storage Type | EBS (Network Attached) | Local NVMe (Direct) |
| Data Sovereignty | Subject to US Cloud Act | Norwegian Jurisdiction |
| Bandwidth Cost | High Egress Fees | Included / Low Cost |
Conclusion: Own Your Core
The "all-in" cloud migration trend is cooling off as bills heat up. The strategy for 2019 is not about abandoning the cloud, but about commoditizing it. Use the hyperscalers for what they are good at—global CDN distribution and managed AI services—but keep your state, your database, and your heavy compute on infrastructure you control.
By hosting your core on CoolVDS in Norway, you simplify GDPR compliance, ensure the lowest possible latency for your local customers, and cut your infrastructure bill by 40-60%. That is capital you can reinvest in development rather than paying for idle silicon.
Ready to reclaim your infrastructure? Deploy a high-performance KVM instance on CoolVDS today and see the I/O difference for yourself.