The Multi-Cloud Lie: Why Hybrid Infrastructure Beats Pure Public Cloud for Nordic Ops in 2020
There is a dangerous myth circulating in DevOps Slack channels from Oslo to Trondheim: that "Multi-Cloud" means simply mirroring your stack across AWS and Azure. If you try this, you won't achieve redundancy; you will achieve bankruptcy. The overhead of juggling two IAM models and two sets of APIs, plus the dreaded data egress fees, will eat your budget alive before you serve a single request.
I learned this the hard way. In late 2018, I architected a "fully redundant" Kubernetes cluster spanning AWS eu-central-1 (Frankfurt) and a secondary provider. We thought we were clever until the first bill arrived. The inter-node traffic charges alone exceeded the cost of the compute instances. We weren't paying for performance; we were paying for data to travel across the continent.
In 2020, the smart money isn't on "All-in Public Cloud." It's on Hybrid VDS Architectures. Specifically for the Norwegian market, where latency to the NIX (Norwegian Internet Exchange) and strict GDPR data sovereignty are non-negotiable, you need a local anchor. Here is how to build a robust, cost-effective multi-cloud strategy using CoolVDS as your performance hub.
The Architecture: "Core and Burst"
The most pragmatic approach to multi-cloud right now is the "Core and Burst" model. You keep your stateful data (Databases, Redis, Object Storage) and steady-state compute on high-performance, fixed-cost local infrastructure (CoolVDS), and you only tap into the hyperscalers (AWS/GCP) for ephemeral, burstable workloads or specific proprietary APIs (like BigQuery or Lambda).
Why this works for Norway
- Latency: A packet from Oslo to AWS Frankfurt takes roughly 25-35ms round trip. A packet from Oslo to a CoolVDS datacenter in Oslo takes <3ms. For chatty database traffic, those milliseconds compound on every round trip and kill application performance (you can verify this yourself with the quick check after this list).
- Data Sovereignty: With the uncertainty surrounding the EU-US Privacy Shield and the aggressive reach of the US CLOUD Act, keeping your primary user database physically within Norwegian borders is the safest legal play for compliance with Datatilsynet.
- Cost Predictability: Public clouds charge for IOPS. On a heavy MySQL workload, this variable cost is terrifying. CoolVDS offers unmetered NVMe I/O.
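Don't take the latency bullet on faith; measure it from wherever your users actually sit. The hostnames below are placeholders: point the first ping at an EC2 instance you control in eu-central-1 (AWS tends to filter ICMP to its public endpoints) and the second at your own CoolVDS box.
# Compare round-trip times: Frankfurt vs. a local Oslo box.
ping -c 20 <your-ec2-host-in-eu-central-1> | tail -1
ping -c 20 185.xxx.xxx.xxx | tail -1    # your CoolVDS instance

# mtr gives a per-hop view if the numbers look suspicious.
mtr --report --report-cycles 20 185.xxx.xxx.xxx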
Step 1: Benchmarking the Core (The IOPS Reality Check)
Before you deploy, you must verify the metal. Many providers oversell "SSD" storage which turns out to be network-attached storage with noisy neighbors. We need raw, local NVMe.
Here is the exact fio command I use to audit new instances. If your "Premium Cloud" provider gives you less than 10k IOPS on this test, move on.
fio --name=random_write_test \
    --ioengine=libaio --iodepth=64 --rw=randwrite --bs=4k \
    --direct=1 --size=4G --numjobs=1 --runtime=60 \
    --group_reporting
Pro Tip: On a standard CoolVDS NVMe instance, I consistently clock over 40k IOPS with fio. On a comparable AWS gp2 volume, you get 3 IOPS per provisioned GB with a hard ceiling of 16,000 IOPS per volume, so 40k IOPS is simply out of reach on gp2. To get there you need a provisioned IOPS (io1) volume, and the IOPS charge alone runs well into four figures a month. Do the math.
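To put numbers on that, here is the back-of-the-envelope version (the 3 IOPS/GB ratio and the 16,000 IOPS per-volume cap are the gp2 limits as of early 2020):
# gp2 scales at 3 IOPS per provisioned GB, capped at 16,000 IOPS per volume.
echo "gp2 GB needed for 40k IOPS (ignoring the cap): $(( 40000 / 3 )) GB"
# => 13333 GB -- and the 16k cap means gp2 never gets there anyway,
# which is exactly what pushes you onto provisioned-IOPS (io1) pricing.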
Step 2: The Network Glue (Terraform + WireGuard/IPsec)
To make CoolVDS and AWS talk securely, we don't need expensive Direct Connect links. In 2020, modern processors are fast enough to handle encrypted tunnel traffic at line speed. While WireGuard is the rising star (keep an eye on the Linux 5.6 kernel merge), for production enterprise environments today, I still lean on StrongSwan (IPsec) for its battle-tested stability.
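(If you want to kick the tires on WireGuard on a throwaway pair of hosts in the meantime, a minimal sketch looks like this. It assumes the wireguard package from the Ubuntu PPA on 18.04; the 10.99.0.0/24 transfer net, key paths and <PEER_*> values are placeholders, and this is not part of the production setup below.)
# Generate a keypair for this node.
umask 077
wg genkey | tee /etc/wireguard/privatekey | wg pubkey > /etc/wireguard/publickey

# Bring up wg0 and point it at the other node.
ip link add dev wg0 type wireguard
ip addr add 10.99.0.1/24 dev wg0
wg set wg0 listen-port 51820 private-key /etc/wireguard/privatekey \
    peer <PEER_PUBLIC_KEY> allowed-ips 10.99.0.2/32 endpoint <PEER_IP>:51820
ip link set wg0 up

# Sanity check: look for a recent handshake and non-zero transfer counters.
wg show wg0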
Here is a Terraform snippet (v0.12 syntax) to provision the security group on the cloud side to allow the VPN traffic from your CoolVDS static IP:
resource "aws_security_group" "vpn_gateway" {
name = "allow_ipsec_coolvds"
description = "Allow IPsec traffic from CoolVDS Oslo"
vpc_id = var.vpc_id
ingress {
from_port = 500
to_port = 500
protocol = "udp"
cidr_blocks = ["185.xxx.xxx.xxx/32"] # Your CoolVDS IP
}
ingress {
from_port = 4500
to_port = 4500
protocol = "udp"
cidr_blocks = ["185.xxx.xxx.xxx/32"]
}
}
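Plan and apply as usual for 0.12; the vpc_id value here is a placeholder for your own VPC:
terraform init
terraform plan  -var 'vpc_id=vpc-xxxxxxxx'    # expect one security group with two ingress rules
terraform apply -var 'vpc_id=vpc-xxxxxxxx'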
On the CoolVDS side (Ubuntu 18.04 LTS), your /etc/ipsec.conf should look like this to ensure stable connections:
conn coolvds-to-aws
    authby=secret
    auto=start
    left=%defaultroute
    leftid=185.xxx.xxx.xxx
    leftsubnet=10.10.0.0/24      # Local private subnet
    right=52.xxx.xxx.xxx         # AWS VPN Gateway IP
    rightsubnet=172.31.0.0/16    # AWS VPC CIDR
    ike=aes256-sha256-modp2048
    esp=aes256-sha256-modp2048
    keyexchange=ikev2
    ikelifetime=28800s
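The pre-shared key itself lives in /etc/ipsec.secrets. A minimal sketch, reusing the placeholder endpoint addresses from ipsec.conf above (generate a real secret with something like openssl rand -base64 32):
# PSK shared by both tunnel endpoints; keep the file at mode 600.
cat >> /etc/ipsec.secrets <<'EOF'
185.xxx.xxx.xxx 52.xxx.xxx.xxx : PSK "replace-with-a-long-random-string"
EOF
chmod 600 /etc/ipsec.secrets

# Reload config and secrets, then verify the tunnel comes up.
ipsec restart
ipsec statusall | grep -A2 coolvds-to-aws    # look for ESTABLISHED and installed SAs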
Step 3: Intelligent Traffic Steering with Nginx
Don't just round-robin your traffic. That's lazy. Use Nginx to prioritize the local, low-latency CoolVDS instances and only spill over to the cloud when the primaries are down or saturated. This keeps the user experience snappy (low latency) and the bills low.
We combine the backup parameter with max_conns in the upstream block: a server marked backup only receives traffic when every primary is unavailable or has hit its connection limit. It's a simple yet powerful feature that is often overlooked.
http {
    upstream backend_cluster {
        # Primary: CoolVDS local instances (low latency, fixed cost).
        # max_conns caps simultaneous connections per server; tune to your app.
        server 10.10.0.5:8080 weight=5 max_conns=200;
        server 10.10.0.6:8080 weight=5 max_conns=200;

        # Secondary: AWS instances (higher latency, variable cost).
        # 'backup': only used when the primaries are down or have hit max_conns.
        server 172.31.10.5:8080 backup;
        server 172.31.10.6:8080 backup;

        # Reuse upstream connections instead of opening a fresh TCP
        # handshake across the VPN for every request.
        keepalive 32;
    }

    server {
        listen 80;
        server_name api.example.no;

        location / {
            proxy_pass http://backend_cluster;
            proxy_set_header X-Real-IP $remote_addr;

            # Required for upstream keepalive to actually work.
            proxy_http_version 1.1;
            proxy_set_header Connection "";
        }
    }
}
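After dropping this in, validate, reload, and watch the failover behaviour yourself. The /health path below is a hypothetical endpoint on the backends; substitute whatever your app actually exposes:
# Validate the config and reload without dropping connections.
nginx -t && systemctl reload nginx

# With the primaries healthy, responses stay in the low single-digit milliseconds.
# Stop the app on 10.10.0.5/6 (or firewall port 8080) and re-run the loop:
# requests should keep succeeding, just slower, via the AWS backups.
for i in $(seq 1 10); do
    curl -s -o /dev/null -w "%{http_code} %{time_total}s\n" http://api.example.no/health
done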
The Database Dilemma: Replication Topology
Data gravity is real. You cannot split a monolithic SQL database across regions without suffering. My recommendation for 2020: Use MySQL 8.0 Group Replication or a simple Master-Slave setup where the Master lives on CoolVDS.
Why the Master on CoolVDS? Writes are expensive and sensitive to latency. You want your write-master close to your primary market (Norway). Read-replicas can be distributed to the cloud for analytics or backup purposes.
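Wiring that up is standard MySQL replication. Here is a minimal sketch, assuming GTID-based replication is enabled on both nodes (gtid_mode=ON, enforce_gtid_consistency=ON), the replica has been seeded from a recent backup of the master, and the 10.10.0.10 master address is a placeholder on the same private subnet used earlier:
# On the CoolVDS master: a dedicated replication account. mysql_native_password
# avoids the extra key exchange that caching_sha2_password needs over plain TCP.
mysql -e "CREATE USER 'repl'@'172.31.%' IDENTIFIED WITH mysql_native_password BY 'change-me';
          GRANT REPLICATION SLAVE ON *.* TO 'repl'@'172.31.%';"

# On the AWS replica: point it at the master across the VPN and start replicating.
mysql -e "CHANGE MASTER TO MASTER_HOST='10.10.0.10', MASTER_USER='repl',
          MASTER_PASSWORD='change-me', MASTER_AUTO_POSITION=1;
          START SLAVE;"
mysql -e "SHOW SLAVE STATUS\G" | grep -E 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind'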
To optimize MySQL 8.0 for the high-speed NVMe drives provided by CoolVDS, you must tweak your InnoDB settings. Defaults are made for spinning rust.
[mysqld]
# Optimize for NVMe IOPS
innodb_io_capacity = 20000
innodb_io_capacity_max = 40000
innodb_flush_neighbors = 0
# Memory is cheap on CoolVDS, use it
innodb_buffer_pool_size = 8G
innodb_log_file_size = 1G
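After restarting mysqld, confirm the values are live and that the 8G buffer pool actually fits on your plan (the usual rule of thumb is 70-80% of RAM on a box dedicated to the database):
# Confirm the InnoDB settings took effect.
mysql -e "SHOW VARIABLES LIKE 'innodb_io_capacity%';
          SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
          SHOW VARIABLES LIKE 'innodb_flush_neighbors';"

# Make sure the buffer pool isn't pushing the box into swap.
free -h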
Conclusion: Own Your Core
Cloud neutrality is not about refusing to use AWS or Azure; it's about not being held hostage by them. By building your core infrastructure on CoolVDS, you gain the latency advantage in the Nordic market, you ensure GDPR compliance by default, and you stabilize your monthly burn rate.
The tools exist. Terraform handles the provisioning. StrongSwan handles the networking. And KVM-based virtualization ensures you aren't fighting for CPU cycles.
Don't let your infrastructure strategy be dictated by a salesperson's slide deck. Test the latency yourself. Deploy a CoolVDS instance today and run the fio test above. If the numbers don't beat your current cloud provider, I'll eat my terminal.