Multi-Cloud is a Trap (Unless You Do This): A Norwegian CTO’s Survival Guide
Let’s be honest. For most organizations, "Multi-Cloud" is just a polite term for "expensive, unmanageable fragmentation." I’ve sat in too many boardrooms in Oslo where the strategy is simply to buy more AWS credits while frantically trying to patch a leaky compliance bucket. The promise of infinite scalability usually hits the hard wall of egress fees and the complexity of managing IAM roles across three different vendors.
But here is the reality we face in September 2023: We don't have a choice. Between the fallout of Schrems II, the vigilant eye of Datatilsynet, and the volatile pricing of US-based hyperscalers, relying on a single vendor is negligence.
The solution isn't to mirror your infrastructure across AWS, Azure, and GCP. That’s madness. The solution is a Hybrid Core Strategy. You keep your heavy, stable, and sensitive workloads on high-performance local infrastructure (like CoolVDS), and you use the public cloud only for what it’s actually good at: ephemeral burst computing and global CDN distribution.
The Latency & Legal Equation
In Norway, physics and law are your primary constraints. If your users are in Oslo, Bergen, or Trondheim, routing traffic through Frankfurt (eu-central-1) or Stockholm introduces unnecessary latency. We are talking about 20-30ms round trip versus <3ms with local peering at NIX (Norwegian Internet Exchange).
More importantly, data sovereignty is no longer optional. If you store PII (Personally Identifiable Information), the US CLOUD Act can compel US-headquartered providers to disclose data in ways that directly conflict with GDPR. By anchoring your database on a provider under Norwegian jurisdiction, you create a legal firewall. We architect this by placing the Stateful Layer (Database, NFS, Secrets) locally, and the Stateless Layer (Frontends, APIs) globally.
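In practice, that split is enforced at the database itself. As a minimal sketch (assuming PostgreSQL on the local core and the 10.0.0.0/24 WireGuard subnet used later in this article), pg_hba.conf accepts connections only from the encrypted mesh:

```
# pg_hba.conf on the local core: only the mesh may reach the Stateful Layer
# TYPE  DATABASE  USER  ADDRESS        METHOD
host    all       all   10.0.0.0/24    scram-sha-256
# Everything else is rejected outright
host    all       all   0.0.0.0/0      reject
```

The stateless cloud frontends then reach the database exclusively through the tunnel; nothing listens on a public address.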
The Architecture: Hub-and-Spoke with WireGuard
Forget IPsec VPNs. They are bloated, slow to handshake, and a pain to debug. In 2023, WireGuard is the standard for secure mesh networking: it runs inside the Linux kernel and is performant enough to saturate a 10Gbps link without melting your CPU.
Here is a real-world setup we deployed last month. We have a primary PostgreSQL cluster running on NVMe-backed CoolVDS instances in Oslo. We have auto-scaling stateless containers in AWS. They talk over a private encrypted mesh.
Step 1: The Hub Configuration (Local Node)
On your CoolVDS instance (Ubuntu 22.04 LTS), install WireGuard. This machine acts as the gateway to your secure data.
```bash
apt-get update && apt-get install -y wireguard

# Restrict file permissions before writing the private key to disk
umask 077
wg genkey | tee privatekey | wg pubkey > publickey
```
Configure /etc/wireguard/wg0.conf. Note that the PersistentKeepalive setting belongs on the spoke side (Step 2 below); it is crucial for traversing NAT gateways in public clouds.
```ini
[Interface]
Address = 10.0.0.1/24
SaveConfig = true
# Adjust eth0 if your public interface is named differently
PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
ListenPort = 51820
PrivateKey = [INSERT_LOCAL_PRIVATE_KEY]

# Peer: AWS Worker Node 1
[Peer]
PublicKey = [INSERT_AWS_PUBLIC_KEY]
AllowedIPs = 10.0.0.2/32
```
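One step the hub config implies but does not perform: because the hub forwards packets between peers and out through eth0, IPv4 forwarding must be enabled in the kernel before the MASQUERADE rules do anything useful. A minimal bring-up sequence, assuming the config file above:

```bash
# Enable IPv4 forwarding persistently (required for the NAT/forwarding rules)
echo "net.ipv4.ip_forward = 1" > /etc/sysctl.d/99-wireguard.conf
sysctl --system

# Bring the tunnel up now and on every boot
systemctl enable --now wg-quick@wg0

# Confirm the interface is up and listening on UDP 51820
wg show wg0
```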
Step 2: The Spoke Configuration (Cloud Node)
On the ephemeral cloud instance, the config is simpler. We force traffic meant for the database (IP 10.0.0.1) through the tunnel.
```ini
[Interface]
Address = 10.0.0.2/24
PrivateKey = [INSERT_AWS_PRIVATE_KEY]

[Peer]
PublicKey = [INSERT_LOCAL_PUBLIC_KEY]
Endpoint = 185.x.x.x:51820 # Your CoolVDS Static IP
AllowedIPs = 10.0.0.0/24
PersistentKeepalive = 25
```
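After bringing the spoke up with wg-quick, verify the mesh before pointing any application at it. Two quick checks, assuming the addresses above:

```bash
# 1. Confirm a handshake has actually completed with the hub
wg show wg0 latest-handshakes

# 2. Confirm the database core answers over the tunnel
ping -c 3 10.0.0.1
```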
Pro Tip: Don't rely on the cloud provider's default firewall alone. Use ufw on your local node to strictly limit UDP port 51820 to known IP ranges where possible, and add a pre-shared key (PresharedKey) to each peer pair for an extra symmetric-encryption layer that also mitigates future quantum attacks against the key exchange.
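Both hardening steps are cheap to apply. A sketch for the hub, assuming ufw is active (203.0.113.0/24 is a placeholder you would replace with your cloud provider's actual NAT egress range):

```bash
# Limit the WireGuard port to a known source range, reject everyone else
ufw allow from 203.0.113.0/24 to any port 51820 proto udp
ufw deny 51820/udp

# Generate a pre-shared key and add it under [Peer] on BOTH sides:
#   PresharedKey = <output of the command below>
wg genpsk
```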
Infrastructure as Code: Tying it Together
Managing this manually is a recipe for disaster. While you might use CloudFormation for AWS alone, a multi-cloud strategy demands Terraform: it lets us define the local KVM resources and the cloud resources in a single configuration, tracked in one state file.
Since CoolVDS provides standard Linux access, we use the Terraform remote-exec provisioner or a generic Libvirt provider to bootstrap the local environment. Here is a snippet illustrating how we provision the link between the two worlds:
```hcl
resource "aws_instance" "app_server" {
  # Ubuntu 22.04 LTS (AMI IDs are region-specific; look up the current one)
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t3.micro"

  connection {
    type        = "ssh"
    user        = "ubuntu"
    private_key = file("~/.ssh/id_rsa")
    host        = self.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo apt-get update",
      "sudo apt-get install -y wireguard",
      "echo '${local.wg_config}' | sudo tee /etc/wireguard/wg0.conf",
      "sudo systemctl enable wg-quick@wg0",
      "sudo systemctl start wg-quick@wg0"
    ]
  }
}
```
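The snippet references local.wg_config without defining it. One way to produce it (a sketch assuming a templatefile-rendered spoke config; the template filename and variable names here are illustrative, not part of the original setup):

```hcl
locals {
  # Render the spoke's wg0.conf from a template so keys and endpoints
  # stay out of the inline provisioner strings
  wg_config = templatefile("${path.module}/wg0.conf.tpl", {
    spoke_private_key = var.spoke_private_key
    hub_public_key    = var.hub_public_key
    hub_endpoint      = var.hub_endpoint # e.g. "185.x.x.x:51820"
  })
}
```

In a real deployment you would source the keys from a secrets manager rather than plain Terraform variables.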
The Economics of Egress
This is where the "Pragmatic" part of the title comes in. Hyperscalers operate on a "Roach Motel" model: data checks in for free, but you pay a premium to check it out. Egress fees can cost upwards of $0.09 per GB. If you are serving heavy media or running analytics, this kills your margins.
By hosting the primary dataset on CoolVDS, you benefit from significantly more generous bandwidth allocations typical of Nordic hosting standards. You only push necessary data out to the cloud spokes, rather than pulling massive datasets down from the cloud to your office or users.
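To make that concrete, a quick back-of-envelope: serving 10 TB (roughly 10,000 GB) of media per month at the $0.09/GB rate quoted above:

```bash
# Monthly egress bill at hyperscaler rates: 10,000 GB x $0.09/GB
awk 'BEGIN { printf "$%.2f per month\n", 10000 * 0.09 }'
# → $900.00 per month
```

That is a recurring four-figure line item for traffic a flat bandwidth allowance would absorb.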
| Cost Factor | Hyperscaler (AWS/GCP) | Local Core (CoolVDS) |
|---|---|---|
| Compute (vCPU) | Expensive (Shared/Burstable) | Predictable (Dedicated options) |
| Storage (NVMe) | Pay per IOPS + Capacity | Included High-IOPS NVMe |
| Egress Bandwidth | $0.08 - $0.12 / GB | Generous TB allowances |
| Data Sovereignty | Complex (US Cloud Act) | Native (Norwegian Law) |
Testing Latency: The Proof is in the Ping
Don't take latency claims at face value. Before finalizing any architecture, I run a simple mesh test. Here is a script I use to log jitter and latency over 24 hours to ensure the connection between the CoolVDS core and the cloud edge is stable enough for database transactions.
```bash
#!/bin/bash
# latency_mon.sh - log WireGuard mesh latency once a minute
TARGET="10.0.0.1"                    # Internal WireGuard IP of the Core
LOG_FILE="/var/log/mesh_latency.csv"

echo "Timestamp,Latency_ms" > "$LOG_FILE"
while true; do
    # Extract the ms value from the ping output; -W 2 caps the wait
    LATENCY=$(ping -c 1 -W 2 "$TARGET" | awk -F'time=' '/time=/ { split($2, a, " "); print a[1] }')
    # Log "timeout" instead of an empty field when a ping is lost
    echo "$(date -u +"%Y-%m-%dT%H:%M:%SZ"),${LATENCY:-timeout}" >> "$LOG_FILE"
    sleep 60
done
```
If you see jitter exceeding 5ms, check your MTU settings. A standard WireGuard MTU of 1420 usually avoids fragmentation, but on some cloud networks (like Azure), you may need to drop it to 1280.
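The MTU is set per interface in the same wg0.conf. A config fragment for the spoke, using the conservative value mentioned above:

```ini
[Interface]
Address = 10.0.0.2/24
# Drop from WireGuard's 1420 default if you see fragmentation-induced jitter
MTU = 1280
```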
Conclusion
A multi-cloud strategy isn't about collecting vendors like Pokémon cards. It's about leverage. You use the giants for their reach, but you keep your soul—and your data—grounded in infrastructure you can trust. For the Norwegian market, combining the elasticity of public cloud with the raw performance and compliance of a CoolVDS instance gives you the best TCO profile available in 2023.
Stop overpaying for latency. Deploy a control node on CoolVDS today, set up that WireGuard tunnel, and take back control of your infrastructure.