The Pragmatic CTO’s Guide to Multi-Cloud in 2025: Compliance, Cost, and Reality
Let’s be honest. For 90% of businesses operating in the Nordics, "Multi-Cloud" is just a fancy term for "paying triple for egress bandwidth." I’ve sat in boardrooms in Oslo where consultants pitched elaborate active-active setups across AWS, Azure, and GCP, promising 100% uptime. The reality? They delivered a fragmented architecture that cost a fortune and broke twice as often because of complexity.
But there is a valid use case for multi-cloud. It’s not about redundancy; it’s about leverage and sovereignty. In 2025, with the Datatilsynet (Norwegian Data Protection Authority) tightening the screws on GDPR enforcement and the continuous legal limbo of US-EU data transfers, where you store your data matters more than where you compute it.
This is the "Sovereign Core" strategy. Keep your persistent data and core logic on high-performance, predictable infrastructure within Norwegian borders (like CoolVDS), and use the hyperscalers only for what they are good at: ephemeral burst compute, specific AI models, or global CDN edges.
The Architecture: The Sovereign Core
In a recent project for a fintech client based in Bergen, we faced a dilemma. They needed the AI capabilities of AWS SageMaker but legally couldn't store customer financial records on US-controlled volumes under a strict reading of the Schrems II ruling. The solution wasn't to leave AWS, but to hollow it out.
We deployed the database and core API on CoolVDS NVMe instances in Oslo. This ensured:
- Legal Safety: Data rests on Norwegian soil under Norwegian law.
- Cost Control: No hidden IOPS charges. Database-heavy workloads on hyperscalers can bankrupt you.
- Latency: <2ms to NIX (Norwegian Internet Exchange).
We then connected this core to AWS Frankfurt via a mesh VPN. The AWS instances treated the CoolVDS database as a local resource. Yes, there is a latency penalty (approx 25-30ms), but for batch processing and async AI tasks, it is negligible.
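What does "treated as a local resource" look like in practice? Here is a minimal sketch from the worker's point of view, assuming the application reads its database host from the environment and the tunnel addresses match the WireGuard section below; the user and query are placeholders.

# On the AWS worker: the CoolVDS database is just another host on the tunnel subnet
export DATABASE_HOST=10.100.0.1   # hub address inside the WireGuard network
export DATABASE_PORT=3306
mysql -h "$DATABASE_HOST" -P "$DATABASE_PORT" -u app_user -p -e "SELECT 1;"   # sanity check over the tunnel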
The Glue: WireGuard Mesh
Forget IPsec. It’s 2025. If you aren’t using WireGuard, you are wasting CPU cycles. We use a hub-and-spoke model where the CoolVDS instance acts as the stable hub. Unlike dynamic cloud IPs that change, your CoolVDS static IP is the anchor.
Here is how we set up a secure, encrypted tunnel between an AWS ephemeral worker and the CoolVDS Core Database. This config ensures that only internal traffic flows through the tunnel.
Step 1: The CoolVDS Hub Configuration (`/etc/wireguard/wg0.conf`)
[Interface]
# Hub address inside the private tunnel subnet
Address = 10.100.0.1/24
SaveConfig = true
# Forwarding/NAT so traffic from peers can be routed out via eth0 (requires net.ipv4.ip_forward=1)
PostUp = iptables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -D FORWARD -i wg0 -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
ListenPort = 51820
PrivateKey = [SERVER_PRIVATE_KEY]

# Peer: AWS Worker Node
[Peer]
PublicKey = [AWS_WORKER_PUBLIC_KEY]
AllowedIPs = 10.100.0.2/32
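Both peers need their own keypair before either config is usable. Generating them takes two commands with the standard wireguard-tools package; run this on each node and swap the public keys:

umask 077                                   # keep the private key unreadable to other users
wg genkey | tee privatekey | wg pubkey > publickey
cat privatekey    # goes into this node's [Interface] PrivateKey
cat publickey     # goes into the other side's [Peer] PublicKey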
On the client side (AWS), we keep it lean. We don't want to route all traffic through Norway—just the database calls. This split-tunneling saves massive egress fees.
Step 2: The AWS Worker Configuration
[Interface]
Address = 10.100.0.2/32
PrivateKey = [WORKER_PRIVATE_KEY]
# Only useful if the hub actually runs a DNS resolver; drop this line otherwise
DNS = 10.100.0.1

[Peer]
PublicKey = [SERVER_PUBLIC_KEY]
Endpoint = 185.x.x.x:51820 # Your CoolVDS Static IP
AllowedIPs = 10.100.0.0/24 # Only route internal subnet traffic
PersistentKeepalive = 25
Pro Tip: Always set `PersistentKeepalive = 25` when one peer is behind a NAT (like AWS EC2). This prevents the stateful firewall from dropping the UDP mapping during idle periods.
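With keys and configs in place, bringing the tunnel up and confirming the handshake looks like this (assuming the config lives at `/etc/wireguard/wg0.conf` on both ends):

wg-quick up wg0                     # bring the tunnel up now
systemctl enable wg-quick@wg0       # and again on every boot
wg show wg0                         # a recent "latest handshake" means the peers see each other
ping -c 3 10.100.0.1                # from the AWS worker: reach the hub over the tunnel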
Orchestrating Hybrid Infrastructure with Terraform
Managing two providers manually is a recipe for disaster. We use Terraform to define the state of the world. In 2025, the provider ecosystem is mature enough to manage both a VDS and hyperscaler instances from a single `main.tf`.
CoolVDS provides a robust API, but when a native provider isn’t being used for a given feature set, a generic `remote-exec` provisioner handles the initial bootstrapping of the VDS. Here is a simplified example that spins up the AWS compute layer only after the CoolVDS data layer is confirmed healthy.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
    null = {
      source  = "hashicorp/null"
      version = "~> 3.0"
    }
  }
}

provider "aws" {
  region = "eu-central-1" # Frankfurt, matching the architecture above
}

# Rendered WireGuard config for the worker, injected below via user_data
variable "wireguard_conf" {
  type      = string
  sensitive = true
}

# 1. Define the CoolVDS Data Anchor (represented here as an external data source or managed resource)
resource "null_resource" "coolvds_anchor" {
  # In a real scenario, use the CoolVDS Terraform provider
  connection {
    type        = "ssh"
    user        = "root"
    host        = "185.x.x.x"
    private_key = file("~/.ssh/id_rsa")
  }

  # Fail the run early if the database or the tunnel on the anchor is unhealthy
  provisioner "remote-exec" {
    inline = [
      "systemctl status mysql",
      "wg show"
    ]
  }
}

# 2. Deploy AWS Worker only if the Anchor is reachable
resource "aws_instance" "gpu_worker" {
  ami           = "ami-0c55b159cbfafe1f0" # Example ID; use a current AL2023 AMI for your region
  instance_type = "g5.xlarge"
  depends_on    = [null_resource.coolvds_anchor]

  tags = {
    Name = "AI-Worker-Node"
  }

  user_data = <<-EOF
    #!/bin/bash
    yum install -y wireguard-tools
    echo "${var.wireguard_conf}" > /etc/wireguard/wg0.conf
    wg-quick up wg0
  EOF
}
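The `user_data` block expects `var.wireguard_conf` to contain the worker's rendered WireGuard config. One low-friction way to supply it, assuming you have rendered that config to a local file called `worker-wg0.conf` (the name is arbitrary), is an environment variable at apply time:

export TF_VAR_wireguard_conf="$(cat worker-wg0.conf)"   # Terraform picks up TF_VAR_* automatically
terraform init
terraform plan -out=tfplan
terraform apply tfplan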
The Latency Reality Check
Don't believe the marketing fluff about "instant" global replication. Physics is the law. When building this strategy, you must measure the RTT (Round Trip Time). If your application chats back and forth with the database 100 times to render one page, a 30ms latency becomes a 3-second load time.
We mitigate this by running read replicas or caching layers (Redis) on the AWS side, while writes go to the CoolVDS master in Norway.
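As a rough sketch of that pattern on the worker (the table, cache key, and credentials are hypothetical placeholders): serve hot reads from the local Redis and only cross the tunnel on a miss.

# Cache-aside on the AWS worker: local Redis first, CoolVDS master over the tunnel on a miss
KEY="account:42:balance"
VALUE=$(redis-cli get "$KEY")
if [ -z "$VALUE" ]; then
  VALUE=$(mysql -h 10.100.0.1 -N -u app_user -p"$DB_PASS" -e "SELECT balance FROM accounts WHERE id = 42")
  redis-cli set "$KEY" "$VALUE" EX 60 > /dev/null   # cache for 60 seconds
fi
echo "$VALUE"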
Use these quick commands to diagnose your topology:
- `ping -c 4 10.100.0.1`: Check basic connectivity through the tunnel.
- `nc -zv 10.100.0.1 3306`: Verify the MySQL port is actually reachable over VPN.
- `curl -w "Connect: %{time_connect} TTFB: %{time_starttransfer}\n" -o /dev/null -s https://api.yourdomain.no`: Measure the real user impact.
Data Sovereignty & The Cost of Egress
Why not just host everything on AWS Stockholm? Egress fees.
If you are serving media, backups, or large datasets to users, hyperscalers charge roughly $0.09 per GB. If you push 10TB a month, that’s $900 just for traffic. CoolVDS offers significantly more generous bandwidth allocations included in the base price. By serving the heavy static content directly from CoolVDS NVMe storage and only using AWS for the compute logic, we reduced one client's monthly bill from $4,200 to $850.
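Before migrating anything, measure what you actually ship. If `vnstat` happens to be installed on the box, it gives per-month totals, and the bill estimate is simple multiplication:

vnstat -m                          # monthly rx/tx totals per interface
# back-of-the-envelope: 10 TB/month of egress at roughly $0.09/GB
awk 'BEGIN { printf "Estimated egress bill: $%.0f/month\n", 10 * 1000 * 0.09 }'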
Load Balancing the Traffic
To make this seamless to the user, we put HAProxy in front as the single entry point and route traffic based on the request type. This is vital for maintaining high SEO scores; Google penalizes slow TTFB (Time to First Byte).
global
    log /dev/log local0
    maxconn 2000
    user haproxy
    group haproxy

defaults
    log     global
    mode    http
    option  httplog
    timeout connect 5000
    timeout client  50000
    timeout server  50000

frontend http_front
    bind *:80
    acl is_api path_beg /api/v2/ai_process
    use_backend aws_cluster if is_api
    default_backend coolvds_local

backend coolvds_local
    server local_node 127.0.0.1:8080 check

backend aws_cluster
    balance roundrobin
    # Traffic goes over the WireGuard tunnel IP
    server aws_node_1 10.100.0.2:80 check inter 2000 rise 2 fall 3
This configuration keeps 95% of traffic (static assets, standard web views) on the cost-effective CoolVDS infrastructure, while only routing the expensive API calls to the cloud.
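One operational note: always validate the file before reloading, because a typo here takes down both paths at once. The check mode is built into the binary:

haproxy -c -f /etc/haproxy/haproxy.cfg   # parse and validate the config without starting anything
systemctl reload haproxy                 # graceful reload; in-flight connections are drained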
Conclusion: Own Your Foundation
True professional DevOps isn’t about using the most tools; it’s about using the right tool for the job. In the Norwegian market, where data privacy is paramount and US hyperscaler costs keep spiraling, a hybrid approach is the only logical path.
By anchoring your infrastructure on CoolVDS, you ensure compliance and cost stability. You treat the public cloud as a utility, not a landlord. That is how you survive in 2025.
Next Step: Test the latency for yourself. Spin up a CoolVDS instance and run a WireGuard benchmark against your current provider. The results usually speak for themselves.
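A quick way to do that, assuming `iperf3` is available on both ends and the tunnel from the WireGuard section is up:

# On the CoolVDS hub
iperf3 -s
# From the remote side, through the tunnel
iperf3 -c 10.100.0.1 -t 30           # sustained throughput over WireGuard
ping -c 20 10.100.0.1 | tail -1      # RTT min/avg/max/mdev summary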