Multi-Cloud isn't a Buzzword; It's an Insurance Policy
If you are still hosting 100% of your infrastructure on a single US-based hyperscaler, you are arguably negligent. That is a harsh start, but look at the reality in September 2024. Between the fluctuating interpretation of Schrems II by Datatilsynet and the creeping egress fees that hit your P&L every month, the "all-in on AWS" strategy is dead.
I recently audited a fintech startup in Oslo. They were burning 40,000 NOK monthly just on data transfer costs because they were serving high-res static assets from S3 to Norwegian customers via CloudFront. The latency was acceptable, but the bill was not. Worse, they had PII (Personally Identifiable Information) sitting in a bucket that technically replicated to a region outside the EEA during a maintenance window.
The solution wasn't to abandon the cloud. It was to adopt a pragmatic multi-cloud architecture. Keep the proprietary ML models on the hyperscaler. Move the data, the heavy IOPS, and the static assets to a sovereign, local provider like CoolVDS.
The Architecture: Hybrid Connectivity
The goal is to treat your infrastructure as a unified mesh. We don't want public internet traffic between your database (on a VPS) and your application logic (potentially in a Kubernetes cluster elsewhere). We need a secure, private tunnel.
In 2024, IPsec is too clunky for agile teams. We use WireGuard. It is built into the Linux kernel, it adds minimal overhead, and because it is connectionless, a node that changes IP picks the tunnel back up instantly.
Step 1: Infrastructure as Code (Terraform)
You manage this complexity with Terraform. Do not click around in portals. Here is how you define a hybrid state, provisioning a heavy compute instance on CoolVDS for your database while keeping a stateless frontend group elsewhere.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
    # CoolVDS speaks an OpenStack-compatible API, so we drive it with the
    # community OpenStack provider rather than a bespoke plugin.
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = "~> 1.53"
    }
  }
}

provider "aws" {
  region = "eu-north-1" # Stockholm
}

provider "openstack" {
  auth_url = "https://api.coolvds.com/v3"
  region   = "oslo-1"
}

# The sovereign database node on CoolVDS
resource "openstack_compute_instance_v2" "db_primary" {
  name            = "postgres-primary-01"
  image_name      = "Ubuntu 24.04"
  flavor_name     = "nvme.8c.32g" # 8 vCPU, 32 GB RAM, NVMe
  key_pair        = "deploy-key"
  security_groups = ["default", "db-secure"]
  user_data       = file("scripts/init-wireguard.sh")
}
This configuration ensures that your state file knows about both environments. You can reference the public IP of db_primary and inject it into the security group rules guarding your frontend application.
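Once applied, that address is one state query away. A minimal sketch, assuming the resource address from the config above (access_ip_v4 is the OpenStack provider's attribute name):
# Read the DB node's public address straight out of Terraform state.
terraform state show openstack_compute_instance_v2.db_primary | grep access_ip_v4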
Step 2: Securing the Pipe
Latency matters. A round trip from Oslo to Stockholm (AWS eu-north-1) is roughly 12-15ms. From Oslo to Frankfurt, it can hit 30ms. By using CoolVDS in Oslo, you keep your data local. Don't take those figures on faith; measure them from your own nodes.
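A quick sketch; 203.0.113.10 is a documentation-range placeholder, so substitute your cloud VM's public IP:
# Round-trip latency from the CoolVDS node to the cloud VM.
ping -c 20 203.0.113.10

# Per-hop latency and packet loss (requires the mtr package).
mtr --report --report-cycles 20 203.0.113.10
With the numbers confirmed, connect the two environments securely with WireGuard.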
Check your kernel version first:
uname -r
WireGuard has been in mainline since kernel 5.6, so any 5.10+ LTS kernel is good to go. Here is a production-ready wg0.conf for the CoolVDS node (acting as the hub).
[Interface]
Address = 10.100.0.1/24
ListenPort = 51820
PrivateKey = <coolvds-node-private-key>
SaveConfig = true
# Adjust eth0 if your public interface is named differently (e.g. ens3).
PostUp = iptables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -D FORWARD -i wg0 -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE

[Peer]
# The hyperscaler client
PublicKey = <cloud-vm-public-key>
AllowedIPs = 10.100.0.2/32
Endpoint = 52.x.x.x:51820
PersistentKeepalive = 25
Pro Tip: Always set PersistentKeepalive = 25 when dealing with AWS or Azure Security Groups. NAT timeouts can silently drop your UDP packets, killing the tunnel without updating the interface status.
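The key fields above are placeholders. Generating real keys is two commands per node; a minimal sketch using the stock wg tooling (paths assume Ubuntu's wireguard package):
# Generate a keypair; umask keeps the private key readable only by root.
umask 077
wg genkey | tee /etc/wireguard/private.key | wg pubkey > /etc/wireguard/public.key

# Paste the keys into wg0.conf on each side, then start the tunnel at boot.
systemctl enable --now wg-quick@wg0

# Verify: you want a recent handshake and non-zero transfer counters.
wg show wg0
ping -c 3 10.100.0.2
If wg show reports a handshake but pings time out, check AllowedIPs and the forwarding rules before blaming the network.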
The IOPS Trade-off: Why Local Matters
Hyperscalers charge a premium for Provisioned IOPS (io2/gp3). If you are running a high-transaction PostgreSQL database or an Elasticsearch cluster, these costs scale linearly and brutally.
CoolVDS utilizes local NVMe storage passed through via KVM. We don't throttle your IOPS to upsell you a "Turbo" tier. You get the raw speed of the drive.
Run this fio test on your current instance versus a CoolVDS instance. The random read/write performance is usually where the battle is won.
fio --name=randwrite --ioengine=libaio --iodepth=1 --rw=randwrite --bs=4k --direct=1 --size=512M --numjobs=1 --runtime=240 --group_reporting
On a standard cloud volume, you might see 3,000 IOPS. On local NVMe, you should expect significantly higher numbers, often saturating the interface limit before the disk limit.
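The read path deserves the same scrutiny. Here is a variant of the same test that deepens the queue to expose the device ceiling rather than single-request latency (flags assume a recent fio 3.x):
fio --name=randread --ioengine=libaio --iodepth=32 --rw=randread --bs=4k --direct=1 --size=512M --numjobs=4 --runtime=240 --group_reporting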
To verify disk latency in real-time during load:
iostat -x 1
Watch the await column. If it consistently exceeds 5ms on NVMe-backed storage, your current host is either oversubscribed with noisy neighbors or throttling your I/O.
Traffic Routing with HAProxy
A true multi-cloud setup needs an intelligent router. You can't rely on DNS round-robin alone; resolvers cache records, so even with low TTLs a failover can take minutes to propagate. We deploy HAProxy on a lightweight CoolVDS edge node to route traffic. If the Oslo node is healthy, serve locally (zero egress cost). If it fails, fail over to the cloud replica.
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    timeout connect 5000
    timeout client  50000
    timeout server  50000

frontend main_ingress
    bind *:80
    bind *:443 ssl crt /etc/ssl/certs/site.pem
    default_backend app_nodes

backend app_nodes
    balance roundrobin
    option httpchk GET /healthz
    http-check expect status 200
    # Primary local node (high performance, low cost)
    server local_vps 10.100.0.1:80 check weight 100
    # Cloud backup (high cost, high availability)
    server cloud_vm 10.100.0.2:80 check weight 1 backup
This configuration prioritizes the local_vps. Traffic only flows to the cloud_vm if the local node fails health checks. This keeps your egress bill near zero during normal operations.
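Don't wait for a real outage to find out whether failover works. The stats socket declared in the global section doubles as a runtime API; here is a sketch of a failover drill, assuming socat is installed on the edge node:
# Inspect live state for every server in the backend.
echo "show servers state app_nodes" | socat stdio UNIX-CONNECT:/run/haproxy/admin.sock

# Drain the local node to force traffic onto the cloud replica...
echo "disable server app_nodes/local_vps" | socat stdio UNIX-CONNECT:/run/haproxy/admin.sock

# ...and re-enable it once the drill is over.
echo "enable server app_nodes/local_vps" | socat stdio UNIX-CONNECT:/run/haproxy/admin.sock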
Verification
Once deployed, verify your routing. Use curl to inspect the response headers and confirm requests terminate at your edge node.
curl -I https://api.yourdomain.no/v1/status
Also, check DNS to make sure the record resolves to your HAProxy edge node.
dig +short api.yourdomain.no
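While running the failover drill above, keep a probe going in a second terminal so you can watch the switchover happen:
# One request per second; watch status codes and total time during failover.
while true; do
  curl -so /dev/null -w "%{http_code} %{time_total}s\n" https://api.yourdomain.no/v1/status
  sleep 1
done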
Compliance is Not Optional
In Norway, the Datatilsynet is clear: you must know where your data lives. By anchoring your database on CoolVDS in Oslo, you simplify your GDPR narrative. Your data rests in Norway. It is processed in Norway. Backups remain in Norway (or encrypted off-site within the EEA).
This is not just about avoiding fines. It is a competitive advantage when selling B2B to Norwegian municipalities or healthcare providers who are allergic to the US CLOUD Act.
The Bottom Line
Multi-cloud isn't about complexity; it's about leverage. Use the hyperscalers for their specialized APIs, but don't let them hold your data hostage with egress fees and opaque storage pricing.
For the core of your infrastructure—the databases, the storage, and the heavy lifting—choose raw performance and sovereignty. CoolVDS offers the NVMe I/O performance and local presence required to make this hybrid architecture viable.
Stop renting CPU cycles next to noisy neighbors. Spin up a dedicated NVMe instance on CoolVDS today and test the latency yourself.