The "Cloud First" Hangover is Real
Three years ago, the directive was simple: "Move everything to the cloud." We migrated monoliths to microservices, database clusters to managed RDS instances, and accepted the monthly invoices as the cost of doing business. But in 2023, the hangover has set in. Between the unpredictable egress fees and the looming shadow of Schrems II, relying 100% on US-based hyperscalers isn't just expensive—it's a liability for Norwegian businesses.
I recently audited a chaotic infrastructure for a FinTech startup in Oslo. They were burning 40% of their monthly recurring revenue on AWS transfer costs. Why? Because they were serving heavy data payloads from Frankfurt to users in Trondheim, triggering massive egress charges. The latency was mediocre (25ms+), and the bill was astronomical.
We didn't delete their AWS account. We fixed the architecture. This is the guide on how to build a pragmatic multi-cloud setup that keeps the Datatilsynet happy and your CFO off your back.
The Compliance Elephant: Schrems II and Data Sovereignty
Let's be blunt. If you are processing personal data of Norwegian citizens, relying solely on standard clauses from US providers is risky. Since the Schrems II ruling, the legal ground has been shaky. The safest architectural pattern is Data Residency Separation.
Pro Tip: Keep your encrypted backups and sensitive PII (Personally Identifiable Information) on strictly European-owned infrastructure under Norwegian jurisdiction, while using hyperscalers for stateless compute loads.
This is where a provider like CoolVDS fits the architectural puzzle. By placing your primary database or cold storage on a CoolVDS NVMe instance in Oslo, you ensure data sovereignty. You then treat AWS or Azure as a "burst" layer for heavy computation, sending only anonymized datasets across the wire.
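As a concrete example, the nightly backup job on the application side can be as simple as an encrypted dump pushed over SSH to the Oslo node. A minimal sketch, assuming PostgreSQL and GPG; the host oslo-hub, the key backups@example.no, and the database appdb are hypothetical names:

#!/bin/bash
# Nightly job: encrypted database dump shipped to the sovereign storage node
# (oslo-hub, backups@example.no, and appdb are placeholder names)
set -euo pipefail

pg_dump -Fc appdb \
  | gpg --encrypt --recipient backups@example.no \
  | ssh backup@oslo-hub "cat > /srv/backups/appdb-$(date +%F).dump.gpg"

The PII never leaves Norwegian jurisdiction in plaintext; the hyperscaler only ever sees the anonymized working set.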
The Connectivity Layer: Bridging the Gap
The biggest challenge in multi-cloud is networking. You cannot rely on public internet routing for database replication between your CoolVDS instance and an AWS EC2 instance. It is insecure and latency-prone. We need a mesh.
In 2023, WireGuard is the de facto standard for this. It is faster than IPsec and easier to configure than OpenVPN. Here is how we set up a secure tunnel between a CoolVDS server (acting as the Data Hub) and an external cloud node.
1. The Hub Config (CoolVDS - Oslo)
First, install WireGuard. On our KVM-based infrastructure, the kernel support is native.
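On a Debian/Ubuntu image that is a single package plus a key pair, generated with the standard wg tooling (the file paths here are just a convention):

apt install wireguard

# Generate this node's key pair; repeat on the spoke
wg genkey | tee /etc/wireguard/private.key | wg pubkey > /etc/wireguard/public.key
chmod 600 /etc/wireguard/private.key

The private key goes into the PrivateKey field of the config below; exchange the public keys between hub and spoke.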
# /etc/wireguard/wg0.conf
[Interface]
Address = 10.100.0.1/24
SaveConfig = true
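# NAT tunnel traffic out the public interface (replace eth0 if your NIC is named differently, e.g. ens3)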
PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
ListenPort = 51820
# Paste the hub's private key here (generated above)
PrivateKey = <hub-private-key>

[Peer]
# The Hyperscaler Node
PublicKey = <spoke-public-key>
AllowedIPs = 10.100.0.2/32

2. The Spoke Config (Hyperscaler - Frankfurt)
# /etc/wireguard/wg0.conf
[Interface]
Address = 10.100.0.2/24
PrivateKey = <spoke-private-key>

[Peer]
PublicKey = <hub-public-key>
Endpoint = 185.x.x.x:51820 # Your CoolVDS Static IP
AllowedIPs = 10.100.0.0/24
PersistentKeepalive = 25

With this setup, your application in Frankfurt talks to the database at `10.100.0.1`. The traffic is encrypted, and because CoolVDS peers directly at NIX (Norwegian Internet Exchange), the latency for your Norwegian users hitting the origin server is minimal.
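Bring the tunnel up on both nodes and confirm the handshake before pointing any replication traffic at it (standard wg-quick and wg commands):

# Enable and start the tunnel on both nodes
systemctl enable --now wg-quick@wg0

# Verify the peer shows a recent handshake
wg show wg0

# From the Frankfurt spoke, confirm the Oslo hub is reachable
ping -c 3 10.100.0.1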
Infrastructure as Code: Terraform State Strategy
Managing two clouds manually is a recipe for disaster. You need a unified control plane. Terraform allows us to define the "Hub and Spoke" model in a single repository.
The trick is using distinct providers. Do not try to abstract everything. Be explicit.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
    # Generic provider for KVM/Libvirt or CloudInit
    libvirt = {
      source  = "dmacvicar/libvirt"
      version = "0.7.1"
    }
  }
}

provider "aws" {
  region = "eu-central-1"
}

provider "libvirt" {
  uri = "qemu+ssh://user@coolvds-host/system"
}

# Define the compute bursting layer
resource "aws_instance" "worker_node" {
  ami           = "ami-0c55b159cbfafe1f0" # AMI IDs are region-specific; look up the current one for eu-central-1
  instance_type = "t3.medium"

  # User data to install WireGuard and connect back to Oslo
  user_data = file("scripts/join_mesh.sh")
}

This configuration allows you to spin up 50 AWS nodes during Black Friday sales, all automatically connecting back to your persistent, cost-effective storage core on CoolVDS.
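The scripts/join_mesh.sh file is referenced above but not shown; a minimal sketch of what it might contain, assuming a Debian-based AMI and keys injected from your secrets pipeline rather than hardcoded:

#!/bin/bash
# Hypothetical join_mesh.sh: install WireGuard and join the Oslo hub on first boot
set -euo pipefail

apt-get update && apt-get install -y wireguard

# In production, fetch keys from a secrets manager; each worker also needs
# a unique Address, so template this file per instance
cat > /etc/wireguard/wg0.conf <<'EOF'
[Interface]
Address = 10.100.0.2/24
PrivateKey = <spoke-private-key>

[Peer]
PublicKey = <hub-public-key>
Endpoint = 185.x.x.x:51820
AllowedIPs = 10.100.0.0/24
PersistentKeepalive = 25
EOF

chmod 600 /etc/wireguard/wg0.conf
systemctl enable --now wg-quick@wg0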
The Economics of Latency and IOPS
Why bother with the local node? Why not just stay in `eu-central-1`? The answer comes down to IOPS per dollar.
Hyperscalers throttle disk I/O aggressively unless you pay for "Provisioned IOPS" (io1/io2 volumes). A standard gp3 volume is fine for logs, but if you are running a high-transaction PostgreSQL database, you will hit the ceiling.
At CoolVDS, we don't play the throttling game. Our NVMe storage is local to the hypervisor, and we routinely see read speeds exceeding 3000 MB/s on standard instances. To get that throughput on AWS, you would pay hundreds of dollars a month for the EBS volume alone.
| Metric | Hyperscaler (gp3) | CoolVDS (Local NVMe) |
|---|---|---|
| Max Throughput | 125 - 1000 MiB/s | 3000+ MiB/s |
| Latency (Oslo User) | 25-35 ms | 2-5 ms |
| Egress Cost | $0.09/GB | Included (TB allowances) |
| Data Jurisdiction | US Company (Frankfurt) | Norwegian Company (Oslo) |
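Don't take vendor throughput numbers on faith, ours included; measure them. A quick sequential-read test with fio (the 4 GiB file size and job name are arbitrary choices):

# Sequential 1 MiB reads, bypassing the page cache
fio --name=seqread --rw=read --bs=1M --size=4G \
    --direct=1 --ioengine=libaio --group_reporting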
Orchestrating the Traffic
For the front end, use a geo-DNS strategy or a smart load balancer. If the request originates from Norway, route it to the CoolVDS ingress. If it is global traffic, send it to the CDN or the hyperscaler edge.
Here is a snippet of an Nginx configuration optimized for this split-routing, using the GeoIP2 module (ngx_http_geoip2_module; it assumes the module is loaded and the MaxMind databases are installed):
http {
    geoip2 /etc/maxmind/GeoLite2-Country.mmdb {
        $geoip2_data_country_iso_code country iso_code;
    }

    map $geoip2_data_country_iso_code $backend_cluster {
        default http://aws_cluster;
        NO      http://coolvds_local;
    }

    upstream aws_cluster {
        server 10.100.0.2:80;
        server 10.100.0.3:80;
    }

    upstream coolvds_local {
        server 127.0.0.1:8080;
    }

    server {
        listen 80;
        location / {
            proxy_pass $backend_cluster;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}

This logic ensures that your Norwegian customers get the lowest possible latency by staying within the national infrastructure, while international traffic is offloaded.
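Before reloading, sanity-check the config with the standard nginx commands:

nginx -t && systemctl reload nginx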
Conclusion: Be Smart, Not Just "Cloud Native"
Being "Cloud Native" doesn't mean you have to rent everything from Amazon or Google. It means using the right tool for the job. For infinite scalability, use them. For high-performance I/O, data sovereignty, and predictable billing, use local infrastructure.
We built CoolVDS to be the reliable, high-speed anchor in this hybrid architecture. We provide the raw NVMe power and the bandwidth without the hidden fees, so you can build systems that are robust, compliant, and fast.
Don't let latency or legal fears paralyze your stack. Deploy a high-performance NVMe instance in Oslo today and start owning your data again.