The "Cloud Agnostic" Myth vs. The Data Sovereignty Reality
It has been six months since the CJEU dropped the Schrems II ruling, and if you are a CTO operating in the EEA, you are likely still auditing your data flows. The comfortable illusion that we could simply dump all Norwegian citizen data into us-east-1 and hide behind Privacy Shield is dead.
But legal compliance is only one head of the hydra. The other is the silent killer of modern OpEx: Egress Fees.
In this architecture breakdown, we are going to ignore the marketing fluff around "infinite scalability" and look at a pragmatic, battle-tested multi-cloud setup. We will build a Hybrid Core architecture where your data lives safely in Norway (on CoolVDS NVMe storage), and your compute bursts happen wherever they need to. We focus on latency minimization via NIX (Norwegian Internet Exchange) and cost control.
The Architecture: The "Norwegian Core" Pattern
The mistake most DevOps teams make is mirroring their entire stack across providers. That is not multi-cloud; that is multi-billing. Instead, we treat the VPS provider as the "Stateful Core" and the hyperscaler as the "Stateless Edge."
Why this split?
- Data Gravity: Databases require high I/O. Hyperscalers throttle IOPS unless you pay for "Provisioned IOPS" (io1/io2 volumes). CoolVDS provides raw NVMe access by default.
- Compliance: Storing user tables on a server physically located in Oslo satisfies Datatilsynet requirements more easily than explaining AWS KMS encryption keys managed in Virginia.
- Latency: If your customers are in Scandinavia, a roundtrip to Frankfurt (AWS eu-central-1) is ~15-20ms. A roundtrip to a local Oslo datacenter is <3ms.
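These numbers are easy to sanity-check from any client in Oslo. A quick sketch using plain ICMP (the endpoints are placeholders; mtr will give you per-hop detail if the round-trips look off):

# Compare round-trips: Frankfurt hyperscaler vs. local Oslo node
ping -c 10 <your-eu-central-1-endpoint>
ping -c 10 <your-coolvds-oslo-ip>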
Step 1: The Secure Mesh (WireGuard)
Forget IPsec. It is bloated, slow to handshake, and a pain to debug. WireGuard has been in the mainline kernel since Linux 5.6 (released last year, in March 2020), and it is the only VPN technology you should be using to link clouds in 2021.
Here is how we link a CoolVDS instance (The Core) with an external compute node. We assume you are running Ubuntu 20.04 LTS.
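Before touching the configs, generate a key pair on each node. The wg utility ships with the wireguard package:

umask 077
wg genkey | tee /etc/wireguard/private.key | wg pubkey > /etc/wireguard/public.key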
On the CoolVDS Node (Oslo):
# /etc/wireguard/wg0.conf (CoolVDS core)
[Interface]
Address = 10.100.0.1/24
ListenPort = 51820
PrivateKey = <YOUR_SERVER_PRIVATE_KEY>
# Match the tunnel MTU discussed in the performance note below
MTU = 1420
SaveConfig = true
PostUp = ufw route allow in on wg0 out on eth0
PostUp = iptables -t nat -I POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE

[Peer]
# The Stateless Edge Node
PublicKey = <EDGE_NODE_PUBLIC_KEY>
AllowedIPs = 10.100.0.2/32
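The edge node gets the mirror image. A minimal sketch (keys and the endpoint IP are placeholders; PersistentKeepalive keeps the tunnel alive through the hyperscaler's NAT):

# /etc/wireguard/wg0.conf (edge node)
[Interface]
Address = 10.100.0.2/24
PrivateKey = <EDGE_NODE_PRIVATE_KEY>
MTU = 1420

[Peer]
# The CoolVDS Core in Oslo
PublicKey = <YOUR_SERVER_PUBLIC_KEY>
Endpoint = 185.xxx.xxx.xxx:51820
AllowedIPs = 10.100.0.0/24
PersistentKeepalive = 25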
Performance Note: We use a standard MTU of 1420 here to be safe with encapsulation overhead, but since CoolVDS supports Jumbo Frames on internal networks, you can tune this if you are pushing heavy backups between local nodes.
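With both sides configured, bringing the mesh up is two commands per node; wg show should report a recent handshake once the peers see each other:

wg-quick up wg0
systemctl enable wg-quick@wg0
wg show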
Step 2: Managing State with Terraform
Managing two providers manually is a recipe for drift. While CoolVDS handles the heavy lifting via KVM, we can use Terraform 0.14 to orchestrate the environment. The goal is to define the infrastructure where the state is permanent, but the compute is ephemeral.
Below is a stripped-down main.tf demonstrating how to provision the core infrastructure. Note that we rely on a remote-exec provisioner for the low-level VPS performance tuning, as standard cloud-init often misses the kernel-level optimizations required for high-throughput NVMe.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.27"
    }
    # The null_resource below needs the null provider declared as well
    null = {
      source  = "hashicorp/null"
      version = "~> 3.0"
    }
  }
}
# The Stateless Front-End (AWS)
resource "aws_instance" "edge_node" {
ami = "ami-05f7491af5eef733a" # Ubuntu 20.04
instance_type = "t3.micro"
tags = {
Name = "Edge-Proxy-01"
}
}
# The Stateful Core (CoolVDS context)
# We use a null_resource to trigger configuration management
# on our persistent Norwegian node.
resource "null_resource" "coolvds_core_config" {
connection {
type = "ssh"
user = "root"
host = "185.xxx.xxx.xxx" # Your CoolVDS Static IP
private_key = file("~/.ssh/id_rsa")
}
  provisioner "remote-exec" {
    inline = [
      # Enable routing for the WireGuard mesh (runtime only; persistence sketch below)
      "sysctl -w net.ipv4.ip_forward=1",
      # MySQL 8.0 (Ubuntu 20.04's mysql-server), matching the tuning in Step 3
      "apt-get update && apt-get install -y wireguard mysql-server",
      # Optimize for NVMe: the controller sorts requests better than the kernel
      "echo none > /sys/block/vda/queue/scheduler"
    ]
  }
}
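One caveat with this pattern: remote-exec fires only when the null_resource is first created. If you change the inline commands later, force a re-run by tainting the resource (or key a triggers block to a hash of your configuration):

terraform taint null_resource.coolvds_core_config
terraform apply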
Pro Tip: Never rely on default Linux I/O schedulers for database servers. On CoolVDS NVMe instances, switching the scheduler to none (or noop on older, pre-blk-mq kernels), as shown above, can reduce CPU overhead by 15-20% during heavy write operations, because the physical NVMe controller handles request ordering better than the kernel can.
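Note that both the sysctl and the scheduler switch in the provisioner are runtime-only and vanish on reboot. A sketch of making them persistent, assuming the virtio disk shows up as vda as in the provisioner above:

# Persist IP forwarding
echo "net.ipv4.ip_forward = 1" > /etc/sysctl.d/99-wireguard.conf
sysctl --system

# Persist the I/O scheduler choice via udev
cat > /etc/udev/rules.d/60-io-scheduler.rules <<'EOF'
ACTION=="add|change", KERNEL=="vd[a-z]", ATTR{queue/scheduler}="none"
EOF
udevadm control --reload-rules && udevadm trigger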
Step 3: Database Tuning for Local Performance
Having the server in Norway is useless if the configuration bottlenecks the hardware. I often see my.cnf files that look like they were written for spinning rust HDDs from 2015.
When you have guaranteed resources—which is the main selling point of KVM virtualization on CoolVDS over shared containers—you can be aggressive with memory allocation. For a 16GB RAM instance running MySQL 8.0, your configuration should explicitly target the buffer pool:
[mysqld]
# Allocate 70-80% of RAM if dedicated to DB
innodb_buffer_pool_size = 12G
# Critical for NVMe SSDs
innodb_io_capacity = 2000
innodb_io_capacity_max = 4000
innodb_flush_neighbors = 0
# Durability vs Performance trade-off (set to 2 only if you have replication)
innodb_flush_log_at_trx_commit = 1
Setting innodb_flush_neighbors = 0 is crucial. The old concept of flushing neighboring pages was designed for rotating platters to minimize head seek time. On NVMe storage, this is just wasted CPU cycles.
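After restarting mysqld, confirm the settings took effect and keep an eye on the buffer pool hit rate. Innodb_buffer_pool_reads counts reads that had to hit disk; it should stay tiny relative to Innodb_buffer_pool_read_requests:

mysql -e "SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';"
mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';"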
The Economic Argument: Bandwidth
Here is the math nobody puts in their pitch deck. AWS charges roughly $0.09 per GB for egress traffic, so serving 10 TB of media or datasets per month to Norwegian users from Frankfurt runs about $900 per month in transfer fees alone. That is a recurring "data tax" on every byte you ship.
By keeping the heavy assets on a VPS Norway instance with CoolVDS, you leverage generous bandwidth allowances (often unmetered or significantly higher caps). You use the hyperscaler only for lightweight compute or global CDN termination, fetching data from the Core only when necessary.
Benchmarking the Link
Don't trust the marketing. Verify the link between your cloud providers using iperf3. Start a listener on the core, then drive it from the edge node (four parallel streams; -R reverses direction so the core does the sending):
# On the CoolVDS core
iperf3 -s
# On the edge node
iperf3 -c 185.xxx.xxx.xxx -P 4 -R
If you aren't seeing near-line speed, check your MTU settings inside the WireGuard tunnel. Fragmentation is the enemy of throughput.
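To find where fragmentation starts, ping across the tunnel with the do-not-fragment flag. 1392 is the largest ICMP payload that fits a 1420-byte MTU (1420 minus 28 bytes of IP and ICMP headers):

ping -M do -s 1392 10.100.0.1
# If this fails while smaller payloads pass, lower MTU in wg0.conf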
Conclusion
Multi-cloud in 2021 isn't about using every service menu item from Azure and Google. It's about strategic placement of assets. It's about recognizing that while US tech giants offer great toolkits, they carry legal baggage and unpredictable costs.
For Norwegian enterprises, the pragmatic architecture is clear: Keep the data home. Keep the logic local. Use the global cloud for reach, not for storage.
If you need a node that respects both your budget and GDPR data residency requirements, stop over-provisioning IOPS in Frankfurt. Deploy a high-performance NVMe instance on CoolVDS today and build a core that actually owns its data.