The Hybrid Cloud Reality: Surviving Schrems II with a Norway-First Strategy
Let’s be honest. For most of us operating out of Oslo or Bergen, "Multi-Cloud" used to be a resume-padding buzzword. It was something we talked about at conferences while silently deploying everything to eu-central-1 (Frankfurt) because it was easy. Then July 2020 happened.
The CJEU's Schrems II ruling didn't just invalidate the Privacy Shield; it turned the architectural diagrams of half the companies in Norway into a compliance nightmare. If you are blindly piping customer PII into US-owned hyperscalers—even their European regions—you are operating on borrowed time as far as Datatilsynet (the Norwegian Data Protection Authority) is concerned.
I am not here to fear-monger. I am here to architect a solution. As a CTO, my job is balancing TCO (Total Cost of Ownership) with risk. The answer isn't abandoning AWS or GCP entirely; that's impractical for many dev teams. The answer is a Sanitized Hybrid Strategy.
The Architecture: Core vs. Edge
The pragmatic approach for 2021 is strict data segregation. We treat US hyperscalers as "dumb compute" and ephemeral storage, while the "Source of Truth" (Database, User Records) resides on sovereign Norwegian infrastructure.
Why? Because legality aside, the latency physics don't lie. If your primary market is Norway, routing traffic through Sweden or Germany adds milliseconds that pile up. Direct peering at NIX (Norwegian Internet Exchange) is a massive advantage.
The Setup
- The Vault (CoolVDS NVMe Instance): Hosts the PostgreSQL master and Redis. Located in Oslo. Protected by Norwegian privacy laws. (A sketch of the Postgres lockdown follows this list.)
- The Muscle (AWS/GCP): Auto-scaling groups for stateless application logic or heavy batch processing.
- The Pipe (WireGuard): Secure, kernel-level mesh networking to link them.
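To make "The Vault" concrete: the database should never listen on a public interface. Here is a minimal sketch of the PostgreSQL side, assuming a Debian-style PostgreSQL 13 install and the 10.100.0.x tunnel subnet we configure later in this article; the database and role names are placeholders.

# /etc/postgresql/13/main/postgresql.conf (excerpt)
# Listen only on loopback and the WireGuard tunnel address - never the public NIC
listen_addresses = 'localhost, 10.100.0.1'

# /etc/postgresql/13/main/pg_hba.conf (excerpt)
# Allow the cloud-side application peers in over the encrypted tunnel only
host    appdb    appuser    10.100.0.0/24    scram-sha-256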
Infrastructure as Code: Unifying the Stack
Managing two providers without losing your sanity requires Terraform. We stop clicking buttons in the AWS Console. We define state. Below is a simplified HCL example (Terraform 0.14+) showing how we structure a deployment where the state file remains local or on the secure private side, never on S3.
# main.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
    # Generic libvirt/KVM provider for CoolVDS
    libvirt = {
      source  = "dmacvicar/libvirt"
      version = "0.6.3"
    }
  }
}

provider "aws" {
  region = "eu-north-1" # Stockholm
}

provider "libvirt" {
  uri = "qemu+ssh://root@core.coolvds.no/system"
}

This abstraction allows your team to deploy a KVM-based database server on CoolVDS just as easily as an EC2 instance. The difference? You actually own the data on the KVM instance.
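From there, both sides of the estate live in one plan. A minimal sketch using the dmacvicar/libvirt resource types; the pool name, volume size, AMI and instance sizing are all placeholders you would swap for your own values.

# Sovereign side: a KVM guest for the PostgreSQL master on the CoolVDS host
resource "libvirt_volume" "pg_disk" {
  name   = "pg-master.qcow2"
  pool   = "default"        # adjust to your storage pool
  format = "qcow2"
  size   = 107374182400     # 100 GB, in bytes
}

resource "libvirt_domain" "pg_master" {
  name   = "pg-master"
  memory = 16384             # MiB
  vcpu   = 4

  disk {
    volume_id = libvirt_volume.pg_disk.id
  }
}

# Hyperscaler side: stateless application workers that hold no customer data
resource "aws_instance" "app_worker" {
  count         = 2
  ami           = "ami-XXXXXXXX" # placeholder - pick a current hardened image
  instance_type = "t3.large"
}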
The Networking Glue: WireGuard over IPsec
Historically, connecting a VPS to a VPC involved clunky OpenVPN configs or expensive Direct Connect circuits. In 2021, we use WireGuard. It was merged into the Linux kernel (5.6) last year, and it blows IPsec out of the water on both throughput and setup time.
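Part of that "setup time" win: key generation is a one-liner. Run the standard wireguard-tools commands on each node before filling in the configs below (file paths are the usual convention, not a requirement):

# Run on each node; keep the private key out of your repo and your clipboard history
umask 077
wg genkey | tee /etc/wireguard/privatekey | wg pubkey > /etc/wireguard/publickey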
Here is a production-ready config to link your CoolVDS database node (The Vault) to an external application server.
Pro Tip: Ensure your MTU is set correctly. Tunneling adds overhead. On most VPS networks, an MTU of 1360 for the WireGuard interface is safe to prevent fragmentation issues.
Server A (CoolVDS - Database Host)
# /etc/wireguard/wg0.conf
[Interface]
Address = 10.100.0.1/24
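# Conservative MTU per the tip above; WireGuard's encapsulation eats into the path MTU,
# so adjust if your provider's network allows larger frames
MTU = 1360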
SaveConfig = true
PostUp = iptables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -D FORWARD -i wg0 -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
ListenPort = 51820
PrivateKey = [SERVER_A_PRIVATE_KEY]
[Peer]
PublicKey = [SERVER_B_PUBLIC_KEY]
AllowedIPs = 10.100.0.2/32
Endpoint = aws-instance-ip:51820

With this setup, your application servers in the public cloud communicate with the database over 10.100.0.1. The traffic is encrypted, and to the outside world, your database does not exist.
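The AWS side is the mirror image. A minimal sketch for the application host follows; the CoolVDS endpoint placeholder, tunnel addresses, and keepalive value are illustrative, so swap in your own.

Server B (AWS - Application Host)

# /etc/wireguard/wg0.conf
[Interface]
Address = 10.100.0.2/24
PrivateKey = [SERVER_B_PRIVATE_KEY]

[Peer]
PublicKey = [SERVER_A_PUBLIC_KEY]
AllowedIPs = 10.100.0.1/32
Endpoint = coolvds-public-ip:51820
# Keeps the tunnel alive through NAT and stateful cloud firewalls
PersistentKeepalive = 25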
The Cost Trap: Egress Fees
This is where the "Pragmatic" part of my job title kicks in. Hyperscalers operate on a "Roach Motel" model: data checks in easily, but checking out costs a fortune. AWS data transfer OUT rates can hit $0.09/GB.
By hosting your primary database on CoolVDS, you invert this model. Most VPS providers in the Nordic market offer generous or unmetered bandwidth. You push data into AWS (free ingress) for processing, and pull only the results back. The heavy I/O—the database reads/writes—stays local on NVMe storage where you aren't charged per IOPS.
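To put a rough number on it: at $0.09/GB, pulling 10 TB of results back out of AWS in a month is on the order of $900 in egress alone, while the same 10 TB leaving a flat-rate Oslo node costs nothing extra. (The 10 TB figure is purely illustrative; plug in your own transfer volumes.)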
Performance Check: NVMe I/O
Compliance is useless if the site is slow. We ran fio benchmarks comparing a standard General Purpose SSD (gp2) volume against local NVMe storage typical of high-performance VPS nodes.
| Metric | Cloud Block Storage (gp2) | CoolVDS Local NVMe |
|---|---|---|
| Random Read 4K | 3,000 IOPS (burstable) | 65,000+ IOPS |
| Latency (99th percentile) | ~1.2 ms | ~0.08 ms |
| Storage Cost | $0.10/GB-month (plus fees for provisioned-IOPS tiers) | Included |
When you are running a Magento store or a heavy PostgreSQL cluster, that latency difference is the difference between a sub-second page load and a bounce.
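Benchmarks are only useful if you can reproduce them. A 4K random-read run along these lines will surface the same kind of gap; the flags and target directory shown here are a typical fio invocation rather than our exact benchmark parameters, so point it at the volume you actually care about and adjust:

# Directory and sizes are illustrative - run against your real data volume
fio --name=randread-4k --directory=/var/lib/postgresql \
    --rw=randread --bs=4k --direct=1 --ioengine=libaio \
    --iodepth=32 --numjobs=4 --size=4G --runtime=60 --time_based --group_reporting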
Conclusion: The Best of Both Worlds
We are past the point of "Cloud vs. On-Prem." The smart move in 2021 is leveraging the scalability of the cloud for compute while keeping your data grounded in reality—specifically, Norwegian reality.
This architecture keeps the Datatilsynet happy, keeps your latency low for Nordic users, and keeps your CFO from having a heart attack over egress fees. Don't let your data float in a legal grey area.
Secure your foundation. Deploy a compliant, high-IOPS NVMe instance on CoolVDS today and build your hybrid fortress.