Surviving Schrems II: A Pragmatic Multi-Cloud Strategy for Norwegian Enterprises

Multi-Cloud is a Lie (Unless You Handle Data Sovereignty Correctly)

It is October 2021. The dust from the Schrems II ruling has not settled. If you are a CTO or Lead Architect in Norway, you are currently sandwiched between two opposing forces: a business demanding infinite scalability, and Datatilsynet (the Norwegian Data Protection Authority) demanding you stop sending PII (Personally Identifiable Information) to US-controlled clouds.

Most "Multi-Cloud" strategies I see are just expensive messes: the combined billing complexity of AWS and Azure, plus the latency penalty of routing traffic through Frankfurt while your users sit in Oslo.

This is not a guide on how to spend more money. This is a battle-tested architecture for a Hybrid Cloud approach that keeps your data legally safe in Norway, your latency low, and your infrastructure redundant.

The Architecture: The "Data Sovereign" Core

The pragmatic solution in 2021 isn't to abandon public clouds entirely, but to treat them as stateless compute layers while keeping the "State" (Database, Customer Data) on jurisdictionally safe, high-performance local infrastructure. We call this the Core-Edge Hybrid.

Pro Tip: Network latency between Oslo and standard European cloud regions (Frankfurt/Amsterdam) usually sits between 15ms and 25ms. For a Magento store or a high-frequency trading app, that round-trip time (RTT) on every database query kills your Time to First Byte (TTFB).
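The back-of-envelope math makes the point. The query count here is an assumption (30 sequential queries is plausible for an unoptimized Magento page render, not a measured figure):

```shell
# Added TTFB from cross-border DB round trips (illustrative numbers)
QUERIES=30   # sequential DB queries per page render (assumption)
RTT_MS=20    # Oslo <-> Frankfurt round trip, mid-range of 15-25ms
ADDED_MS=$((QUERIES * RTT_MS))
echo "Added TTFB: ${ADDED_MS} ms"
```

That is 600ms of dead time on the wire before a single query has even executed on the database.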

Step 1: The Infrastructure Layer (Terraform)

We need a single control plane. Terraform v1.0 (released earlier this year) is the standard. Do not click buttons in a GUI. If it's not in git, it doesn't exist.

Here is how we structure a provider-agnostic deployment. We use CoolVDS KVM instances for the Data Core (MySQL/PostgreSQL) because of the dedicated NVMe I/O, and a secondary provider for burstable front-end traffic.

# main.tf structure

terraform {
  required_providers {
    coolvds = {
      source = "coolvds/provider" # Hypothetical local provider or generic KVM/OpenStack
      version = "~> 1.2"
    }
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.63"
    }
  }
}

# The Safe Haven: Database Node in Oslo
resource "coolvds_instance" "db_primary" {
  region    = "no-oslo-1"
  image     = "debian-11" 
  plan      = "nvme-dedicated-8cpu"
  label     = "production-db-core"
  
  # Essential for compliance: Data resides physically in Norway
  tags = ["gdpr-compliant", "sovereign"]
}
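The stateless edge lives wherever burstable compute is cheapest. A matching sketch for the AWS side follows; the AMI ID and instance type are placeholders, not recommendations:

```hcl
# The Stateless Edge: compute only, no customer data at rest
resource "aws_instance" "app_edge" {
  ami           = "ami-XXXXXXXX" # placeholder: a Debian 11 AMI in your EU region
  instance_type = "t3.medium"    # placeholder: size to your traffic

  tags = {
    Role = "stateless-frontend"
    Note = "No PII stored here - state lives in Oslo"
  }
}
```

The discipline matters more than the provider: if a resource in the public cloud ever holds customer data at rest, the sovereignty argument collapses.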

Step 2: The Network Mesh (WireGuard)

IPsec is bloated. OpenVPN is slow. In late 2021, if you aren't using WireGuard for your cross-cloud interconnects, you are wasting CPU cycles. WireGuard runs in the kernel (mainlined in Linux 5.6), offering lower latency and a far smaller attack surface: roughly 4,000 lines of code versus the hundreds of thousands in a typical IPsec stack.

We use this to create a secure, encrypted tunnel between your CoolVDS database in Oslo and your stateless application servers, regardless of where they live.
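Before writing any config, generate a keypair on each node. The private key never leaves the machine that generated it:

```shell
# Generate the keypair; the public half is what you share with peers
umask 077
wg genkey | tee /etc/wireguard/private.key | wg pubkey > /etc/wireguard/public.key
```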

Config on the CoolVDS Node (The Hub):

# /etc/wireguard/wg0.conf
[Interface]
Address = 10.0.0.1/24
SaveConfig = true
PostUp = iptables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -D FORWARD -i wg0 -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
ListenPort = 51820
PrivateKey = [SERVER_PRIVATE_KEY]

# Peer: Application Server (Stateless)
[Peer]
PublicKey = [CLIENT_PUBLIC_KEY]
AllowedIPs = 10.0.0.2/32

To bring this interface up without installing heavy tools:

sudo wg-quick up wg0
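On the other end, the stateless app server gets the mirror-image config. Note that AllowedIPs on the spoke covers the whole tunnel subnet, so the app can reach the database at 10.0.0.1; the endpoint hostname below is a placeholder for your hub's public address:

```ini
# /etc/wireguard/wg0.conf on the app server (the spoke)
[Interface]
Address = 10.0.0.2/24
PrivateKey = [CLIENT_PRIVATE_KEY]

[Peer]
PublicKey = [SERVER_PUBLIC_KEY]
Endpoint = db-oslo.example.com:51820   # placeholder: public IP of the CoolVDS hub
AllowedIPs = 10.0.0.0/24
PersistentKeepalive = 25               # keeps NAT mappings alive from behind cloud NAT
```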

Performance: The NVMe Factor

A common lie in the VPS industry is "SSD Storage." Often, this is network-attached storage (Ceph or GlusterFS) sharing bandwidth with hundreds of other noisy neighbors. When your database does a full table scan, your I/O Wait spikes, and your site stalls.

At CoolVDS, we use local NVMe storage passed through via KVM. The difference is visible in iostat.
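To watch it live while a benchmark runs, keep this open in a second terminal (requires the sysstat package):

```shell
# Extended device stats every second; watch %util and w_await on the NVMe device
iostat -x 1
```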

Benchmark Command (Fio):

fio --name=random-write --ioengine=libaio --rw=randwrite --bs=4k --numjobs=1 --size=4g --iodepth=1 --direct=1 --runtime=60 --time_based --end_fsync=1

(The --direct=1 flag bypasses the page cache, so you are benchmarking the disk, not your RAM.)

On a standard cloud "General Purpose" SSD, you might see 300-600 IOPS. On a CoolVDS NVMe instance, we routinely clock 15,000+ IOPS. For a PostgreSQL database handling heavy write loads, this is the difference between a smooth checkout and a timeout error.
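Rough numbers make that concrete. Assume a checkout flow that queues 2,000 random 4k writes (an assumption for illustration, not a measured workload):

```shell
# Time to drain 2,000 random writes at different IOPS ceilings
WRITES=2000
for IOPS in 500 15000; do
  MS=$((WRITES * 1000 / IOPS))
  echo "${IOPS} IOPS -> ${MS} ms"
done
```

Four seconds versus a tenth of a second to flush the same queue: that is the gap between a smooth checkout and a timeout.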

High Availability with HAProxy

Redundancy is the only reason to tolerate the complexity of multi-cloud. If the Oslo power grid has a hiccup (rare, but possible), you need failover. We place HAProxy at the edge.

Here is a snippet for /etc/haproxy/haproxy.cfg that health-checks your backends and respects the local preference (sending traffic to Oslo first).

frontend http_front
   bind *:80
   bind *:443 ssl crt /etc/ssl/certs/site.pem
   default_backend web_servers

backend web_servers
   balance roundrobin
   option httpchk HEAD /health HTTP/1.1\r\nHost:\ localhost
   # Primary: the Oslo app server, reached over the WireGuard tunnel (low latency)
   server web-oslo 10.0.0.2:80 check

   # Backup: secondary location; "backup" means it only receives traffic
   # when the primary fails its health check, so per-server weights are unnecessary
   server web-backup 10.0.0.3:80 check backup
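Always validate before reloading; a syntax error in this file takes down both paths at once:

```shell
# -c checks the config without starting the proxy; reload only if it passes
haproxy -c -f /etc/haproxy/haproxy.cfg && sudo systemctl reload haproxy
```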

The Local Advantage: NIX and Peering

Latency is physics. You cannot beat the speed of light. If your customers are in Norway, hosting in Norway is mandatory for performance. CoolVDS peers directly at NIX (Norwegian Internet Exchange). This means traffic from Telenor or Telia fiber users hits your server in 1-3ms, bypassing the congested international transit routes.

Check your current latency. If you are seeing double-digit milliseconds from Oslo to your server, you are giving away TTFB, and with it Core Web Vitals and SEO ranking.

# Run this from your local machine in Norway
mtr --report --report-cycles=10 your-server-ip

Conclusion: Compliance Meets Performance

The era of blindly deploying to US-based hyperscalers is over. Between GDPR, Schrems II, and the need for raw I/O performance, the smart money is moving back to specialized, local infrastructure for the heavy lifting, while using public clouds only for what they are good at: commodity storage and CDN.

Stop risking fines from the Datatilsynet and stop accepting high I/O wait times.

Next Step: Deploy a Debian 11 instance on CoolVDS today. Benchmark the NVMe storage against your current provider. If we aren't at least 5x faster on random writes, we don't deserve your business.