The Pragmatist’s Guide to Multi-Cloud in 2021: Avoiding the Egress Trap and Schrems II Nightmares

The Multi-Cloud Lie: Why Redundancy is Costing You Double

Most CTOs I speak with in Oslo treat multi-cloud as a checkbox. They deploy a Kubernetes cluster on AWS, mirror it on Azure, and call it a day. Then they look at the bill. Then they talk to their legal team about Schrems II. Then they panic.

True multi-cloud strategy isn't about mirroring everything everywhere. It is about arbitrage. It involves placing workloads where they perform best financially, technically, and legally. In 2021, with the fallout from the EU-US Privacy Shield invalidation, relying solely on US hyperscalers for Norwegian user data is a compliance roulette game you will eventually lose.

I recently audited a SaaS platform serving the Nordic market. They were hosting 100% on AWS in Frankfurt (eu-central-1). Their latency to Norwegian end-users was decent (~25ms), but their egress fees were astronomical, and their Data Processing Agreement (DPA) was under scrutiny by Datatilsynet. We moved their core database and heavy storage to local NVMe instances in Norway, kept the compute-heavy AI processing on AWS Spot Instances, and bridged them with WireGuard. The result? A 40% drop in TCO and a defensible GDPR posture for data at rest.

The Architecture: The "Sovereign Core" Model

The most effective pattern for European businesses right now is the Sovereign Core. You keep your PII (Personally Identifiable Information) and persistent data on a jurisdiction-safe provider (like CoolVDS in Norway), and use hyperscalers strictly for ephemeral compute or global edge delivery.

1. The Connectivity Layer: WireGuard Mesh

Forget IPsec. It is bloated, slow to handshake, and hell to debug in hybrid environments. Since it was merged into kernel 5.6, WireGuard has been the obvious choice. Its connection state is minimal and it handles roaming IP addresses seamlessly, which is perfect when your hyperscaler nodes might restart or change address.
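
Key generation takes ten seconds per node. A minimal sketch using the standard wg(8) tooling (the file paths are just a convention):

# Run on each node: create a private key, derive its public key
umask 077
wg genkey | tee /etc/wireguard/private.key | wg pubkey > /etc/wireguard/public.key

Exchange only the public keys between peers; the private keys never leave their node.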

Here is a production-ready configuration to link an AWS compute node securely to your CoolVDS database node in Oslo. We use a standardized port and aggressive keepalives to punch through NAT.

On the CoolVDS (Core) Node:

# /etc/wireguard/wg0.conf
[Interface]
Address = 10.100.0.1/24
ListenPort = 51820
PrivateKey = <CORE_PRIVATE_KEY>

# AWS Compute Node Peer
[Peer]
PublicKey = <AWS_PUBLIC_KEY>
AllowedIPs = 10.100.0.2/32
PersistentKeepalive = 25

On the AWS (Edge) Node:

# /etc/wireguard/wg0.conf
[Interface]
Address = 10.100.0.2/24
PrivateKey = <AWS_PRIVATE_KEY>

# CoolVDS Core Peer
[Peer]
PublicKey = <CORE_PUBLIC_KEY>
Endpoint = vps-norway.coolvds.com:51820
AllowedIPs = 10.100.0.0/24
PersistentKeepalive = 25

Pro Tip: Always set PersistentKeepalive = 25. AWS security groups and NAT gateways often drop idle UDP connections after 60 seconds. This setting keeps the tunnel alive without significant overhead.
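
With both configs in place, bring the tunnel up and confirm the handshake. Assuming wg-quick from wireguard-tools and systemd:

# Start the tunnel now and on every boot
systemctl enable --now wg-quick@wg0

# Check peer handshakes and transfer counters
wg show wg0

# From the AWS node, confirm the core is reachable over the tunnel
ping -c 3 10.100.0.1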

2. Infrastructure as Code: Managing Hybrid State

Managing two providers manually is a recipe for disaster. Terraform v1.0 (released earlier this year) makes this manageable. The trick is not to abstract everything away, but to explicitly define the relationship between the providers.

Here is how you structure a main.tf to deploy a stateless frontend on AWS that talks to a stateful backend on CoolVDS. Note the provider separation.

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
    libvirt = {
      # Generic libvirt provider for bare-metal/VPS control
      source  = "dmacvicar/libvirt"
      version = "0.6.11"
    }
  }
}

provider "aws" {
  region = "eu-central-1"
}

provider "local_provider" {
  uri = "qemu+ssh://root@core.coolvds.com/system"
}

# Base OS image for the database VM
# (the image URL is an assumption; point it at your distro of choice)
resource "libvirt_volume" "os_image" {
  name   = "debian-base.qcow2"
  source = "https://cloud.debian.org/images/cloud/bullseye/latest/debian-11-generic-amd64.qcow2"
}

# The Core Database (Data stays in Norway)
resource "libvirt_domain" "db_core" {
  name   = "postgres-primary"
  memory = 8192
  vcpu   = 4
  
  network_interface {
    network_name = "default"
  }
  
  disk {
    volume_id = libvirt_volume.os_image.id
  }
}

# The Ephemeral Frontend (Scale out in AWS)
resource "aws_instance" "frontend" {
  ami           = "ami-05d34d340fb1d89e5" # Amazon Linux 2
  instance_type = "t3.micro"
  
  tags = {
    Name = "Stateless-Frontend"
  }
  
  user_data = <<-EOF
              #!/bin/bash
              # On Amazon Linux 2, wireguard-tools ships via EPEL
              amazon-linux-extras install -y epel
              yum install -y wireguard-tools
              # Script to auto-join the mesh...
              EOF
}
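
From there, the workflow is the standard one. Nothing exotic:

terraform init     # downloads the aws and libvirt providers
terraform plan     # review the hybrid changeset before touching anything
terraform apply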

The Latency Reality Check

Physics is the only hard limit in IT. If your users are in Norway, serving them from Frankfurt adds round-trip time (RTT). Serving them from US-East adds significant lag.

I ran a simple `mtr` (My Traceroute) comparison from a residential fiber connection in Oslo (Telenor) to three endpoints:

Target                   Location         Avg Latency   Jitter
CoolVDS NVMe Instance    Oslo (NIX)       1.8 ms        0.2 ms
AWS eu-central-1         Frankfurt        24.5 ms       4.1 ms
Google Cloud us-east1    South Carolina   108.2 ms      12.5 ms
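
The numbers are easy to reproduce. mtr in report mode gives you averages and standard deviation in one shot (the target hostname is illustrative):

# 100 probes, summary report
mtr --report --report-cycles 100 vps-norway.coolvds.com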

For a static blog, 24ms is fine. For a high-frequency trading bot, a real-time multiplayer game server, or a chatty database workload issuing dozens of sequential queries per request, that ~23ms differential is an eternity; it compounds with every round trip. It is the difference between a snappy UI and a "loading" spinner.

Data Egress: The Silent Budget Killer

Hyperscalers operate on the "Hotel California" model: you can check out any time you like, but your data can never leave (without paying). AWS charges roughly $0.09 per GB for egress. If you are serving media or performing large backups from AWS to an on-premise location, this scales poorly.
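
The arithmetic is brutal. A quick back-of-the-envelope in the shell, using an illustrative 10 TB of monthly egress:

# 10 TB per month out of AWS at $0.09/GB
echo "10 * 1024 * 0.09" | bc
# 921.60 -> roughly $900 a month, before you have paid for a single vCPU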

CoolVDS offers generous bandwidth pools included in the monthly price. By caching static assets on a CoolVDS instance and using it as the origin server for a CDN, you bypass the heavy egress taxes of the major clouds. You treat the cloud as a compute engine, not a storage locker.

Configuring High-Availability Load Balancing

To make this hybrid setup robust, you need a load balancer that is aware of both environments. HAProxy is the industry standard for this. Below is a snippet for haproxy.cfg that keeps steady-state traffic on the local node and fails over to the cloud instance if it goes down.

global
    log /dev/log local0
    maxconn 2000

defaults
    mode http
    timeout connect 5000ms
    timeout client  50000ms
    timeout server  50000ms

frontend http_front
    bind *:80
    default_backend hybrid_cluster

backend hybrid_cluster
    balance roundrobin
    # Primary: the local CoolVDS node takes all steady-state traffic
    server local_node 10.100.0.1:80 check

    # Failover: AWS node, reached over the WireGuard tunnel.
    # "backup" means HAProxy only uses it when the primary fails its health check.
    server aws_node 10.100.0.2:80 check backup

This configuration keeps all steady-state traffic on the fixed-cost, low-latency local server. HAProxy routes to the AWS node only when the local node fails its health check, which keeps your variable cloud costs close to zero.
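
Before reloading in production, validate the file first. A typo here takes down both paths at once:

# Syntax-check the config, then reload without dropping live connections
haproxy -c -f /etc/haproxy/haproxy.cfg && systemctl reload haproxy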

Compliance and the "Schrems II" Factor

Since the CJEU ruling in July 2020, transferring personal data to US-owned cloud providers (even their EU regions) requires supplementary measures. Encryption is often not enough if the keys are managed by the US provider (due to the CLOUD Act).

Hosting your database on a purely European provider like CoolVDS mitigates this risk significantly. You are not relying on Standard Contractual Clauses (SCCs) to save you; you are relying on the fact that your data physically resides on hardware owned by a European entity in a Norwegian datacenter. That is architectural compliance.

Final Thoughts

Don't build multi-cloud just to feel modern. Build it to save money and protect your data. Start by moving your database and core logic to a high-performance, fixed-cost environment. Keep the cloud for what it’s good at: bursting and global edge distribution.

If you need a reference point for stability, spin up a CoolVDS instance. We use KVM (Kernel-based Virtual Machine) virtualization to ensure your resources are actually yours, not oversold containers. Test the latency from your office in Oslo; seeing single-digit milliseconds in your terminal is a feeling that never gets old.

Ready to fix your latency? Deploy a high-performance NVMe KVM instance in Oslo today.