A Pragmatic Multi-Cloud Architecture for 2021: Compliance, Latency, and TCO

Let’s be honest: for most of the last decade, "multi-cloud" was just a slide in a consultant's pitch deck designed to sell complexity. It was theoretically sound but practically a nightmare of egress fees and fragmented tooling.

Then July 2020 happened. The CJEU invalidated the Privacy Shield framework (Schrems II). Suddenly, relying solely on US hyperscalers to store European user data became a legal minefield. If you are a CTO in Oslo or Bergen right now, you aren't looking at multi-cloud because it's trendy. You are looking at it because Datatilsynet (the Norwegian Data Protection Authority) is watching, and falling back on standard contractual clauses (SCCs) alone might not save you.

The goal of this guide isn't to tell you to abandon AWS or Google Cloud. It's to show you how to architect a hybrid strategy where your critical PII (Personally Identifiable Information) stays on sovereign soil—specifically VPS Norway infrastructure—while you leverage commodity compute elsewhere. We will prioritize Total Cost of Ownership (TCO) and technical simplicity.

The Architecture: The "Data Sovereign" Core

In a pragmatic multi-cloud setup, we treat the providers differently based on their legal and technical utility. We don't mirror everything everywhere; that leads to split-brain scenarios and doubled storage costs.

Instead, we use a Hub-and-Spoke model:

  • The Hub (CoolVDS): Hosted in Norway. Holds the primary database (Master), customer PII, and handles authentication. This satisfies GDPR residency requirements.
  • The Spokes (Hyperscalers): Stateless frontend workers, asset processing, or heavy compute jobs. They connect back to the Hub only to fetch necessary data via encrypted tunnels.
Pro Tip: Don't underestimate latency. The round-trip time (RTT) between a data center in Oslo and one in Frankfurt is roughly 15-20ms. Between Oslo and us-east-1, it's 90ms+. Design your application to be "latency-aware" by using read-replicas or aggressive caching at the edge.
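Aggressive edge caching can hide most of that RTT for read-heavy endpoints. Here is a minimal nginx micro-cache sketch for a spoke node; the upstream address and port are hypothetical (an application on the Hub listening on its WireGuard IP), and the TTL should match how stale your data is allowed to be:

# /etc/nginx/conf.d/edge-cache.conf -- micro-cache on a spoke node
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=edge:50m
                 max_size=1g inactive=10m use_temp_path=off;

server {
    listen 80;

    location /api/ {
        proxy_pass http://10.100.0.1:8080;            # hypothetical Hub app over the tunnel
        proxy_cache edge;
        proxy_cache_valid 200 30s;                    # short TTL: trade staleness for RTT
        proxy_cache_use_stale error timeout updating; # serve stale while refreshing
        add_header X-Cache-Status $upstream_cache_status;
    }
}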

The Network Layer: WireGuard Mesh

Before kernel 5.6 (released early 2020), setting up a site-to-site VPN meant wrestling with IPsec or the slow context-switching of OpenVPN. In 2021, WireGuard is the only logical choice for linking a CoolVDS instance in Norway with external compute nodes. It is performant, runs in the kernel, and has a tiny attack surface.

Here is how we configure the "Hub" (CoolVDS) interface to accept connections from our worker nodes. This assumes you are running Ubuntu 20.04 LTS.

1. The Hub Configuration (Oslo)

# /etc/wireguard/wg0.conf
[Interface]
Address = 10.100.0.1/24
SaveConfig = true
PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
ListenPort = 51820
PrivateKey = [SERVER_PRIVATE_KEY]

# Peer: Worker Node 1 (Frankfurt)
[Peer]
PublicKey = [WORKER_PUBLIC_KEY]
AllowedIPs = 10.100.0.2/32

2. The Worker Configuration (External)

# /etc/wireguard/wg0.conf
[Interface]
Address = 10.100.0.2/32
PrivateKey = [WORKER_PRIVATE_KEY]
DNS = 1.1.1.1

[Peer]
PublicKey = [SERVER_PUBLIC_KEY]
Endpoint = 203.0.113.10:51820 # The Public IP of your CoolVDS instance
AllowedIPs = 10.100.0.0/24
PersistentKeepalive = 25

With this setup, your private traffic flows over an encrypted UDP tunnel. On our NVMe storage instances, the CPU overhead for WireGuard encryption is negligible, maintaining high throughput for database queries.
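With both files in place, bringing the mesh up takes a few commands from wireguard-tools (shipped in Ubuntu 20.04's repositories):

# On each node: generate a key pair, then paste the public key into the peer's config
umask 077
wg genkey | tee privatekey | wg pubkey > publickey

# Start the tunnel now and on every boot
sudo wg-quick up wg0
sudo systemctl enable wg-quick@wg0

# Verify the handshake and measure tunnel latency from the worker
sudo wg show
ping -c 3 10.100.0.1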

Infrastructure as Code: Terraform State Management

Managing two different providers manually is a recipe for disaster. We use Terraform (v0.14 is the current standard) to orchestrate this. The trick is to avoid modules that deepen vendor lock-in: write generic resource blocks where possible.

Here is a snippet demonstrating how to define a CoolVDS resource alongside an AWS resource in the same state file, effectively bridging the two worlds.

terraform {
  required_providers {
    coolvds = {
      source = "coolvds/coolvds" # Hypothetical provider for context
      version = "1.2.0"
    }
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}

# The Sovereign Core (Norway)
resource "coolvds_instance" "db_primary" {
  region = "no-oslo-1"
  plan   = "nvme-16gb-4vcpu"
  image  = "ubuntu-20-04-x64"
  label  = "production-db-core"
  
  # Security: Only allow traffic from the WireGuard port
  firewall_rules = [
    {
      port = 51820
      protocol = "udp"
      source_ips = ["0.0.0.0/0"] # Locked down by key auth, but specific IPs preferred
    }
  ]
}

# The Stateless Compute (External)
resource "aws_instance" "worker_node" {
  ami           = "ami-0abcdef1234567890"
  instance_type = "t3.medium"
  # ... configuration to install WireGuard on boot ...
}
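The elided worker bootstrap can be handled with cloud-init. Here is one possible sketch, assuming a stock Ubuntu 20.04 AMI; in practice the private key should be injected from a secrets manager rather than templated in plaintext:

resource "aws_instance" "worker_node" {
  ami           = "ami-0abcdef1234567890"
  instance_type = "t3.medium"

  # cloud-init: install WireGuard and join the mesh on first boot
  user_data = <<-EOF
    #cloud-config
    package_update: true
    packages:
      - wireguard
    write_files:
      - path: /etc/wireguard/wg0.conf
        permissions: "0600"
        content: |
          [Interface]
          Address = 10.100.0.2/32
          PrivateKey = [WORKER_PRIVATE_KEY] # inject via secrets manager, not plaintext
          [Peer]
          PublicKey = [SERVER_PUBLIC_KEY]
          Endpoint = 203.0.113.10:51820
          AllowedIPs = 10.100.0.0/24
          PersistentKeepalive = 25
    runcmd:
      - systemctl enable --now wg-quick@wg0
  EOF
}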

Database Replication: Handling the WAN

Running a database across a WAN link is risky. If you try to run synchronous replication (e.g., a Galera Cluster) between Oslo and Frankfurt, every write stalls until the slowest node acknowledges it, so your write latency is bounded by the worst round-trip time in the cluster.

For a pragmatic approach, we use Asynchronous Replication with GTID (Global Transaction Identifiers) in MySQL 8.0. The Master sits on CoolVDS (benefiting from local low latency access for Norwegian users and strict compliance), while read-replicas sit in the external cloud for the worker nodes to access quickly.

Critical my.cnf adjustments for WAN replication:

[mysqld]
# Every node in the topology needs a unique server ID
server_id = 1

# Enable GTID for safer failover
gtid_mode = ON
enforce_gtid_consistency = ON

# Relax durability slightly on replicas for performance (Risk: potential data loss on crash, evaluate per use-case)
# innodb_flush_log_at_trx_commit = 2 

# Network optimization
slave_net_timeout = 60
max_allowed_packet = 64M

# Binary Log settings
log_bin = /var/log/mysql/mysql-bin.log
binlog_format = ROW
binlog_expire_logs_seconds = 604800 # 7 days; expire_logs_days is deprecated in MySQL 8.0

This setup allows your external "Spokes" to read data locally without traversing the WAN for every SELECT query, while writes still go securely to the "Hub" in Norway.
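Wiring a replica to the Hub over the tunnel then takes a handful of statements. A sketch, assuming the tunnel addresses from earlier and a dedicated replication user (classic MySQL 8.0 syntax; from 8.0.23 you can use the newer CHANGE REPLICATION SOURCE TO spelling):

-- On the Hub (Oslo): create a replication account reachable only via the tunnel
CREATE USER 'repl'@'10.100.0.%' IDENTIFIED BY '<strong-password>';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'10.100.0.%';

-- On the replica (external cloud): point at the Hub's WireGuard address
CHANGE MASTER TO
  MASTER_HOST = '10.100.0.1',
  MASTER_USER = 'repl',
  MASTER_PASSWORD = '<strong-password>',
  MASTER_SSL = 1,             -- defense in depth on top of the tunnel
  MASTER_AUTO_POSITION = 1;   -- GTID-based positioning, no binlog coordinates
START SLAVE;

-- Watch Seconds_Behind_Master and the Retrieved/Executed GTID sets
SHOW SLAVE STATUS\G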

The Compliance & Performance Trade-off

Why center the architecture around a provider like CoolVDS? It comes down to control over the physical layer. When you deploy on a massive public cloud, you are one of millions of tenants. "Noisy neighbor" issues are real, and CPU steal and I/O contention are common on standard tiers.

At CoolVDS, we utilize KVM virtualization with dedicated resource allocation. For a database that acts as the source of truth for your multi-cloud setup, consistent disk I/O is non-negotiable. Our benchmarks on the new 2021 NVMe arrays show sustained random write speeds that often beat the "Provisioned IOPS" tiers of larger providers at a fraction of the cost.

Feature                  | Hyperscale Standard Cloud           | CoolVDS (Norway)
-------------------------|-------------------------------------|------------------------------------------
Data Sovereignty         | Cloud Act applies (US jurisdiction) | Norwegian/EEA jurisdiction
Network Latency (to NIX) | 15-30ms (routing dependent)         | < 2ms
Storage Performance      | Throttled unless you pay a premium  | Native NVMe pass-through
DDoS Protection          | Expensive add-on                    | Standard L3/L4 DDoS protection included
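Benchmark claims are cheap; measure on your own instance. A typical 4K random-write test with fio looks like the following (standard fio 3.x flags); expect results to vary with plan and neighbor load:

fio --name=randwrite --ioengine=libaio --direct=1 --rw=randwrite \
    --bs=4k --iodepth=32 --numjobs=4 --size=4G --runtime=60 \
    --time_based --group_reporting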

Final Thoughts

The post-2020 regulatory environment forces us to rethink infrastructure. We can no longer blindly deploy to `us-east-1`. A multi-cloud strategy in 2021 is about placing data where it is legally safe and compute where it is cheapest.

By using CoolVDS as your sovereign hub, you solve the compliance headache of Schrems II while ensuring your core database runs on high-performance iron. Connect it to the world with WireGuard, manage it with Terraform, and you have a platform that is robust, compliant, and cost-effective.

Ready to secure your infrastructure's core? Spin up a high-performance KVM instance in Oslo today. Deploy on CoolVDS in under 60 seconds.