Multi-Cloud Architectures in a Post-Schrems II World: A Technical Guide
The CJEU ruling of July 16, 2020, invalidating the EU-US Privacy Shield didn't just ruffle feathers; it broke the compliance backbone of half the startups in Oslo. If you are a CTO relying purely on AWS or Google Cloud to host Norwegian patient data or financial records, you are currently operating in a legal grey zone that Datatilsynet (the Norwegian Data Protection Authority) will not ignore for long.
But let's put the legal panic aside for a moment. As systems architects, we know that multi-cloud isn't just about GDPR compliance. It's about survival. I've seen entire availability zones vanish due to fiber cuts. I've seen routing tables melt. If you don't have a contingency, you don't have a product.
This guide isn't about abstract theory. We are going to look at how to architect a split-stack solution using Terraform v0.12, the recently mainlined WireGuard for secure mesh networking, and HAProxy to balance traffic between a CoolVDS NVMe instance in Oslo (for data sovereignty) and secondary nodes elsewhere.
The Architecture: The "Data Fortress" Pattern
The safest pattern in mid-2020 is the "Data Fortress." You keep your stateful layer (databases, object storage containing PII) on a sovereign Norwegian provider like CoolVDS to ensure data never leaves the EEA without strict encryption controls. You then treat your compute layer as ephemeral.
The Stack:
- Infrastructure as Code: Terraform 0.12.28
- Networking: WireGuard (Linux Kernel 5.6+)
- Load Balancing: HAProxy 2.2
- Database: MariaDB 10.4 (Galera Cluster)
1. Orchestrating the Hybrid Estate with Terraform
Managing two different providers manually is a recipe for drift. We use Terraform to abstract this. Below is a practical example of how to structure your main.tf so that a single run deploys resources to both CoolVDS (via the OpenStack provider) and a secondary cloud.
# main.tf - Terraform 0.12 Syntax
terraform {
  required_version = ">= 0.12"
}

# Provider 1: CoolVDS (the Norwegian sovereign node)
# Credentials are picked up from the usual OS_USERNAME / OS_PASSWORD
# environment variables, so nothing sensitive lives in the repo.
provider "openstack" {
  alias    = "oslo"
  auth_url = "https://api.coolvds.com/v3"
  region   = "NO-Oslo-1"
}

# Provider 2: Secondary EU location (disaster recovery)
# Credentials come from AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
# or the shared credentials file.
provider "aws" {
  alias  = "frankfurt"
  region = "eu-central-1"
}

resource "openstack_compute_instance_v2" "db_primary" {
  provider        = openstack.oslo
  name            = "coolvds-nvme-db-01"
  image_name      = "Ubuntu 20.04"
  flavor_name     = "v2-highcpu-nvme"
  key_pair        = "deploy_key_2020"
  security_groups = ["default", "secure-internal"]

  metadata = {
    role = "primary-db"
  }
}

resource "aws_instance" "app_worker" {
  provider      = aws.frankfurt
  ami           = "ami-05f7491af5eef733a" # Ubuntu 20.04 LTS
  instance_type = "c5.large"

  tags = {
    Name = "worker-node-dr"
  }
}
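With both providers declared, the day-to-day workflow is standard Terraform; a minimal sketch, assuming your OpenStack and AWS credentials are exported as environment variables:
# Pull provider plugins and validate the configuration
terraform init
terraform validate
# Review the cross-cloud changes before touching anything
terraform plan -out=multicloud.tfplan
# Apply exactly what was reviewed
terraform apply multicloud.tfplan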
Notice the choice of flavor_name = "v2-highcpu-nvme". In 2020, spinning rust (HDD) or standard SSDs just don't cut it for a primary database workload. If your I/O wait creeps above 5%, your application feels sluggish regardless of how much CPU you throw at it. CoolVDS NVMe instances typically push 50k+ IOPS, which matters all the more when the latency between your app server and database might already span 20-30ms.
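Don't take IOPS marketing numbers on faith. A quick fio run and a live look at I/O wait with iostat (both tools from the standard Ubuntu repos) will tell you where your disk actually stands:
# Random 4k writes, direct I/O - a rough proxy for database commit behaviour
fio --name=dbsim --rw=randwrite --bs=4k --direct=1 --ioengine=libaio \
    --iodepth=32 --size=1G --runtime=30 --time_based --group_reporting
# Watch %iowait and device utilisation while the test runs
iostat -x 2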
2. The Network Mesh: WireGuard over IPsec
Historically, connecting clouds meant bloated IPsec setups (StrongSwan/Libreswan) that were a pain to debug. With the release of Linux Kernel 5.6 earlier this year, WireGuard is finally stable and in-tree. It is faster, has a smaller attack surface, and reconnects instantly upon roaming.
Here is how we configure the CoolVDS node in Oslo to accept traffic from the external worker node. We use port 51820 UDP.
# /etc/wireguard/wg0.conf on CoolVDS (Oslo)
[Interface]
Address = 10.100.0.1/24
SaveConfig = true
PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
ListenPort = 51820
PrivateKey = <SERVER_PRIVATE_KEY>
[Peer]
# The Worker Node in Frankfurt
PublicKey = <WORKER_PUBLIC_KEY>
AllowedIPs = 10.100.0.2/32
Endpoint = 192.0.2.45:51820 # External IP of secondary node
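The key placeholders come from a key pair generated on each node with wireguard-tools:
# Generate a key pair on each node (run as root; umask keeps the private key at 0600)
umask 077
wg genkey | tee /etc/wireguard/private.key | wg pubkey > /etc/wireguard/public.key
# Bring the tunnel up and confirm the handshake
wg-quick up wg0
wg show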
Pro Tip: Always set the MTU explicitly if you are tunneling over networks with varying jumbo frame support. For cross-cloud tunnels, an MTU of 1360 is usually a safe bet to avoid fragmentation headaches that manifest as hanging TLS handshakes.
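For reference, here is a minimal sketch of the matching config on the Frankfurt worker with the MTU pinned as suggested; the keys are placeholders and 203.0.113.10 stands in for the Oslo node's public IP:
# /etc/wireguard/wg0.conf on the worker (Frankfurt)
[Interface]
Address = 10.100.0.2/24
PrivateKey = <WORKER_PRIVATE_KEY>
MTU = 1360
[Peer]
# The CoolVDS node in Oslo
PublicKey = <SERVER_PUBLIC_KEY>
AllowedIPs = 10.100.0.0/24
Endpoint = 203.0.113.10:51820
PersistentKeepalive = 25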
3. Smart Load Balancing with HAProxy
You need an intelligent traffic director. HAProxy 2.2 (released just last month, June 2020) brings excellent improvements in health checking. We don't want to route traffic to the secondary cloud unless the primary CoolVDS node is actually down or overwhelmed.
We configure HAProxy to prefer the local Oslo backend (low latency) and only bleed traffic to the backup if connection counts spike or health checks fail.
# haproxy.cfg
global
    log /dev/log local0
    maxconn 2000
    user haproxy
    group haproxy

defaults
    mode http
    log global
    timeout client 10s
    timeout connect 5s
    timeout server 10s

frontend http_front
    bind *:80
    bind *:443 ssl crt /etc/ssl/private/site.pem
    acl is_oslo_down nbsrv(oslo_primary) lt 1
    use_backend frankfurt_backup if is_oslo_down
    default_backend oslo_primary

backend oslo_primary
    balance roundrobin
    # CoolVDS internal IP via WireGuard or local LAN. If HAProxy runs on
    # this same host, point at the app's own port (e.g. 8080) instead of
    # :80, or the frontend will proxy to itself in a loop.
    server web01 10.100.0.1:80 check inter 2000 rise 2 fall 3

backend frankfurt_backup
    balance roundrobin
    server dr01 10.100.0.2:80 check inter 2000 rise 2 fall 3 backup
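Before reloading, let HAProxy parse the file itself; a syntax error that takes down your only entry point defeats the purpose of all this redundancy:
# Validate the configuration, then reload without dropping connections
haproxy -c -f /etc/haproxy/haproxy.cfg
sudo systemctl reload haproxy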
4. Database Consistency and Latency
The biggest challenge in multi-cloud is the speed of light. The round-trip time (RTT) between Oslo and Frankfurt is roughly 20-25ms, and because Galera certifies every transaction across the cluster, that RTT is added to every single write commit.
If you must run active-active, you need to tune your MySQL/MariaDB configuration to handle higher latency without locking up threads.
# my.cnf optimization for higher latency links
[mysqld]
# Increase connections to handle threads waiting on network ACK
max_connections = 1000
# Only relevant if you also run asynchronous replicas; a generous timeout
# prevents false positives on network blips
slave_net_timeout = 60
# Critical for performance, but understand the ACID trade-off
# 1 = safest (sync to disk), 2 = faster (OS cache), 0 = fastest (risky)
# On CoolVDS NVMe, '1' is usually fast enough. For cross-cloud, consider '2'.
innodb_flush_log_at_trx_commit = 1
# Buffer pool - roughly 70% of available RAM (6G assumes an 8 GB instance)
innodb_buffer_pool_size = 6G
# Logs
log_bin = /var/log/mysql/mysql-bin.log
binlog_format = ROW
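The block above covers the InnoDB side; the Galera (wsrep) side needs its own WAN-friendly settings. A minimal sketch with a placeholder cluster name, node list, and distro-dependent library path; tune the timeouts to your measured RTT rather than copying them verbatim:
# Galera settings for a WAN cluster (same [mysqld] section)
wsrep_on = ON
# Adjust the path to wherever your distro's galera-4 package installs it
wsrep_provider = /usr/lib/galera-4/libgalera_smm.so
wsrep_cluster_name = "multicloud_fortress"
wsrep_cluster_address = "gcomm://10.100.0.1,10.100.0.2"
wsrep_sst_method = mariabackup
# Galera needs ROW binlogs (already set above) and interleaved autoinc locks
innodb_autoinc_lock_mode = 2
# Relax failure detection and flow control for 20-25ms links
wsrep_provider_options = "evs.suspect_timeout=PT5S; evs.inactive_timeout=PT15S; gcs.fc_limit=128"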
The Sovereignty Advantage
Technical implementation is only half the battle. The other half is explaining to your Board why you aren't putting all your eggs in a US-controlled basket. Post-Schrems II, the legal standing of transatlantic data transfers is shaky. By keeping the core database on CoolVDS hardware in Norway, you simplify your GDPR record of processing activities significantly.
Furthermore, local peering matters. CoolVDS peers directly at NIX (Norwegian Internet Exchange). If your customer base is in Scandinavia, routing their traffic through a central European data center adds unnecessary hops. Direct peering means lower latency, faster page loads, and better Core Web Vitals (which Google has hinted will become a ranking factor next year).
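You can verify the difference yourself: trace the path from a Scandinavian vantage point to your node and count the hops and AS boundaries (mtr shown here; db01.example.no is a placeholder hostname):
# Report mode, 20 probes, show AS numbers along the path
mtr -rwz -c 20 db01.example.no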
Final Thoughts
Multi-cloud in 2020 is not about complexity for complexity's sake. It is about hedging your bets against vendor lock-in and legal volatility. Start small. Deploy a CoolVDS instance as your master node, set up a WireGuard tunnel, and verify your replication lag.
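On a Galera cluster, "lag" shows up as flow control and a growing receive queue rather than a seconds-behind counter. Two status variables, checked on the Oslo node, tell you whether the WAN link is keeping up:
# Non-zero flow control pauses or a growing recv queue mean the remote
# node cannot keep up with certification over the WAN link
mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_flow_control_paused';"
mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_local_recv_queue_avg';"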
Don't wait for the lawyers to tell you your architecture is non-compliant. Build the fortress now.
Ready to secure your data sovereignty? Deploy a privacy-compliant NVMe VPS on CoolVDS today and get your WireGuard keys generated in under 60 seconds.