The Multi-Cloud Lie: Why AWS Alone Isn't Enough for Nordic Ops
Let’s cut through the marketing noise. If you are a CTO or Lead Architect operating out of Oslo or Bergen in 2020, going "all-in" on a single hyperscaler like AWS or Azure is a dangerous gamble. It’s not just about the unpredictable egress fees that destroy your TCO (Total Cost of Ownership). It’s about sovereignty.
With the CJEU ruling on Schrems II still pending, the validity of the Privacy Shield is hanging by a thread. Relying solely on US-owned infrastructure for your customer database is becoming a liability that Datatilsynet (the Norwegian Data Protection Authority) may soon take a dim view of. We need a strategy that balances the massive compute power of the public clouds with the legal safety and cost predictability of local infrastructure.
I call this the "Sovereign Core, Public Burst" architecture.
The Architecture: Local Core, Global Reach
The concept is simple, but the implementation demands discipline. You keep your "State" (databases, user PII, core authentication) on a jurisdiction-locked platform, such as a high-performance VPS in Norway, and use the hyperscalers strictly for stateless compute or global CDN delivery.
Why? Because moving terabytes of data out of AWS costs a fortune. Keeping data on a CoolVDS NVMe instance with unmetered bandwidth solves the cost unpredictability, while satisfying GDPR residency requirements.
Phase 1: Infrastructure as Code (Terraform 0.12)
Manual server provisioning is dead. To manage a multi-cloud environment without hiring a dozen sysadmins, we use HashiCorp Terraform. Below is a practical example of how to define resources across two providers: a CoolVDS instance (simulated via generic provider or custom module) for the database, and an AWS EC2 instance for the frontend.
Note: We are using Terraform v0.12 syntax here, which introduced the much-needed `for_each` and rich types.
# main.tf
provider "aws" {
  region = "eu-north-1" # Stockholm is the closest AWS region to Oslo
}

# The "State" Layer - hosted on CoolVDS for privacy & I/O performance
resource "coolvds_instance" "db_primary" {
  hostname = "db-norway-01"
  plan     = "nvme-16gb"
  region   = "oslo"
  os       = "ubuntu-18.04"

  # In 2020, don't ignore SSH hardening (pathexpand() resolves the ~)
  ssh_keys = [file(pathexpand("~/.ssh/id_rsa.pub"))]
}

# The "Stateless" Layer - AWS frontend (wrap in an Auto Scaling Group for production)
resource "aws_instance" "frontend" {
  ami           = "ami-09b69926d17730626" # Ubuntu 18.04 LTS
  instance_type = "t3.medium"

  tags = {
    Name = "Frontend-Worker"
    Env  = "Production"
  }
}
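Since we are already on 0.12, it is worth showing what `for_each` buys you once you need more than one worker. A minimal sketch, assuming two Stockholm availability zones; the resource name and AZ list are illustrative, and `for_each` on resources arrived in the 0.12.6 point release:

# Illustrative only: one frontend per availability zone via for_each
resource "aws_instance" "frontend_pool" {
  for_each = toset(["eu-north-1a", "eu-north-1b"])

  ami               = "ami-09b69926d17730626" # Same Ubuntu 18.04 LTS image
  instance_type     = "t3.medium"
  availability_zone = each.key

  tags = {
    Name = "Frontend-Worker-${each.key}"
    Env  = "Production"
  }
}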
Pro Tip: Use remote-exec provisioners sparingly. In a multi-cloud setup, rely on Ansible triggered after Terraform finishes to configure the software layer; it keeps your state file clean.
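A minimal sketch of that hand-off, assuming a playbook called site.yml and a Terraform output named db_ip (both names are placeholders, not part of the config above):

# Illustrative glue: configure the DB node only after Terraform has converged
terraform apply -auto-approve
DB_IP=$(terraform output db_ip)            # an output you would define in outputs.tf
ansible-playbook -i "${DB_IP}," site.yml   # trailing comma = ad-hoc inventory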
Phase 2: The Network Glue (OpenVPN vs. WireGuard)
Latency is the enemy of multi-cloud. The round-trip time (RTT) between Oslo (CoolVDS) and Stockholm (AWS eu-north-1) is typically 10-12ms. This is acceptable for asynchronous replication or API calls, but you need a secure tunnel.
WireGuard is gaining traction and has been merged for the Linux 5.6 kernel (still at the release-candidate stage as of this writing), but for a production environment in early 2020, OpenVPN remains the battle-tested standard for site-to-site connectivity.
Here is a hardened server.conf for your CoolVDS gateway node to accept connections from your cloud workers:
port 1194
proto udp
dev tun
ca ca.crt
cert server.crt
key server.key
dh dh2048.pem
# Security hardening (AES-256-GCM is CPU efficient on modern Xeons)
cipher AES-256-GCM
auth SHA512
tls-version-min 1.2
server 10.8.0.0 255.255.255.0
ifconfig-pool-persist ipp.txt
keepalive 10 120
# Compress to save bandwidth, though less effective on encrypted traffic
compress lz4-v2
push "compress lz4-v2"
user nobody
group nogroup
persist-key
persist-tun
status openvpn-status.log
verb 3
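On the AWS side, each worker needs a matching client config. A minimal sketch; the certificate file names and the gateway address vpn.example.no are placeholders for your own PKI and the CoolVDS node's public IP:

client
dev tun
proto udp
remote vpn.example.no 1194
resolv-retry infinite
nobind
ca ca.crt
cert aws-worker01.crt
key aws-worker01.key
# Refuse to connect to anything that isn't our server certificate
remote-cert-tls server
cipher AES-256-GCM
auth SHA512
tls-version-min 1.2
compress lz4-v2
persist-key
persist-tun
verb 3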
Phase 3: Intelligent Load Balancing with HAProxy
You need a traffic cop. HAProxy is perfect for this: it can detect when your local Norwegian backend is saturated and spill over to the cloud, or the other way around.
In the config below, HAProxy on the edge sends Norwegian visitors to the CoolVDS instances (lower latency for Norwegian users, zero egress fees) and routes everything else to AWS; marking the AWS servers with the `backup` keyword instead would turn that pool into a pure failover target.
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    timeout connect 5000
    timeout client  50000
    timeout server  50000

frontend http_front
    bind *:80
    bind *:443 ssl crt /etc/ssl/private/mycert.pem
    acl is_norway src -f /etc/haproxy/geoip-no.txt
    # Route Norwegian traffic to local infrastructure
    use_backend coolvds_cluster if is_norway
    default_backend aws_cluster

backend coolvds_cluster
    balance roundrobin
    # Check health every 2s, require 3 fails to mark down
    server web01 10.10.1.10:80 check inter 2000 rise 2 fall 3
    server web02 10.10.1.11:80 check inter 2000 rise 2 fall 3

backend aws_cluster
    balance leastconn
    server aws01 192.168.50.5:80 check
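The geoip-no.txt file referenced in the acl is nothing magical: a plain list of Norwegian CIDR blocks, one per line, that HAProxy matches src against. One way to keep it fresh is to pull a per-country list on a cron schedule; the sketch below uses ipdeny.com as an example source, so verify that its accuracy and licensing suit you before relying on it:

# Example only: refresh the Norwegian CIDR list, then reload HAProxy
curl -sSf http://www.ipdeny.com/ipblocks/data/countries/no.zone \
  -o /etc/haproxy/geoip-no.txt
systemctl reload haproxy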
Database Consistency: The Hard Part
Splitting an application is easy; splitting a database is a nightmare. In a multi-cloud scenario, latency limits your options. Synchronous replication (like Galera) over a WAN link with 12ms latency will kill your write performance. Every write transaction has to wait for an ACK from the remote node.
The pragmatic solution? MySQL Async Replication with GTID.
Host the Master on CoolVDS (NVMe storage is crucial here for IOPS). Host Read Replicas on AWS. This ensures your Norwegian users get instant write confirmation, while your global read traffic is offloaded.
Modify your /etc/mysql/mysql.conf.d/mysqld.cnf:
[mysqld]
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
binlog_format = ROW
gtid_mode = ON
enforce_gtid_consistency = ON
log_slave_updates = ON
# Optimize for NVMe storage found on CoolVDS
innodb_flush_method = O_DIRECT
innodb_io_capacity = 2000
innodb_io_capacity_max = 4000
innodb_buffer_pool_size = 12G # Assumes 16GB RAM instance
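On the AWS replica, the config mirrors the master with a different server-id and read_only enabled, and GTID auto-positioning spares you from tracking binlog coordinates by hand. A sketch, with the replication user and the master's VPN-internal address as placeholders (10.8.0.1 assumes the master sits on the OpenVPN gateway node; adjust to your routing):

[mysqld]
server-id         = 2
read_only         = ON
gtid_mode         = ON
enforce_gtid_consistency = ON
log_slave_updates = ON
relay_log         = /var/log/mysql/mysql-relay-bin.log

Then point the replica at the master over the tunnel:

-- Run on the replica; credentials and host are placeholders
CHANGE MASTER TO
  MASTER_HOST = '10.8.0.1',
  MASTER_USER = 'repl',
  MASTER_PASSWORD = 'change-me',
  MASTER_AUTO_POSITION = 1;
START SLAVE;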
Why CoolVDS fits the "Core" Role
I’ve benchmarked a lot of providers. The issue with standard VPS providers is "noisy neighbors"—other users stealing your CPU cycles. When you are building the core of a multi-cloud setup, jitter is unacceptable.
We use CoolVDS for the sovereign node because they utilize KVM virtualization with strict resource isolation. Unlike container-based VPS (OpenVZ), KVM allows us to run our own custom kernel modules if necessary for specialized networking, and the NVMe arrays provide the random I/O needed for high-transaction databases.
The Financial Reality
Consider a 4TB data transfer scenario:
- AWS: 4,000 GB * $0.09/GB = $360/month just for bandwidth.
- CoolVDS: Included in the fixed monthly price.
For a Norwegian startup, that difference is a junior developer's hardware budget.
Conclusion: Own Your Data
The cloud is a tool, not a religion. Don't lock yourself into a proprietary ecosystem that charges you to access your own data. By placing your persistence layer on a sovereign, high-performance VPS in Norway and using the public cloud only for what it's good at (elastic scale), you build a system that is robust, compliant, and cost-effective.
Don't wait for the lawyers to tell you your architecture is illegal. Build a sovereign foundation today.
Ready to secure your infrastructure? Deploy a KVM-based NVMe instance on CoolVDS in Oslo today and start building your sovereign core.