# The Pragmatic Multi-Cloud: Escaping Vendor Lock-in and the CLOUD Act in 2020
Let’s cut through the marketing noise. For most CTOs and Systems Architects in Oslo or Bergen, "Multi-Cloud" isn't about connecting AWS to Azure just because you can. It is an insurance policy. It is about simple economics and legal survival.
With the Advocate General's opinion on the Schrems II case released just last week (December 19, 2019), the stability of the Privacy Shield framework is looking increasingly fragile. If you are storing critical Norwegian customer data exclusively on US-controlled hyperscalers, you are exposing your organization to a legal blast radius that the Datatilsynet will not ignore.
Beyond compliance, there is the issue of performance per krone. Public cloud IOPS are exorbitantly priced. We are seeing clients migrate database workloads back to high-performance VDS (Virtual Dedicated Servers) simply because the cost of provisioned IOPS on RDS or Cloud SQL destroys their margins.
This guide outlines a Hybrid Multi-Cloud Architecture: keeping your stateless frontends on auto-scaling public clouds while anchoring your data gravity (databases, storage) on local, high-performance infrastructure like CoolVDS. This ensures data sovereignty within Norway and stabilizes costs.
## The Architecture: Frontend Elasticity, Backend Sovereignty
The goal is to leverage the strengths of each environment:
- Public Cloud (e.g., Frankfurt Region): Handles bursty HTTP traffic, K8s clusters for stateless microservices.
- CoolVDS (Oslo/Europe): Hosts the Master Database (MySQL/PostgreSQL) and persistent storage on unthrottled NVMe.
This setup significantly reduces latency for Norwegian users (ping from Oslo to local providers is often <3ms vs 25-30ms to Frankfurt) and keeps the "Crown Jewels" (the data) under stricter jurisdiction.
## Step 1: Orchestration with Terraform 0.12
Terraform 0.12 brought massive improvements to HCL earlier this year. Here is how we structure a hybrid deployment. We provision the security group in AWS to allow traffic strictly from our CoolVDS static IP.
```hcl
# main.tf (Terraform 0.12 syntax)
provider "aws" {
  region = "eu-central-1"
}

variable "coolvds_ip" {
  type        = string
  description = "The static IP of our CoolVDS database node in Oslo"
}

resource "aws_security_group" "allow_database_bridge" {
  name        = "allow_vpn_coolvds"
  description = "Allow IPsec traffic from CoolVDS"

  # IKE negotiation
  ingress {
    from_port   = 500
    to_port     = 500
    protocol    = "udp"
    cidr_blocks = ["${var.coolvds_ip}/32"]
  }

  # NAT traversal (NAT-T)
  ingress {
    from_port   = 4500
    to_port     = 4500
    protocol    = "udp"
    cidr_blocks = ["${var.coolvds_ip}/32"]
  }
}
```
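The AWS side of the tunnel also needs a customer gateway and a VPN connection. A minimal sketch of those resources follows; the resource names, the `aws_vpc.main` reference, and the static-routing choice are assumptions to adapt to your own VPC layout:

```hcl
# Register the CoolVDS node as a customer gateway and attach a VPN connection.
# Names and the static-routing choice are illustrative, not prescriptive.
resource "aws_customer_gateway" "coolvds" {
  bgp_asn    = 65000 # required by the API even when routing statically
  ip_address = var.coolvds_ip
  type       = "ipsec.1"
}

resource "aws_vpn_gateway" "main" {
  vpc_id = aws_vpc.main.id # assumes an aws_vpc.main resource exists elsewhere
}

resource "aws_vpn_connection" "oslo" {
  vpn_gateway_id      = aws_vpn_gateway.main.id
  customer_gateway_id = aws_customer_gateway.coolvds.id
  type                = "ipsec.1"
  static_routes_only  = true # no BGP daemon on the CoolVDS side
}
```

Once applied, the AWS console exposes the tunnel endpoints and pre-shared keys you will need for the strongSwan configuration in Step 2.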
Pro Tip: Never hardcode IPs. Pass your CoolVDS IP as a variable during the terraform apply phase to keep your infrastructure code portable.
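For example, a `terraform.tfvars` file keeps the address out of your version-controlled `.tf` files (the IP below is a placeholder):

```hcl
# terraform.tfvars -- keep this file out of version control
coolvds_ip = "185.x.x.x" # placeholder; substitute your real CoolVDS static IP
```

Alternatively, pass it inline with `terraform apply -var="coolvds_ip=185.x.x.x"`.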
## Step 2: Securing the Bridge (Site-to-Site VPN)
Since your database is now external to the public cloud VPC, you must secure the link. While WireGuard is gaining traction, it is not yet in the mainline Linux kernel as of late 2019. For production environments, we stick to the battle-tested StrongSwan (IPsec).
Here is a production-ready ipsec.conf for the CoolVDS node (Left) connecting to an AWS VPN Gateway (Right).
```
# /etc/ipsec.conf on the CoolVDS node (Debian 10 / CentOS 7)
config setup
    charondebug="ike 2, knl 2, cfg 2"

conn oslo-to-frankfurt
    authby=secret
    auto=start
    keyexchange=ikev2
    ike=aes256-sha256-modp2048!
    esp=aes256-sha256-modp2048!
    # Local CoolVDS settings
    left=%defaultroute
    # Your CoolVDS public IP
    leftid=185.x.x.x
    leftsubnet=10.10.1.0/24
    # Remote cloud settings
    # AWS VPN public IP
    right=3.x.x.x
    rightsubnet=172.16.0.0/16
    type=tunnel
    dpddelay=30
    dpdtimeout=120
    dpdaction=restart
```
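Because the connection uses `authby=secret`, both ends need a matching pre-shared key. A sketch of `/etc/ipsec.secrets` is below; the key string is a placeholder, since AWS generates the real PSK when you create the VPN connection:

```
# /etc/ipsec.secrets -- chmod 600, owned by root
# <local IP> <remote IP> : PSK "<key>"
185.x.x.x 3.x.x.x : PSK "replace-with-the-psk-from-the-aws-console"
```

Reload the secrets with `ipsec rereadsecrets`, then bring the tunnel up with `ipsec up oslo-to-frankfurt` and confirm it with `ipsec status`.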
Verify throughput and latency immediately after establishing the tunnel; iperf3 measures sustained bandwidth, while a plain ping exposes round-trip time and jitter. You cannot afford jitter on a database connection.

```shell
ping -c 20 172.16.0.5             # round-trip latency and jitter through the tunnel
iperf3 -c 172.16.0.5 -t 30 -P 4   # sustained throughput across 4 parallel streams
```
## Step 3: Database Performance Tuning on NVMe
One of the primary reasons to move the database out of the hyperscaler is disk I/O. Public clouds use token buckets for IOPS. If you deplete your burst balance, your query performance falls off a cliff.
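To see why burst depletion hurts, here is a toy token-bucket model. The numbers are purely illustrative, not any provider's actual rates: a bucket that refills at a modest baseline but permits large bursts runs dry quickly under sustained load, and then every query is throttled to the baseline.

```python
# Toy token-bucket model of burstable cloud IOPS (illustrative numbers only).
def seconds_until_throttled(bucket_size, refill_rate, demand_rate):
    """How long a full credit bucket sustains `demand_rate` IOPS before throttling."""
    if demand_rate <= refill_rate:
        return float("inf")  # the refill keeps up; the bucket never empties
    # The bucket drains at (demand - refill) credits per second.
    return bucket_size / (demand_rate - refill_rate)

# e.g. 5.4M burst credits, 100 IOPS baseline, a batch job demanding 3,000 IOPS:
print(seconds_until_throttled(5_400_000, 100, 3_000))  # ~1862 seconds (~31 min)
```

After roughly half an hour, the batch job above collapses from 3,000 IOPS to the 100 IOPS baseline; that is the cliff described in the paragraph above.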
On CoolVDS, we utilize direct-attached NVMe storage. To take advantage of this, you must tune your MariaDB 10.4 or MySQL 8.0 configuration. The default settings assume spinning rust (HDDs).
```ini
# /etc/mysql/my.cnf
[mysqld]
# NVMe optimization
innodb_io_capacity     = 2000
innodb_io_capacity_max = 4000
innodb_flush_neighbors = 0

# Memory -- assumes a 16GB RAM VDS dedicated to the database
innodb_buffer_pool_size = 12G
innodb_log_file_size    = 2G

# Replication durability for hybrid cloud
sync_binlog = 1
innodb_flush_log_at_trx_commit = 1
```
Setting innodb_flush_neighbors = 0 is critical for SSD/NVMe storage. The old method of grouping writes was designed for rotating platters; on NVMe, it just adds latency.
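Before trusting the config, measure what the disk actually delivers. A quick fio random-write test is sketched below; the file path, 4G size, and queue depth are arbitrary choices, and the 4k random-write pattern roughly mimics InnoDB page flushing:

```shell
# 4k random writes, direct I/O, bypassing the page cache
fio --name=nvme-test --filename=/var/lib/mysql/fio.test --size=4G \
    --rw=randwrite --bs=4k --direct=1 --ioengine=libaio \
    --iodepth=32 --runtime=60 --time_based --group_reporting
rm /var/lib/mysql/fio.test
```

A sensible rule of thumb is to set `innodb_io_capacity` to a conservative fraction of the IOPS fio reports, leaving headroom for reads and background tasks.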
## The Latency Trade-off
If your application requires strict ACID compliance, the network latency between Oslo (CoolVDS) and Frankfurt (App Servers) typically sits around 20-25ms. For most web applications, this is negligible. However, if you have a "chatty" application that makes 50 sequential queries per page load, that 20ms adds up to a full second of wait time.
The Solution: Implement aggressive caching (Redis/Memcached) on the application side (in the cloud) to reduce read-trips to the database in Oslo.
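A minimal cache-aside sketch follows. An in-memory dict stands in for Redis here, and `fetch_from_oslo` is a hypothetical placeholder for the 20ms database round-trip; with a real Redis client the structure is identical.

```python
import time

CACHE = {}         # stands in for Redis: key -> (value, expiry timestamp)
TTL_SECONDS = 30   # how long a cached value stays fresh

def fetch_from_oslo(key):
    """Hypothetical stand-in for a query against the database in Oslo (~20ms)."""
    return f"value-for-{key}"

def get(key):
    """Cache-aside read: serve from cache, fall back to the database on a miss."""
    hit = CACHE.get(key)
    if hit is not None:
        value, expires = hit
        if time.time() < expires:
            return value              # cache hit: no cross-border round-trip
    value = fetch_from_oslo(key)      # cache miss: one trip to Oslo
    CACHE[key] = (value, time.time() + TTL_SECONDS)
    return value

print(get("user:42"))  # first call misses and queries Oslo
print(get("user:42"))  # second call is served from the local cache
```

For that 50-query page load, even a modest hit rate collapses most of the sequential 20ms round-trips into sub-millisecond local reads.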
## Data Sovereignty and Compliance
By keeping the database file system on a server physically located in Norway (or the EEA), you simplify your GDPR compliance posture. You can demonstrate that the data at rest resides within the correct jurisdiction. While the application processes data in the cloud, the permanent record—the source of truth—is under your direct control on CoolVDS hardware.
| Feature | Public Cloud RDS | CoolVDS Managed VPS |
|---|---|---|
| Storage Backend | Network Attached (EBS/Persistent Disk) | Local NVMe (PCIe) |
| Latency to NIX (Oslo) | ~25ms (from Frankfurt/London) | < 3ms |
| Data Jurisdiction | US Company (CLOUD Act applies) | Norwegian/EU Jurisdiction |
| Cost per 1k IOPS | $$$ (Provisioned IOPS) | Included |
## Conclusion
We are entering a new decade where blindly trusting a single US-based provider is a risk to business continuity. The "One Cloud" era is ending. The smart money in 2020 will be on hybrid architectures that use commodity compute for scaling and premium, local infrastructure for data integrity.
If you are ready to architect a backend that respects both your budget and your users' privacy, you need raw performance without the noisy neighbors.
Check your current I/O wait times. If they are creeping up, deploy a CoolVDS NVMe instance in 55 seconds and benchmark the difference yourself.