The Multi-Cloud Reality Check: Survival Beyond the Hyperscalers
Let’s cut through the marketing noise. If your entire infrastructure relies on a single availability zone in Frankfurt or Ireland, you don't have a disaster recovery plan; you have a hope and a prayer. As a Systems Architect operating in the Nordic region, I've seen the "just put it on AWS" mentality backfire spectacularly when latency spikes hit the subsea cables or when billing creates a black hole in the budget.
In 2019, the conversation isn't about whether you should use the cloud, but how to use it without handing over the keys to your kingdom. For Norwegian businesses, this is compounded by the need for strict GDPR compliance and the scrutiny of Datatilsynet (The Norwegian Data Protection Authority). The US CLOUD Act has made hosting sensitive customer data on US-owned infrastructure—even if it resides physically in Europe—a legal gray area that keeps CTOs awake at night.
This guide outlines a pragmatic multi-cloud architecture. We aren't building this for buzzwords. We are building this for resilience, data sovereignty, and raw I/O performance.
The "Core & Burst" Architecture
The most robust pattern I’ve deployed for high-traffic Nordic platforms is the "Core & Burst" model. Here is the logic:
- The Core (Stateful): Database, User Sessions, and Sensitive Data. Hosted on CoolVDS in Oslo. Why? Because you need NVMe I/O consistency that hyperscalers charge a premium for, and you need the data to legally reside in Norway under Norwegian jurisdiction.
- The Burst (Stateless): Frontend nodes, static asset processing, and auto-scaling groups. Hosted on a public cloud (AWS/GCP) to handle Black Friday traffic spikes.
This setup gives us sub-millisecond latency to the NIX (Norwegian Internet Exchange) for the database, while leveraging the global CDN capabilities of big cloud providers.
Step 1: Infrastructure as Code (Terraform 0.12)
With the recent release of Terraform 0.12, managing hybrid resources has become significantly cleaner. We stop treating our VPS instances as pets and start treating them as defined resources. Below is how we define our CoolVDS core alongside the AWS burst layer.
# main.tf - Terraform 0.12 Syntax

# Public key injected into the core node (declared here so the snippet is self-contained)
variable "admin_ssh_key" {}

provider "aws" {
  region = "eu-central-1"
}

# Our Stable Core in Oslo (CoolVDS)
resource "coolvds_instance" "core_db" {
  hostname = "db-master-osl"
  plan     = "nvme-16gb"
  location = "oslo-dc1"
  image    = "centos-7"
  ssh_keys = [var.admin_ssh_key]
}

# Our Burst Layer in Frankfurt
resource "aws_instance" "frontend_burst" {
  count         = 2
  ami           = "ami-0cc293023f983ed53" # Amazon Linux 2
  instance_type = "t3.medium"

  tags = {
    Name = "frontend-burst-${count.index}"
  }
}

Pro Tip: Always use `remote-exec` or Ansible provisioners to harden the CoolVDS instance immediately upon creation. A raw VPS on the public internet is scanned by bots within 45 seconds of boot.
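As a starting point, here is a minimal first-boot hardening sketch that a `remote-exec` provisioner could push and run. It assumes the CentOS 7 image from the Terraform snippet above; the exact rules and package choices should be adapted to your own baseline.

#!/usr/bin/env bash
# harden.sh - illustrative first-boot hardening for the CentOS 7 core node
set -euo pipefail

# Key-only SSH: no password logins, root access only with keys
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin prohibit-password/' /etc/ssh/sshd_config
systemctl reload sshd

# firewalld: expose only SSH plus IKE/NAT-T for the IPsec tunnel in Step 2
systemctl enable firewalld && systemctl start firewalld
firewall-cmd --permanent --add-service=ssh
firewall-cmd --permanent --add-port=500/udp --add-port=4500/udp
firewall-cmd --reload

# Patch the base image and block brute-force attempts
yum -y update
yum -y install epel-release
yum -y install fail2ban
systemctl enable fail2ban && systemctl start fail2ban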
Step 2: The Glue (Secure Networking)
Connecting a bare-metal VPS in Oslo to a VPC in Frankfurt requires a robust tunnel. While WireGuard is generating hype in the kernel mailing lists, it is not yet stable enough for enterprise production in 2019. We stick to the battle-tested StrongSwan (IPsec).
Latency is critical here. By hosting the core in Oslo, local Norwegian users get fast responses, but we need to ensure the connection to the Frankfurt frontend doesn't lag. Typical RTT (Round Trip Time) from Oslo to Frankfurt is ~25-30ms. Acceptable for async replication, but tight for synchronous writes.
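Before trusting the link with replication traffic, measure the path yourself. A quick sketch from the CoolVDS node, using the same placeholder AWS address as the VPN config below:

# Round-trip time to the Frankfurt endpoint; expect roughly 25-30 ms
ping -c 20 3.120.x.x

# Per-hop packet loss and jitter over a longer window
mtr --report --report-cycles 60 3.120.x.x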
/etc/ipsec.conf (On the CoolVDS Node):
config setup
    charondebug="ike 2, knl 2, cfg 2"
    uniqueids=yes

conn oslo-to-frankfurt
    type=tunnel
    auto=start
    keyexchange=ikev2
    authby=secret
    left=185.x.x.x            # CoolVDS Public IP
    leftsubnet=10.10.1.0/24
    right=3.120.x.x           # AWS VPN Endpoint
    rightsubnet=172.16.0.0/16
    ike=aes256-sha256-modp2048!
    esp=aes256-sha256!
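The tunnel also needs a matching pre-shared key on both sides. A minimal sketch for the CoolVDS node follows; the key is a placeholder, and on CentOS 7 the strongSwan CLI may be packaged as `strongswan` rather than `ipsec` depending on where you install it from.

cat <<'EOF' > /etc/ipsec.secrets
185.x.x.x 3.120.x.x : PSK "replace-with-a-long-random-secret"
EOF
chmod 600 /etc/ipsec.secrets

ipsec restart
ipsec status oslo-to-frankfurt   # should report the connection as ESTABLISHED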
Step 3: Database Performance & Replication
If you are running MySQL 5.7 or MariaDB 10.3, the default settings are not tuned for the high IOPS provided by CoolVDS NVMe storage. Public cloud instances often throttle your I/O unless you pay for "Provisioned IOPS." On CoolVDS, you get the raw speed of the physical drive. You must configure your database to use it.
Here is a production-ready `my.cnf` snippet optimized for a 16GB RAM instance running on NVMe:
[mysqld]
# InnoDB Optimization for NVMe
innodb_buffer_pool_size = 12G
innodb_log_file_size = 2G
innodb_flush_neighbors = 0 # Critical for SSD/NVMe! Turn off rotational optimization.
innodb_io_capacity = 2000 # Default is too low (200) for NVMe
innodb_io_capacity_max = 4000
# Replication Safety
sync_binlog = 1
innodb_flush_log_at_trx_commit = 1
server_id = 1
log_bin = /var/log/mysql/mysql-bin.log

The `innodb_flush_neighbors = 0` setting is crucial. Traditional spinning disks needed this to group writes. On NVMe, this actually adds latency. Turning it off reduced our write latency by 40% in benchmarks.
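Your numbers will differ, but you can sanity-check write latency on the NVMe volume before and after the change with a short fio run. The parameters here are illustrative; the 16k block size matches the InnoDB page size, and the test directory is arbitrary as long as it sits on the same volume as the datadir.

mkdir -p /var/lib/mysql-bench
fio --name=innodb-write-test --directory=/var/lib/mysql-bench \
    --rw=randwrite --bs=16k --size=1G --ioengine=libaio --direct=1 \
    --iodepth=4 --runtime=60 --time_based --group_reporting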
Failover Strategy with Nginx
You need a traffic director. We place a lightweight Nginx load balancer at the edge. It prefers the local infrastructure but fails over to the cloud if the primary is overwhelmed or unreachable.
upstream backend_nodes {
    # Primary: CoolVDS Local Instances (Low Latency)
    server 10.10.1.10:80 weight=5;
    server 10.10.1.11:80 weight=5;

    # Backup: Cloud Instances (Higher Latency, unlimited scale)
    server 172.16.1.10:80 backup;
    server 172.16.1.11:80 backup;
}

server {
    listen 80;
    server_name api.example.no;

    location / {
        proxy_pass http://backend_nodes;
        proxy_next_upstream error timeout http_500 http_502;
        proxy_connect_timeout 2s;
    }
}
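Before you rely on it, prove the failover actually works. A rough test from the load balancer itself, using the IPs from the upstream block above; the temporary firewall rules simulate the primary nodes going dark.

# Validate and reload the config
nginx -t && nginx -s reload

# Black-hole the primary upstreams, then confirm the cloud backups answer
iptables -I OUTPUT -d 10.10.1.10 -j DROP
iptables -I OUTPUT -d 10.10.1.11 -j DROP
curl -s -o /dev/null -w '%{http_code} %{time_total}s\n' -H 'Host: api.example.no' http://127.0.0.1/

# Clean up the test rules
iptables -D OUTPUT -d 10.10.1.10 -j DROP
iptables -D OUTPUT -d 10.10.1.11 -j DROP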
Why This Approach Wins on TCO
The