Escaping the Vendor Trap: A Pragmatic Multi-Cloud Strategy for European Architects
Let's be honest: the "Multi-Cloud" buzzword usually sells more consultancy hours than it solves actual problems. In 2019, every CIO is terrified of AWS lock-in, yet most engineering teams are blindly building on proprietary services like Lambda or DynamoDB that turn any future migration into an expensive rewrite. If you are building a system today, you need to decouple your logic from your infrastructure provider.
I have spent the last six months migrating a fintech workload from a purely US-based public cloud to a hybrid setup distributed across Frankfurt and Oslo. The lesson? Latency is physics, and bandwidth bills are the tax on poor architecture. This guide isn't about abstract concepts; it's about the messy reality of connecting disparate networks, handling state, and why a strategic footprint in Norway (specifically using high-performance providers like CoolVDS) is your best defense against data sovereignty nightmares.
The Architecture of Independence
A true multi-cloud strategy isn't just about having servers in AWS and Azure. It's about commoditizing the compute layer. If your application requires a specific cloud provider's proprietary service to run, you aren't multi-cloud; you are multi-billed.
The stack that actually works in production right now (early 2019) looks like this:
- Compute: Standard KVM-based instances (Cloud Agnostic).
- Orchestration: Kubernetes 1.13, or Docker Swarm if you prefer simplicity (see the bootstrap sketch after this list).
- Infrastructure as Code: Terraform (0.11.x).
- Traffic Routing: HAProxy or NGINX at the edge.
- Data Layer: Galera Cluster or PostgreSQL with logical replication.
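To show what "cloud agnostic" means in practice, here is a rough sketch of bootstrapping a Kubernetes 1.13 cluster with kubeadm: the control plane lives on the Oslo gateway, and workers join over the private VPN regardless of which provider they run on. The addresses and CIDR below are placeholders for your own network.

# bootstrap-cluster.sh (sketch; adjust addresses and CIDRs to your own network)
# Control plane on the Oslo gateway, advertised on the private VPN address
kubeadm init \
  --apiserver-advertise-address 10.10.1.10 \
  --pod-network-cidr 10.244.0.0/16

# Generate a join command for the workers
kubeadm token create --print-join-command

# Run the printed command on every worker, whether it sits in Oslo or Frankfurt:
# kubeadm join 10.10.1.10:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>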
Step 1: The Neutral Terraform Base
Stop clicking buttons in the console. If it's not in git, it doesn't exist. We use Terraform to abstract the provider differences. While Terraform 0.12 is on the horizon, most of us are still wrangling 0.11 syntax. Here is how we structure a module to deploy a worker node on CoolVDS versus a hyperscaler, keeping the module's input variables identical on both sides.
# main.tf (Terraform 0.11 style)
module "coolvds_worker" {
  source      = "./modules/compute/coolvds"
  hostname    = "norway-worker-01"
  region      = "oslo-dc1"
  flavor      = "nvme.4c.8g" # High I/O for database loads
  ssh_key_ids = ["${var.deploy_key_id}"]
  private_net = true
}

module "aws_backup_worker" {
  source      = "./modules/compute/aws"
  hostname    = "frankfurt-backup-01"
  region      = "eu-central-1"
  flavor      = "t3.large"
  ssh_key_ids = ["${var.deploy_key_id}"]
}
Pro Tip: Notice the flavor selection. On CoolVDS, standard instances come with NVMe storage by default. On AWS, you often need to provision expensive Provisioned IOPS (io1) volumes to match the I/O throughput that CoolVDS offers out of the box. For database masters, always prioritize local NVMe.
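To keep the two modules interchangeable, both expose the same input variables and only the internals differ. A minimal sketch of that shared interface in 0.11 syntax (the variable names follow the calls above; the module layout is our own convention, not a published one):

# modules/compute/coolvds/variables.tf (mirrored in modules/compute/aws)
variable "hostname" {
  description = "Node hostname, also used for inventory tagging"
}

variable "region" {
  description = "Provider-specific region or datacenter identifier"
}

variable "flavor" {
  description = "Instance size, mapped to the provider's own naming inside the module"
}

variable "ssh_key_ids" {
  type        = "list"
  description = "SSH keys injected at boot"
}

variable "private_net" {
  description = "Attach the node to the private network"
  default     = false
}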
The Latency & GDPR Factor
Here is the uncomfortable truth about the US CLOUD Act: it lets American authorities compel US-based providers to hand over data regardless of where that data physically lives, and that makes European enterprises nervous. While the Privacy Shield is currently in place, the winds of regulation are blowing cold. The Norwegian Datatilsynet (Data Protection Authority) is rigorous.
Keeping your primary user database (PII) on a Norwegian VPS isn't just about patriotism; it's about compliance and latency. If your customers are in Scandinavia, round-tripping to Frankfurt or Dublin adds 20-30ms. Routing locally to NIX (Norwegian Internet Exchange) keeps that under 5ms.
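Don't take latency numbers on faith; measure them from where your users actually sit. A quick sanity check with curl against both regions (the hostnames below are placeholders for your own endpoints):

# Compare TCP connect time and time-to-first-byte from a Nordic vantage point
curl -o /dev/null -s -w "connect: %{time_connect}s  ttfb: %{time_starttransfer}s\n" https://oslo.example.net/health
curl -o /dev/null -s -w "connect: %{time_connect}s  ttfb: %{time_starttransfer}s\n" https://frankfurt.example.net/health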
Networking the Clouds
You need a secure mesh. In 2019, OpenVPN is the battle-tested standard, though I am keeping a close eye on the experimental WireGuard protocol. For now, we stick to OpenVPN for site-to-site connectivity.
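A stripped-down site-to-site tunnel can be as simple as a static-key point-to-point link. The sketch below assumes 10.10.1.0/24 in Oslo and 192.168.50.0/24 in Frankfurt (the addresses used later in this article); for production you would move to TLS mode with per-site certificates.

# /etc/openvpn/site-to-site.conf (Oslo gateway side)
dev tun
proto udp
port 1194
# Tunnel endpoint addresses: local / remote
ifconfig 10.9.0.1 10.9.0.2
# Pre-shared key, generated once with: openvpn --genkey --secret static.key
secret /etc/openvpn/static.key
# Route the Frankfurt private subnet through the tunnel
route 192.168.50.0 255.255.255.0
cipher AES-256-CBC
keepalive 10 60
persist-key
persist-tun

The Frankfurt side mirrors this configuration with a remote directive pointing at the Oslo gateway's public IP, the tunnel addresses swapped, and a route back to 10.10.1.0/24.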
We deploy a gateway node on CoolVDS in Oslo to act as the primary ingress for Nordic traffic. It terminates SSL and routes requests. If the local cluster becomes unavailable, traffic fails over to the secondary cloud.
# haproxy.cfg - Global Load Balancing
global
    log /dev/log local0
    maxconn 2000
    user haproxy
    group haproxy

defaults
    log global
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend http_front
    bind *:80
    # Redirect to HTTPS, standard practice
    redirect scheme https code 301 if !{ ssl_fc }

frontend https_front
    bind *:443 ssl crt /etc/ssl/certs/site.pem
    default_backend nodes_primary

backend nodes_primary
    balance roundrobin
    option httpchk HEAD /health HTTP/1.1\r\nHost:\ localhost
    # Primary Local Nodes (CoolVDS - Low Latency)
    server oslo-node-1 10.10.1.5:80 check weight 100
    server oslo-node-2 10.10.1.6:80 check weight 100
    # Burst Cloud Nodes (Higher Latency Backup)
    server aws-fra-1 192.168.50.5:80 check weight 10 backup
This configuration prioritizes the oslo-node servers. The backup directive is a pure failover mechanism: Frankfurt only receives traffic if both Oslo nodes fail their health checks, which keeps steady-state egress fees close to zero.
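To verify the failover behaviour, or to drain an Oslo node for maintenance without an edit-and-reload cycle, you can talk to HAProxy's runtime API. This assumes HAProxy 1.6 or newer and a stats socket added to the global section above:

# Add to the global section of haproxy.cfg:
#   stats socket /var/run/haproxy.sock mode 600 level admin

# Show the proxy, server and status columns from the stats CSV
echo "show stat" | socat stdio /var/run/haproxy.sock | cut -d, -f1,2,18

# Drain a node before maintenance, then bring it back
echo "set server nodes_primary/oslo-node-1 state maint" | socat stdio /var/run/haproxy.sock
echo "set server nodes_primary/oslo-node-1 state ready" | socat stdio /var/run/haproxy.sock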
Data Gravity: The Hardest Problem
Compute is stateless; data is heavy. You cannot easily "burst" a database. For a robust multi-cloud setup, I recommend a Master-Slave replication topology where the Master resides in the jurisdiction most favorable to your legal requirements.
For a project last month, we configured MySQL 5.7 with GTID replication. The Master sits on a CoolVDS NVMe instance, essential for write-heavy workloads that would otherwise stall in iowait. The Slave sits in a hyperscale cloud for analytics and disaster recovery.
# my.cnf (Master optimization for NVMe)
[mysqld]
innodb_buffer_pool_size = 4G
innodb_log_file_size = 1G
innodb_flush_log_at_trx_commit = 1 # ACID compliance is non-negotiable
innodb_flush_method = O_DIRECT
innodb_io_capacity = 2000 # Utilization of NVMe speeds
innodb_io_capacity_max = 4000
# Replication Settings
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
gtid_mode = ON
enforce_gtid_consistency = ON
Setting innodb_io_capacity to 2000 lets the database actually use the IOPS the underlying NVMe storage provides. On spinning rust or throttled cloud volumes, the same setting would saturate the device and tank performance.
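On the Slave side, GTID auto-positioning keeps the setup short. A sketch of the replica configuration and the one-off setup statements (host, user and password are placeholders; the Master is reached over the VPN tunnel):

# my.cnf (Slave in the hyperscale cloud)
[mysqld]
server-id = 2
gtid_mode = ON
enforce_gtid_consistency = ON
read_only = ON
relay_log = /var/log/mysql/relay-bin

After seeding the Slave from a backup of the Master, point it at the tunnel address and start replication:

-- Run once on the Slave (MySQL 5.7)
CHANGE MASTER TO
  MASTER_HOST = '10.9.0.1',
  MASTER_USER = 'repl',
  MASTER_PASSWORD = '********',
  MASTER_AUTO_POSITION = 1;
START SLAVE;
SHOW SLAVE STATUS\G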
The Economic Argument
Let's talk TCO (Total Cost of Ownership). Hyperscalers charge for everything: per GB of egress, per million I/O requests, per hour of IP usage.
| Resource | Hyperscale Provider (Avg) | CoolVDS (Norway) |
|---|---|---|
| vCPU | Shared / Throttled (Credits) | Dedicated / High Performance |
| Storage | Standard SSD (Extra $$ for PIOPS) | NVMe (Standard) |
| Bandwidth | Expensive Egress Fees | Generous TB allowances |
For steady-state workloads (your core application, your database, your queues), paying a premium for "elasticity" you rarely use is wasteful. Use CoolVDS for the 24/7 heavy lifting. Use the hyperscalers for what they are good at: object storage (S3) and temporary burst compute.
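In practice that split can be as mundane as nightly database dumps streaming straight into object storage on the hyperscaler while the live data never leaves Oslo. A minimal sketch, assuming the AWS CLI is configured on the box and using a placeholder bucket name:

# nightly-backup.sh - stream a compressed dump into S3 without touching local disk
mysqldump --single-transaction --all-databases \
  | gzip \
  | aws s3 cp - "s3://example-backups/mysql/$(date +%F).sql.gz"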
Conclusion
Achieving a vendor-neutral infrastructure in 2019 requires discipline. You must resist the temptation of proprietary PaaS offerings and stick to standard building blocks: Linux, Kubernetes, and standard SQL.
By anchoring your infrastructure in Norway with CoolVDS, you gain three strategic advantages: compliance with strict European privacy standards, unbeatable latency for Nordic users, and a predictable cost structure that won't blow your budget when traffic spikes.
Stop renting computers from companies that want to become your competitors. Build your own fortress.
Ready to benchmark the difference? Deploy a high-performance NVMe instance on CoolVDS today and see how 2ms latency feels.