The Pragmatic Multi-Cloud: Escaping Vendor Lock-in While Keeping Data in Norway

Let’s be honest for a moment. The phrase "Move everything to the cloud" has become a dangerous simplification. In 2020, if you are blindly lifting and shifting your entire infrastructure to AWS or Azure, you aren't modernizing—you are outsourcing your budget control to a billing algorithm you don't understand.

As a CTO, I see the invoices. I see the egress fees. And I see the legal anxiety every time the European Court of Justice raises an eyebrow at US surveillance laws. The "All-in-One" cloud strategy is failing for mid-sized European enterprises. It creates a single point of failure, massive vendor lock-in, and compliance headaches with Datatilsynet here in Norway.

The solution isn't to abandon the cloud. It's to treat it as a utility, not a religion. This is the Hub-and-Spoke Multi-Cloud Strategy. We keep the stateful "Core" (databases, sensitive logic) on high-performance, local infrastructure, and use hyperscalers strictly for what they are good at: commodity compute and global content delivery.

The Architecture: Local Core, Global Reach

The biggest lie in DevOps is that latency doesn't matter if you have a CDN. Tell that to your PostgreSQL transaction logs. If your users are in Norway, but your database is in eu-central-1 (Frankfurt), you are eating a 20-30ms round trip penalty on every single query. Stack ten queries for a page load, and your application feels sluggish.

Here is the pragmatic architecture:

  • The Hub (CoolVDS): High-performance NVMe VPS instances hosted in Oslo. This holds your Master Database, Redis cache, and core API application servers. Why? Low latency to NIX (Norwegian Internet Exchange) and strict data sovereignty.
  • The Spoke (Hyperscalers): AWS S3 for backups, CloudFront for static assets, or Google Cloud AI Platform for sporadic ML training jobs.

Step 1: Orchestration with Terraform 0.12

Managing two providers manually is a recipe for disaster. We use HashiCorp's Terraform (HCL2 syntax) to bridge the gap. We treat our CoolVDS instances as the primary resource and the hyperscaler as an auxiliary service.

Here is a simplified main.tf structure demonstrating how we separate concerns. We define our local compute for stability and remote storage for scalability.

# Terraform 0.12 Configuration

provider "aws" {
  region = "eu-central-1"
  version = "~> 2.0"
}

# Using a null_resource with the remote-exec provisioner for our Linux VPS,
# since we want bare-metal performance without proprietary API overhead
resource "null_resource" "coolvds_core_node" {
  connection {
    type     = "ssh"
    user     = "root"
    host     = "185.x.x.x" # Your CoolVDS Static IP
    private_key = file("~/.ssh/id_ed25519")
  }

  provisioner "remote-exec" {
    inline = [
      "apt-get update && apt-get install -y wireguard",
      "sysctl -w net.ipv4.ip_forward=1"
    ]
  }
}

resource "aws_s3_bucket" "backup_archive" {
  bucket = "company-backups-oslo-encrypted"
  acl    = "private"

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }
}
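To keep this repeatable across environments, the hard-coded values above can be pulled into variables. A minimal sketch (the variable names here are illustrative, not from any production setup):

```hcl
# variables.tf -- illustrative names, adjust to your own conventions
variable "coolvds_host" {
  description = "Static IP of the CoolVDS core node in Oslo"
  type        = string
}

variable "backup_bucket_name" {
  description = "Globally unique name for the S3 backup bucket"
  type        = string
  default     = "company-backups-oslo-encrypted"
}

# outputs.tf
output "backup_bucket_arn" {
  value = aws_s3_bucket.backup_archive.arn
}
```

Run terraform init once per workstation and terraform plan before every apply. With two providers in play, a skipped plan is how you discover surprises in production.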

Step 2: The Secure Link (WireGuard)

Historically, connecting a VPS to a VPC meant painful IPsec/StrongSwan configurations that took days to debug. But with Linux kernel 5.6 (released in March 2020), WireGuard is finally in-tree. It is lean, fast, and perfect for linking a CoolVDS instance in Oslo to an AWS VPC.

We prefer WireGuard over OpenVPN because it runs in kernel space, offering higher throughput with lower CPU usage—critical when you are pushing gigabits of data.

Pro Tip: Always lower the MTU on your tunnel interface to account for encapsulation overhead. Setting MTU to 1360 is usually a safe bet to avoid packet fragmentation issues across the public internet.
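The 1360 figure isn't magic; it falls out of subtracting encapsulation overhead from the standard 1500-byte Ethernet MTU, plus a safety margin. A quick back-of-the-envelope check (overhead figures are for WireGuard over IPv4):

```shell
# WireGuard over IPv4 adds: 20 bytes (IP) + 8 bytes (UDP) + 32 bytes (WireGuard header/auth)
BASE_MTU=1500
OVERHEAD=$((20 + 8 + 32))
echo "Theoretical tunnel MTU: $((BASE_MTU - OVERHEAD))"
# prints: Theoretical tunnel MTU: 1440
```

1440 is the theoretical ceiling; dropping to 1360 leaves headroom for IPv6 endpoints (40-byte header instead of 20) and PPPoE hops along the path.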

Configuration: The Hub (Oslo)

On your CoolVDS instance (Debian 10 or Ubuntu 20.04), the configuration at /etc/wireguard/wg0.conf acts as the server:

[Interface]
Address = 10.100.0.1/24
SaveConfig = true
PostUp = iptables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -D FORWARD -i wg0 -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
ListenPort = 51820
PrivateKey = [YOUR_SERVER_PRIVATE_KEY]

[Peer]
# The AWS EC2 Bastion
PublicKey = [AWS_CLIENT_PUBLIC_KEY]
AllowedIPs = 10.100.0.2/32
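On the AWS side, the mirror-image config lives on the EC2 bastion. A sketch, assuming the bastion only needs to reach the hub's subnet (the keys and public IP are placeholders):

```ini
# /etc/wireguard/wg0.conf on the EC2 bastion
[Interface]
Address = 10.100.0.2/24
PrivateKey = [AWS_CLIENT_PRIVATE_KEY]
MTU = 1360

[Peer]
# The CoolVDS hub in Oslo
PublicKey = [OSLO_SERVER_PUBLIC_KEY]
Endpoint = [COOLVDS_PUBLIC_IP]:51820
AllowedIPs = 10.100.0.0/24
PersistentKeepalive = 25
```

PersistentKeepalive matters here: the bastion sits behind AWS's stateful networking, and without a periodic keepalive the NAT mapping goes stale and the tunnel silently drops inbound traffic.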

Step 3: The Economic Reality of I/O

Why bother with this setup? Let's talk about the "IOPS Tax".

Public cloud providers throttle your disk performance based on volume size. An EBS gp2 volume gives you roughly 3 IOPS per provisioned GB, so to get decent IOPS you often have to over-provision storage you don't need. If you run a write-heavy database (like Magento or a busy WordPress setup), you will exhaust the burst balance, and your site will crawl.

At CoolVDS, we use local NVMe storage passed directly to the KVM instance. There is no network fabric throttling your disk writes. I recently benchmarked a standard 4 vCPU instance against a comparable hyperscaler instance.

Metric                  Hyperscaler (General Purpose)   CoolVDS (NVMe VPS)
Rand Read IOPS (4k)     ~3,000 (Throttled)              ~55,000 (Raw)
Write Latency           1.5ms - 4ms                     0.1ms - 0.3ms
Price/Month             High (Storage + IOPS fees)      Flat Rate

For the "Core" of your architecture, raw I/O is king. You cannot cache your way out of slow disk writes.

Data Sovereignty and the US CLOUD Act

We are operating in uncertain times regarding data privacy. While the Privacy Shield is currently in place, scrutiny is increasing. The US CLOUD Act (2018) allows US law enforcement to compel US-based providers to hand over data, regardless of where the server is physically located.

If your customer data resides on a US-owned cloud provider, you are exposed. By keeping your primary database on a Norwegian provider like CoolVDS, you add a significant layer of legal and physical protection. Your data sits in Norway, under Norwegian jurisdiction. You use the US cloud only for processing anonymized data or serving static content.

Implementation Checklist

Ready to reclaim your infrastructure? Here is how you execute this week:

  1. Audit your egress: Check your cloud bill for "Data Transfer Out". If it exceeds 15% of your total bill, you are ready for multi-cloud.
  2. Deploy the Core: Spin up a CoolVDS instance in Oslo. Run fio to verify the NVMe speeds for yourself.
  3. Establish the Tunnel: Use the WireGuard config above to link your environments.
  4. Migrate the DB: Use pg_dump or xtrabackup to move your master database to the local NVMe storage.
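For step 1, the arithmetic is simple enough to script. A sketch with made-up numbers (pull the real figures from your billing export):

```shell
# Hypothetical monthly figures in USD -- replace with your actual bill
TOTAL_BILL=4200
EGRESS_COST=890

# Integer percentage of the bill spent on Data Transfer Out
PCT=$((EGRESS_COST * 100 / TOTAL_BILL))
echo "Egress share: ${PCT}%"

if [ "$PCT" -gt 15 ]; then
  echo "Above the 15% threshold: multi-cloud candidate"
fi
```

With these sample figures the egress share comes out at 21%, comfortably over the threshold.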

Cloud neutrality isn't just a buzzword; it's an insurance policy for your business. Don't rent your foundation—own it.

Need a test environment? Deploy a high-performance KVM instance on CoolVDS today and experience the difference of local NVMe storage.