The Pragmatic CTO’s Guide to Multi-Cloud in Norway: Beating Latency and Lock-in (2020 Edition)

If you are running a purely monolithic stack on a single hyperscaler in 2020, you are likely bleeding money. Conversely, if you are trying to split a WordPress site across three different cloud providers just to say you use "Multi-Cloud," you are engineering your own funeral. There is a middle ground. It is messy, it requires precise configuration, but it is the only way to balance the raw scale of AWS/GCP with the data sovereignty and latency requirements we face here in Norway.

The premise is simple: Public cloud for elastic compute. Local infrastructure for state and heavy I/O.

We are seeing a shift. The "lift and shift" migration to the cloud that dominated 2016-2018 has resulted in massive bills and unexpected latency issues for Nordic users. When your users are in Oslo, but your database is in Frankfurt (eu-central-1), the laws of physics apply. Light speed is fast, but network hops are slow.

The Architecture: The "Hybrid Core" Strategy

The most effective pattern I deployed this quarter involves a split-stack approach. We treat the hyperscaler (AWS/Azure) as a dumb commodity for CPU cycles and use a high-performance local VPS provider like CoolVDS for the "State Layer" (Database, Storage, PII).

Why this works (The Math):

  • Egress Fees: AWS charges exorbitant rates for data leaving their network. If your heavy assets (images, backups, datasets) live on CoolVDS, you pay a fraction of the cost for bandwidth.
  • Latency: Round-trip time (RTT) from Oslo to Frankfurt is typically 25-35ms, while RTT from Oslo to a local CoolVDS node is often sub-2ms. For a database making 50 sequential queries to generate a page, that difference accumulates to over a second of wait time (50 × ~28ms ≈ 1.4s); see the quick check below.
  • Compliance: With Datatilsynet (the Norwegian Data Protection Authority) becoming increasingly aggressive about GDPR, and the Privacy Shield framework looking shaky (legal experts are already warning about the upcoming CJEU rulings), keeping Personally Identifiable Information (PII) on Norwegian soil is a major risk-mitigation measure.
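
You can verify the latency math yourself before committing to anything. Here is a minimal check, assuming you run it from a machine in Oslo; both targets are placeholders for your own Frankfurt instance and CoolVDS node:

# Compare round trips from Oslo
ping -c 20 <your-frankfurt-instance>   # typically 25-35ms
ping -c 20 <your-coolvds-node>         # typically sub-2ms

# Back-of-envelope: 50 queries x (30ms - 2ms) = ~1.4s shaved off a single page render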

The Glue: Terraform 0.12 and WireGuard

In the past, connecting these environments was a nightmare of IPsec VPNs that dropped packets if you looked at them wrong. But with Linux Kernel 5.6 (released last month, March 2020), WireGuard is now in the mainline kernel. This changes everything. It is faster, leaner, and easier to automate than OpenVPN.
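
Before relying on the in-kernel module, it is worth confirming your distribution actually ships it. A quick sketch, assuming a standard distro kernel that builds WireGuard as a module:

# Confirm the running kernel and that the WireGuard module is available
uname -r
modinfo wireguard | head -n 3   # if this fails, fall back to the wireguard-dkms package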

Here is how we orchestrate a secure tunnel between an AWS auto-scaling group and a static backend on CoolVDS using Terraform.

1. The Terraform Setup

We use Terraform to provision the stack. Note that we aren't using complex provisioners; we stick to the basics for stability.

# main.tf (Terraform 0.12 syntax)

provider "aws" {
  region = "eu-central-1"
}

# The Stateless Frontend (AWS)
resource "aws_instance" "frontend" {
  ami           = "ami-0c55b159cbfafe1f0" # Ubuntu 18.04 LTS (AMI IDs are region-specific; look up the current ID for eu-central-1)
  instance_type = "t3.micro"
  
  user_data = <<-EOF
              #!/bin/bash
              apt-get update && apt-get install -y wireguard
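              # Note: on Ubuntu 18.04 the wireguard package may need the WireGuard PPA or bionic backports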
              # Script to pull config from secrets manager would go here
              EOF
  
  tags = {
    Name = "Frontend-Node"
  }
}

While AWS handles the frontend, your backend resides on a CoolVDS NVMe instance. We choose CoolVDS here specifically for the KVM virtualization. Unlike OpenVZ, KVM allows us to modify kernel modules if necessary and ensures no "noisy neighbor" steals our I/O operations per second (IOPS).
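
Once you have shell access on the backend, a quick sanity check confirms you really are on KVM rather than container-based virtualization (a sketch; systemd-detect-virt ships with any systemd-based distro):

# Verify full KVM virtualization, not a container
systemd-detect-virt          # should print "kvm"
lscpu | grep -i hypervisor   # should report KVM as the hypervisor vendor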

2. The WireGuard Bridge

Latency is the enemy. WireGuard runs in kernel space, offering lower latency than userspace VPNs. Here is the production-ready configuration for the CoolVDS side (the "Server").

File: /etc/wireguard/wg0.conf on the CoolVDS instance.

[Interface]
# The internal IP of the CoolVDS node within the VPN mesh
Address = 10.0.0.1/24
SaveConfig = true
PostUp = iptables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -D FORWARD -i wg0 -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
ListenPort = 51820
# Generate the keypair with: wg genkey | tee privatekey | wg pubkey > publickey
PrivateKey = <coolvds_server_private_key>

[Peer]
# The AWS Client
PublicKey = <aws_client_public_key>
AllowedIPs = 10.0.0.2/32

Pro Tip: Ensure you adjust the MTU. The default 1500 often causes fragmentation over public internet links. Setting MTU = 1360 in the [Interface] block usually fixes mysterious packet drops between providers.
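
For completeness, here is a sketch of the matching client side on the AWS instance. The keys, the CoolVDS public IP, and the habit of writing them straight into a file are placeholders; in production you would pull them from a secrets manager as hinted in the user_data above:

# On the AWS frontend: write the client config and bring up the tunnel
cat > /etc/wireguard/wg0.conf <<'EOF'
[Interface]
Address = 10.0.0.2/24
PrivateKey = <aws_client_private_key>
MTU = 1360

[Peer]
PublicKey = <coolvds_server_public_key>
Endpoint = <coolvds_public_ip>:51820
AllowedIPs = 10.0.0.0/24
PersistentKeepalive = 25
EOF

wg-quick up wg0
ping -c 3 10.0.0.1   # the CoolVDS end of the tunnel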

Database Performance: The MySQL 8 Bottleneck

Running a database across a hybrid cloud requires tuning. Since your application is remote (AWS) relative to the database (CoolVDS), you must optimize the MySQL configuration to reduce round-trips. We aren't just "tuning"; we are aligning the buffer pool with the physical NVMe capabilities.

On your CoolVDS instance (assuming a 16GB RAM plan), your /etc/mysql/my.cnf needs to look like this:

[mysqld]
# Allocate 70-80% of RAM to the pool on a dedicated DB server
innodb_buffer_pool_size = 12G

# Essential for SSD/NVMe storage
innodb_io_capacity = 2000
innodb_io_capacity_max = 4000
innodb_flush_neighbors = 0

# Network tuning for hybrid connection
max_allowed_packet = 64M
skip_name_resolve = 1

The innodb_flush_neighbors = 0 setting is critical. On traditional spinning rust (HDD), flushing neighbors helped sequential I/O. On the high-speed NVMe drives CoolVDS uses, this logic actually slows you down. Turn it off.
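
To confirm the settings actually took effect, and to get a feel for how round trips accumulate over the tunnel, something like the following works. The 10.0.0.1 address follows the WireGuard layout above; the user and password variable are placeholders:

# Verify the tuning from the AWS frontend, over the tunnel
mysql -h 10.0.0.1 -u appuser -p -e "SHOW VARIABLES LIKE 'innodb_%';" | grep -E 'buffer_pool_size|flush_neighbors|io_capacity'

# Feel the accumulated round trips: 100 trivial queries, each on a fresh connection
# (this also counts connection setup, so treat it as an upper bound)
time for i in $(seq 1 100); do
  mysql -h 10.0.0.1 -u appuser -p"$APP_DB_PASS" -e "SELECT 1;" >/dev/null
done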

Data Sovereignty and The "Norgesskyen" Concept

We need to talk about risk. While the EU-US Privacy Shield is technically valid today, the scrutiny is intense. Many Norwegian organizations are adopting a "Norgesskyen" (Norwegian Cloud) approach.

By keeping your database on CoolVDS in Oslo, you ensure:

  1. Legal buffer: Your customer data physically resides in Norway.
  2. Performance consistency: You are not subject to AWS burst-credit systems (CPU credits on T3 instances, I/O burst credits on gp2 EBS volumes). You get dedicated throughput.

Testing the Link

Before you commit to this architecture, benchmark it. Do not guess. Install iperf3 on both ends (the AWS instance and the CoolVDS instance).

# On CoolVDS (Server mode)
iperf3 -s

# On AWS (Client mode)
iperf3 -c <coolvds_ip>   # or 10.0.0.1 to measure throughput through the WireGuard tunnel

If you aren't seeing near-line speed minus the encryption overhead, check your routing tables. But generally, the connection from AWS eu-central-1 to Oslo is robust enough for web-tier communication, provided the heavy lifting stays local.
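
If the numbers look off, these are the first things worth checking (wg and ip are standard tools on any modern distro):

# Is the tunnel passing traffic? Look for a recent handshake and growing counters
wg show wg0

# Is traffic to the backend actually routed through the tunnel?
ip route get 10.0.0.1

# Probe for MTU-related fragmentation (ties back to the Pro Tip above)
ping -c 3 -M do -s 1332 10.0.0.1   # 1332 bytes of payload + 28 bytes of headers = 1360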

Conclusion

Multi-cloud isn't about complexity for the sake of it. It's about arbitrage. You are arbitraging the cheap, elastic compute of the giants against the reliable, compliant, and low-latency storage of local experts like CoolVDS.

The tools are finally ready. Terraform 0.12 cleaned up the HCL syntax, and WireGuard solved the connectivity headache. There is no excuse for slow page loads in Norway anymore.

Next Step: Don't just take my word for the I/O benchmarks. Spin up a KVM instance on CoolVDS today, run a fio test against your current cloud provider, and look at the latency numbers yourself. Your database will thank you.
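
A reasonable fio starting point for that comparison is sketched below; run the same command on both providers. The job parameters are just a common 4K random-read profile, not a universal benchmark:

# 4K random reads, 60 seconds, direct I/O to bypass the page cache
fio --name=randread --filename=/tmp/fiotest --size=2G \
    --rw=randread --bs=4k --ioengine=libaio --iodepth=32 \
    --direct=1 --runtime=60 --time_based --group_reporting

# Compare the latency percentiles, not just IOPS; local NVMe versus network-backed
# block storage tends to show up most clearly at p99.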