Multi-Cloud Architecture Guide: Escaping Vendor Lock-in While Keeping Data in Norway

Let’s be honest: putting all your eggs in one hyperscaler's basket is a strategic error. I recall a meeting last November with a CTO in Oslo whose entire e-commerce platform evaporated for six hours because a single availability zone in Frankfurt had a networking cascade. His "99.99% SLA" refund amounted to $43.50. The lost revenue? Nearly 400,000 NOK.

This is the reality of centralized cloud dependency. As we kick off 2020, the conversation has shifted from "moving to the cloud" to "surviving the cloud." For Norwegian businesses, the challenge is twofold: maintaining strict data sovereignty under GDPR while achieving the redundancy that only a multi-provider strategy can offer. This guide details how to architect a hybrid setup that leverages the raw performance of local infrastructure alongside the scale of global providers.

The "Core & Burst" Strategy

The most robust pattern I've deployed for clients in the Nordics is the "Core & Burst" model. Instead of mirroring your entire stack across AWS, Azure, and Google (which is prohibitively expensive and technically complex), you segment your architecture based on data gravity and latency sensitivity.

  • The Core (Norwegian VPS): Your database, stateful applications, and customer PII reside here. Keeping this data within Norwegian borders simplifies GDPR compliance and any scrutiny from Datatilsynet (the Norwegian Data Protection Authority).
  • The Burst (Global Cloud): Stateless frontend nodes, CDNs, and heavy compute jobs (like video transcoding) run here, closer to global users if you have an international audience.

Why Latency Matters: The Oslo Context

Physics is stubborn. If your primary customer base is in Norway, hosting your database in Ireland or Frankfurt introduces a round-trip time (RTT) of 30-45ms. Hosting in Oslo via a provider peering at NIX (Norwegian Internet Exchange) drops that to under 5ms.

Pro Tip: For high-transaction databases (Magento, WooCommerce, financial apps), every millisecond of query latency is a millisecond a PHP worker or database thread sits blocked. Moving your DB from Frankfurt to a CoolVDS NVMe instance in Oslo can effectively double your throughput without changing a line of code.
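
Don't take my word for it; measure it before you migrate. From a shell on your application server (the hostnames below are placeholders for your own endpoints):

# Round-trip time to a candidate DB host in Frankfurt vs. Oslo
ping -c 20 db.frankfurt.example.com   # expect ~30-45 ms from Oslo
ping -c 20 db.oslo.example.com        # expect < 5 ms via NIX peering

# mtr adds per-hop latency and packet loss to the picture
mtr --report --report-cycles 50 db.oslo.example.com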

Technical Implementation: Orchestrating with Terraform

Managing multiple providers manually is a recipe for drift. We use Terraform (version 0.12+) to abstract the underlying API differences. Below is a simplified structure of how we define a "multi-cloud" environment where CoolVDS holds the state and a secondary provider handles stateless scaling.

First, we define our backend configuration to ensure our state file is locked and shared safely:

# main.tf
terraform {
  required_version = ">= 0.12"

  backend "s3" {
    bucket         = "company-terraform-state"
    key            = "prod/infrastructure.tfstate"
    region         = "eu-central-1"
    encrypt        = true              # encrypt state at rest
    dynamodb_table = "terraform-locks" # S3 alone does not lock; DynamoDB provides it
  }
}

Next, we provision the Core Database node on CoolVDS using a generic OpenStack or KVM-compatible provider definition (since CoolVDS runs on standard KVM, it plays nicely with standard tools):

resource "openstack_compute_instance_v2" "core_db" {
  name            = "norway-db-master-01"
  image_name      = "Ubuntu 18.04"
  flavor_name     = "vds-nvme-32gb" # High RAM for buffer pool
  key_pair        = "deploy-key-2020"
  security_groups = ["db-secure"]

  network {
    name = "private-net"
  }

  user_data = file("scripts/init-mariadb.sh")
}
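
The burst side then lives behind a second provider block in the same workspace. Below is a hypothetical sketch assuming AWS as the secondary cloud; the AMI ID, instance type, and init-frontend.sh script are placeholders, and in production you would wrap this in an autoscaling group rather than a fixed count:

provider "aws" {
  region = "eu-central-1"
}

resource "aws_instance" "burst_frontend" {
  count         = 3                       # stateless, safe to scale up or down
  ami           = "ami-0123456789abcdef0" # placeholder: Ubuntu 18.04 LTS AMI
  instance_type = "t3.medium"
  user_data     = file("scripts/init-frontend.sh")

  tags = {
    Name = "burst-frontend-${count.index}"
    Role = "stateless-edge"
  }
}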

Data Consistency: Master-Slave Across Providers

The hardest part of multi-cloud is the data layer. You cannot easily write to two clouds simultaneously without dealing with CAP theorem nightmares. The pragmatic solution for 2020 is an Active-Passive setup with asynchronous replication.

We configure the Master DB on CoolVDS (benefiting from local NVMe I/O speeds) and stream binary logs to a Slave in a secondary cloud for disaster recovery.

Configuration: my.cnf

On the Master (CoolVDS), we ensure GTID (Global Transaction ID) is enabled for crash-safe replication. This is critical if the link between providers goes down temporarily.

[mysqld]
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
binlog_format = ROW
expire_logs_days = 7
max_binlog_size = 100M

# GTID for robust replication
gtid_mode = ON
enforce_gtid_consistency = ON

# Performance Tuning for NVMe
innodb_flush_method = O_DIRECT
innodb_io_capacity = 2000 # CoolVDS NVMe handles this easily
innodb_buffer_pool_size = 24G # 75% of RAM
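
The master also needs a dedicated replication account before the slave can connect. A minimal grant, run once on the CoolVDS node; the host pattern below assumes the slave reaches us over a private overlay network (covered in the networking section), so tighten it to your own addressing:

GRANT REPLICATION SLAVE ON *.*
  TO 'repl_user'@'10.10.0.%'
  IDENTIFIED BY 'SecurePassword!2020'
  REQUIRE SSL;
FLUSH PRIVILEGES;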

On the Slave node, we configure the connection. Note that we use SSL; never replicate database traffic over the public internet without encryption.

CHANGE MASTER TO 
  MASTER_HOST='185.x.x.x', 
  MASTER_USER='repl_user', 
  MASTER_PASSWORD='SecurePassword!2020', 
  MASTER_SSL=1, 
  MASTER_AUTO_POSITION=1;
START SLAVE;
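
Once started, verify that the slave is actually streaming before you rely on it for disaster recovery:

SHOW SLAVE STATUS\G
-- Slave_IO_Running and Slave_SQL_Running must both read "Yes",
-- and Seconds_Behind_Master should settle near 0 under normal load.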

Networking: The WireGuard Revolution

Traditionally, connecting two clouds required clunky IPsec tunnels or OpenVPN, which are heavy on CPU and tricky to configure. While WireGuard is still technically approaching its 1.0 release (slated for later this year), we have been running it in production on Ubuntu 18.04 using the PPA with incredible success. It offers lower latency and faster handshake times than IPsec.
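
Getting it onto 18.04 takes a minute; the PPA ships both the kernel module and the wg userspace tools:

sudo add-apt-repository ppa:wireguard/wireguard
sudo apt update
sudo apt install wireguard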

Here is a benchmark comparison we ran between Oslo (CoolVDS) and a provider in Amsterdam:

Protocol             Throughput   CPU Usage (1 core)   Avg RTT
OpenVPN (tun)        340 Mbps     85%                  22 ms
IPsec (strongSwan)   580 Mbps     40%                  19 ms
WireGuard            890 Mbps     15%                  18 ms

Using WireGuard allows us to create a private mesh network where the application servers in the cloud can talk to the database on CoolVDS securely, without the overhead bottlenecking high-traffic requests.
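
For reference, the tunnel itself is only a handful of lines. Here is a minimal /etc/wireguard/wg0.conf sketch for the CoolVDS side, assuming the same 10.10.0.0/24 overlay used in the replication grant above; the keys are placeholders you generate with wg genkey:

[Interface]
# Core node (CoolVDS, Oslo)
Address    = 10.10.0.1/24
ListenPort = 51820
PrivateKey = <core-node-private-key>

[Peer]
# Burst node in the secondary cloud
PublicKey  = <burst-node-public-key>
AllowedIPs = 10.10.0.2/32

Bring it up with wg-quick up wg0 on both ends. You can then point MASTER_HOST at the master's overlay address (10.10.0.1) so replication traffic never touches the public internet at all.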

The Compliance & Cost Argument

Beyond technology, there is the legal reality. With the uncertainty surrounding the Privacy Shield framework and the aggressive stance of Datatilsynet regarding data transfers to the US, keeping your encryption keys and database storage on Norwegian soil is the safest insurance policy.

Furthermore, cloud egress fees are a silent killer. Hyperscalers charge exorbitant rates for data leaving their network. By keeping your data heavy-lifting on CoolVDS (which offers generous bandwidth packages) and only pushing optimized content out to the edge, you significantly reduce TCO (Total Cost of Ownership).

Summary of the CoolVDS Advantage

We don't claim to replace AWS for Lambda functions or AI modeling. But for the foundational layer (the servers that must stay up, the disks that must deliver I/O instantly, and the data that must remain compliant), CoolVDS is the architectural anchor.

  • Hardware: We utilize enterprise NVMe drives, not standard SSDs. The difference in IOPS is noticeable immediately on database commits.
  • Virtualization: We use KVM. No "noisy neighbors" stealing your CPU cycles like in container-based VPS solutions.
  • Location: Physically located in Norway, subject to Norwegian law.

Don't wait for the next major outage or legal ruling to rethink your infrastructure. Diversity is strength.

Ready to secure your core infrastructure? Deploy a KVM-based, NVMe-powered instance on CoolVDS today and build a foundation that lasts.