Escaping Vendor Lock-in: A Pragmatic Multi-Cloud Architecture for 2020

There is a dangerous misconception circulating in boardroom meetings across Oslo right now: that migrating to the cloud simply means moving everything to AWS or Azure and calling it a day. As a CTO, I see this as replacing one form of technical debt with another—Vendor Lock-in.

If 2019 taught us anything, it is that even the hyperscalers bleed. We saw significant outages in major US-EAST regions that took down half the internet. Furthermore, with the US CLOUD Act looming over European businesses, relying solely on US-owned infrastructure is becoming a compliance nightmare for those of us answering to Datatilsynet.

The solution isn't to abandon the cloud. It is to adopt a Multi-Cloud Strategy that leverages the elasticity of hyperscalers while grounding your critical data in sovereign, high-performance infrastructure like CoolVDS. Here is how we architect for resilience, cost-efficiency, and compliance in May 2020.

The Compliance Trap: GDPR vs. The CLOUD Act

Before we touch a single line of code, we must address the legal reality. While the Privacy Shield framework is currently active, it is under immense scrutiny (specifically the Schrems II case pending judgment). The US CLOUD Act allows US federal law enforcement to compel US-based technology companies to provide requested data, regardless of whether that data is stored on servers in Germany, Norway, or the US.

For a Norwegian entity handling sensitive customer data, the safest architectural pattern is Data Sovereignty. Keep your stateless application logic in the auto-scaling groups of AWS/GCP, but keep your state (databases, customer records) on a provider owned and operated under Norwegian or strict European jurisdiction.

The Architecture: Hybrid Mesh

We are building a hybrid topology. We will use Terraform (v0.12) to orchestrate resources across two providers:

  1. Provider A (Hyperscaler): Frontend load balancers and stateless Kubernetes nodes.
  2. Provider B (CoolVDS - Norway): Primary Database (PostgreSQL) and Redis cache.

Why this split? IOPS cost. To get 20,000 IOPS on an AWS EBS volume, you pay a premium that often exceeds the cost of the compute instance itself. On CoolVDS, high-performance NVMe storage is standard. We get raw metal performance for the database without the "cloud tax."

Infrastructure as Code with Terraform 0.12

Managing two providers by hand is a recipe for drift, so we use Terraform. Below is a stripped-down main.tf for the hyperscaler half of the topology, written in the first-class expression syntax introduced in v0.12. CoolVDS has no native provider in this scenario, so we treat the node as an externally managed peer and reference it by its static IP.

variable "region_aws" {
  default = "eu-north-1" # Stockholm is lowest latency to Oslo
}

provider "aws" {
  region = var.region_aws
}

# We use a generic provider or custom module for the VPS component
# In this scenario, we treat CoolVDS as a remote backend resource
resource "aws_instance" "frontend_node" {
  ami           = "ami-0abcdef1234567890"
  instance_type = "t3.medium"
  tags = {
    Name = "Stateless-App-Node"
  }
}

output "frontend_ip" {
  value = aws_instance.frontend_node.public_ip
}
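
As a rough sketch of the workflow (assuming the snippet above sits in its own directory and your AWS credentials are already exported), the plan/apply cycle looks like this; the output address is the node you will later configure as the WireGuard client:

# Initialise, review and apply, then read back the public IP of the frontend node
terraform init
terraform plan -out=multicloud.plan
terraform apply multicloud.plan
terraform output frontend_ip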

Secure Interconnect: Enter WireGuard

Historically, connecting these clouds required heavy IPsec tunnels or sluggish OpenVPN configurations. However, with the release of Linux Kernel 5.6 just two months ago (March 2020), WireGuard is now in the mainline kernel. This is massive.

WireGuard offers lower latency and faster handshake times than IPsec, which is critical when your app server is in Stockholm (AWS) and your database is in Oslo (CoolVDS). The round-trip time (RTT) between these locations is typically 10-12ms via high-quality transit. A bloated VPN protocol can double that. WireGuard keeps it lean.
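
Don't take those RTT figures on faith: a couple of one-liners run from the hyperscaler node will tell you what your actual path to the CoolVDS box looks like (mtr is an assumption here and needs installing separately; the address is the same placeholder static IP used in the peer configuration further down):

# Baseline latency and per-hop loss from the AWS node to the CoolVDS node
ping -c 20 185.x.x.x
mtr --report --report-cycles 50 185.x.x.x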

Here is a production-ready setup for Ubuntu 20.04 (Focal Fossa), which ships the WireGuard tools in its standard repositories.

1. The CoolVDS Node (Database Server)

This acts as the "server" peer due to its static IP.

# Install WireGuard
sudo apt update && sudo apt install wireguard -y

# Generate keys (the umask keeps the private key readable by root only)
umask 077
wg genkey | tee privatekey | wg pubkey > publickey

# Create config /etc/wireguard/wg0.conf
[Interface]
Address = 10.0.0.1/24
SaveConfig = true
ListenPort = 51820
PrivateKey = <contents of the privatekey file generated above>

[Peer]
PublicKey = <public key generated on the frontend node>
AllowedIPs = 10.0.0.2/32

2. The Cloud Node (Frontend)

# Install WireGuard and generate a key pair on this node too, then create /etc/wireguard/wg0.conf
[Interface]
Address = 10.0.0.2/24
PrivateKey = <contents of the privatekey file generated on this node>

[Peer]
PublicKey = <public key generated on the CoolVDS node>
Endpoint = 185.x.x.x:51820 # CoolVDS Static IP
AllowedIPs = 10.0.0.0/24
PersistentKeepalive = 25

Once the interface is up (wg-quick up wg0), your cloud frontend can query the database at 10.0.0.1 securely. The encryption overhead is negligible on modern CPUs.
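
Expanding that one-liner into a minimal bring-up and verification sketch for both peers (the final check assumes PostgreSQL listens on the default port 5432 and is firewalled to the tunnel subnet only):

# On each peer: bring the tunnel up now and on every boot
sudo wg-quick up wg0
sudo systemctl enable wg-quick@wg0

# Confirm the handshake completed and measure latency over the tunnel
sudo wg show wg0
ping -c 5 10.0.0.1

# From the frontend: confirm the database answers on its private address
nc -zv 10.0.0.1 5432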

Performance Verification: Don't Trust, Verify

We recently migrated a client who complained about "slow queries" on their previous RDS setup. The issue wasn't the queries; it was I/O throttling. We moved their dataset to a CoolVDS instance with local NVMe.

We ran fio to benchmark random read performance, which correlates heavily with transactional database loads.

fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test \
  --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randread

Metric                      | Standard Cloud Block Storage (GP2) | CoolVDS NVMe Local
----------------------------|------------------------------------|-------------------
IOPS                        | ~3,000 (capped)                    | ~85,000+
Latency (95th percentile)   | 2.4 ms                             | 0.08 ms

The difference is more than an order of magnitude on both counts. For a Magento or WooCommerce site, this translates directly into a faster Time to First Byte (TTFB), and low latency to NIX (the Norwegian Internet Exchange) means your local customers see the content near-instantly.
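
To put a number on TTFB for your own site, curl will report it directly; run it a handful of times from a Norwegian vantage point and average the results (the URL is a placeholder):

# DNS lookup, TCP connect and time-to-first-byte for a single request
curl -o /dev/null -s -w 'dns: %{time_namelookup}s  connect: %{time_connect}s  ttfb: %{time_starttransfer}s\n' \
  https://shop.example.no/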

Pro Tip: When configuring MySQL 8.0 on a dedicated NVMe VPS, ensure you adjust innodb_io_capacity to at least 10000. The default values assume spinning rust and will throttle your high-speed storage unnecessarily.
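
A minimal sketch of those overrides on Ubuntu 20.04's MySQL 8.0 packaging (the values are starting points to benchmark against, not gospel, and the drop-in path assumes the stock /etc/mysql/mysql.conf.d/ layout):

# Drop in NVMe-friendly InnoDB settings, then restart MySQL
cat <<'EOF' | sudo tee /etc/mysql/mysql.conf.d/nvme-tuning.cnf
[mysqld]
innodb_io_capacity     = 10000
innodb_io_capacity_max = 20000
innodb_flush_neighbors = 0   # neighbour flushing only pays off on spinning disks
EOF
sudo systemctl restart mysql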

The Total Cost of Ownership (TCO)

Let's talk numbers. Egress fees (data transfer out) from major cloud providers are a hidden killer. If you host high-bandwidth assets (images, backups) on a hyperscaler, you pay per gigabyte transferred.

CoolVDS offers generous bandwidth allocations. By using the CoolVDS instance as an origin server for your static assets or backup repositories, you bypass the egress trap. You are renting the pipe, not paying a toll for every car that drives on it.
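
A low-effort way to act on this is to serve the heavy, cache-friendly content straight from the CoolVDS node. A minimal nginx origin might look like the sketch below (hostname and paths are illustrative, and it assumes Ubuntu's standard sites-available layout); put a CDN in front if you like, but the gigabytes now leave a flat-rate pipe rather than a metered one:

# Static asset origin on the CoolVDS node
cat <<'EOF' | sudo tee /etc/nginx/sites-available/assets
server {
    listen 80;
    server_name assets.example.no;   # illustrative hostname

    root /srv/static;                # images, downloads, backup archives
    expires 30d;                     # let browsers and any CDN cache aggressively
}
EOF
sudo ln -s /etc/nginx/sites-available/assets /etc/nginx/sites-enabled/assets
sudo nginx -t && sudo systemctl reload nginx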

Conclusion

A multi-cloud strategy in 2020 isn't about complexity; it's about leverage. Use the giants for what they are good at—global reach and instant scaling. Use CoolVDS for what we are good at—unmatched I/O performance, data sovereignty in Norway, and predictable pricing.

Don't let your infrastructure be a black box. Take control of your latency and your data.

Ready to secure your data sovereignty? Deploy a CoolVDS NVMe instance in Oslo today and start building your hybrid mesh.