Escaping the Vendor Trap: A Pragmatic CTO’s Guide to Multi-Cloud Architecture in 2019

The Cloud Is Not a Destination, It’s a Capability

We need to talk about the Google Cloud outage from June. If you were managing infrastructure last month, you likely felt the shockwaves when a massive network congestion event in the US took down services like Shopify, Snapchat, and Discord for over four hours. It was a wake-up call for every CTO in Europe: if your entire business logic resides in a single vendor's ecosystem, you do not have a disaster recovery plan. You have a hope and a prayer.

As a CTO operating in the Nordic market, I see too many teams mistaking "going to the cloud" for "going to AWS/Azure" and staying there. That is not a strategy; that is capitulation to vendor lock-in. The smart play in 2019 is Multi-Cloud—specifically, a Hybrid model. By leveraging the raw I/O performance of specialized providers like CoolVDS for your data gravity layer and using hyperscalers for compute bursting, you gain resilience, lower your Total Cost of Ownership (TCO), and navigate the minefield of GDPR much more effectively.

The "Data Gravity" Problem

The biggest lie in the industry is that "storage is cheap." Storage space is cheap. Storage performance (IOPS) and data movement (egress) are expensive. If you host a write-heavy database on a standard public cloud instance, you are often paying a premium for provisioned IOPS that still struggle with latency spikes.
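Don't take the provider's IOPS numbers on faith; measure them. A quick sanity check with `fio` (a sketch, assuming `fio` is installed and `/var/lib/postgresql` is the mount backing your database — adjust the path for your setup):

```shell
# 4k random-write benchmark against the database volume.
# Direct I/O bypasses the page cache so you see the disk, not RAM.
fio --name=rand-write --directory=/var/lib/postgresql \
    --rw=randwrite --bs=4k --size=1G --numjobs=4 \
    --iodepth=32 --ioengine=libaio --direct=1 \
    --runtime=60 --time_based --group_reporting
```

Run the same job on both your current cloud volume and an NVMe-backed VPS; the gap in sustained IOPS and 99th-percentile latency is usually the whole argument.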

In a hybrid setup, we place the database on bare-metal or high-performance KVM instances (like CoolVDS NVMe plans) located physically closer to the customer base—here in Norway. This ensures three things:

  1. Latency: You are hitting the Norwegian Internet Exchange (NIX) directly. Ping times from Oslo to Frankfurt (AWS) are decent (~25ms), but Oslo to Oslo is instant (<2ms).
  2. Cost: You aren't paying $0.09/GB for egress traffic every time a user downloads a file.
  3. Compliance: Data rests on servers under Norwegian jurisdiction, satisfying strict interpretations of Datatilsynet regulations.

Architecture: The Hybrid Control Plane

How do we orchestrate this? In 2019, Terraform 0.12 is the standard. We stop scripting ad-hoc API calls and start defining infrastructure as code (IaC). Below is a simplified strategy: we use a local high-performance node for our primary database and a cloud provider for auto-scaling stateless web workers.

1. Infrastructure as Code (Terraform)

This snippet demonstrates how you might define resources across two providers in a single `main.tf`. Note the use of the new HCL2 syntax introduced in version 0.12.

variable "coolvds_token" {}

variable "aws_region" {
  default = "eu-central-1"
}

# Local High-Performance DB Node (CoolVDS)
provider "openstack" {
  user_name   = "admin"
  tenant_name = "production"
  password    = var.coolvds_token
  auth_url    = "https://api.coolvds.com/v3"
}

resource "openstack_compute_instance_v2" "db_primary" {
  name            = "pg-master-oslo-01"
  image_name      = "Ubuntu 18.04"
  flavor_name     = "nvme.4cpu.16gb"
  key_pair        = "deployer-key"
  security_groups = ["db-secure"]

  network {
    name = "private-net"
  }
}

# Burst Compute Nodes (AWS)
provider "aws" {
  region = var.aws_region
}

resource "aws_instance" "web_worker" {
  count         = 3
  ami           = "ami-0abc123456789" # Ubuntu 18.04
  instance_type = "t3.medium"

  tags = {
    Name = "web-stateless-${count.index}"
  }
}
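With both providers declared in one configuration, the standard Terraform 0.12 workflow drives them together. A minimal sketch of the day-to-day loop:

```shell
# Download the provider plugins declared in main.tf.
terraform init
# Preview the diff across both clouds, then apply the saved plan.
terraform plan -out=hybrid.tfplan
terraform apply hybrid.tfplan
# Tear down only the AWS burst nodes after a traffic spike,
# leaving the Oslo database untouched:
terraform destroy -target=aws_instance.web_worker
```

The `-target` flag is what makes the "burst" pattern cheap in practice: the stateless tier comes and goes while the stateful core never moves.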

Networking: The Glue That Holds It Together

Running a database in Oslo and web workers in Frankfurt requires a rock-solid secure tunnel. In 2019, while WireGuard is making waves, the enterprise standard remains IPsec or OpenVPN for production stability. Latency becomes your enemy here.
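If you do want to trial WireGuard for the Oslo-to-Frankfurt link, the configuration surface is tiny compared to IPsec. A sketch of the Oslo side's `wg0.conf` — all keys, addresses, and the endpoint below are placeholders, not real values:

```ini
# /etc/wireguard/wg0.conf on the Oslo database node (sketch).
# Generate real keys with: wg genkey | tee privatekey | wg pubkey > publickey
[Interface]
Address    = 10.20.0.1/24
PrivateKey = <oslo-private-key>
ListenPort = 51820

[Peer]
# Frankfurt web worker
PublicKey           = <frankfurt-public-key>
AllowedIPs          = 10.20.0.2/32
Endpoint            = <frankfurt-public-ip>:51820
PersistentKeepalive = 25
```

Bring it up with `wg-quick up wg0`. For production traffic in 2019, though, stick with the battle-tested IPsec/OpenVPN stack until WireGuard lands in mainline kernels.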

Here is a real-world look at latency. I ran an `mtr` (My Traceroute) report from a CoolVDS instance in Oslo to an AWS instance in Frankfurt.

$ mtr --report --cycles 10 ec2-3-120-xx-xx.eu-central-1.compute.amazonaws.com
HOST: coolvds-oslo-node-04      Loss%   Snt   Last   Avg  Best  Wrst StDev
  1.|-- gw.coolvds.local         0.0%    10    0.3   0.4   0.3   0.6   0.1
  2.|-- nix-peering.vlan20.no    0.0%    10    1.2   1.4   1.1   2.8   0.5
  3.|-- ae1.oslo1.no.tdc.net     0.0%    10    1.8   1.9   1.8   2.1   0.1
  4.|-- hamburg-gw.amazon.com    0.0%    10   18.4  18.6  18.2  19.1   0.3
  5.|-- frankfurt-aws-edge       0.0%    10   24.1  24.3  24.0  25.2   0.4

24ms is acceptable for web-to-db communication if you optimize your queries. However, for real-time applications, you want your application servers sitting right next to your database.

Pro Tip: Use HAProxy as an ingress gatekeeper. It allows you to route traffic based on health checks. If your local CoolVDS web tier (primary) is overwhelmed, you spill over to the cloud.

2. Intelligent Load Balancing Config

This `haproxy.cfg` keeps traffic on the local CoolVDS nodes and marks the AWS instances as `backup` servers, so HAProxy only routes to the cloud once every local node fails its health check. The weights balance traffic within each tier.

global
    log /dev/log local0
    maxconn 2000
    user  haproxy
    group haproxy

defaults
    log     global
    mode    http
    option  httplog
    timeout connect 5000ms
    timeout client  50000ms
    timeout server  50000ms

backend web_nodes
    balance roundrobin
    option httpchk GET /health
    # Primary: Local CoolVDS Nodes (Low Latency, No Egress Fees)
    server local_web_01 10.10.0.5:80 check weight 100
    server local_web_02 10.10.0.6:80 check weight 100
    # Backup: AWS Instances (Higher Latency, Costly Egress)
    server cloud_web_01 172.16.0.4:80 check weight 10 backup
    server cloud_web_02 172.16.0.5:80 check weight 10 backup
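A broken load balancer config takes down both tiers at once, so never reload blind. Validate first (path assumed to be the Ubuntu 18.04 default):

```shell
# Syntax-check the config without touching the running process.
haproxy -c -f /etc/haproxy/haproxy.cfg
# If the check passes, reload gracefully under systemd:
sudo systemctl reload haproxy
```

The `-c` flag only parses and validates; it exits non-zero on any error, which also makes it a cheap CI gate for config changes.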

The GDPR Elephant in the Room

We cannot ignore the legal landscape. The EU-US Privacy Shield is currently in effect, but it is under constant scrutiny by privacy advocates. Placing your primary user database (PII) on US-owned infrastructure—even if the region is "eu-central-1"—carries a theoretical risk regarding the CLOUD Act.

By hosting your core database on CoolVDS in Norway, you are adding a layer of sovereignty. You can encrypt the data at rest and only send anonymized or transient data to the public cloud for processing. This architectural decision satisfies the "Pragmatic CTO" requirement: maximize performance while minimizing legal exposure.
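One straightforward way to get encryption at rest on a KVM instance is LUKS on the data volume. A sketch, assuming `/dev/vdb` is a blank attached NVMe disk — `luksFormat` destroys whatever is on it:

```shell
# Encrypt the raw disk, open it as a mapped device, and mount it
# where PostgreSQL keeps its data.
sudo cryptsetup luksFormat /dev/vdb
sudo cryptsetup open /dev/vdb pgdata
sudo mkfs.ext4 /dev/mapper/pgdata
sudo mount /dev/mapper/pgdata /var/lib/postgresql
```

Combined with TLS on the WAN tunnel, this means PII is encrypted both at rest in Oslo and in transit to the stateless workers.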

Comparison: Hyperscaler vs. CoolVDS

Feature           Major Public Cloud (AWS/GCP)   CoolVDS (High-Perf VPS)
Storage I/O       Throttled (pay for IOPS)       Unmetered NVMe
Bandwidth Cost    High ($0.09/GB+)               Included / low cost
Latency to Oslo   ~20-30ms                       <2ms
Support Tier      Paid premium support           Direct admin access

Conclusion: Own Your Core

Multi-cloud in 2019 isn't about complexity; it's about leverage. Use the giants for what they are good at: infinite, elastic compute capacity for temporary spikes. Use specialized providers like CoolVDS for what we are good at: massive I/O performance, data sovereignty, and rock-solid stability in the Nordics.

Do not let your infrastructure budget be eaten alive by egress fees or your uptime be dictated by a fiber cut in Frankfurt. Diversify your stack.

Is your current setup compliant and performant? Spin up a CoolVDS NVMe instance today and benchmark your database performance against your current cloud provider. The results usually speak for themselves.
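A simple way to run that head-to-head is `pgbench`, which ships with PostgreSQL. A sketch, assuming a throwaway database named `benchdb` on each host:

```shell
# Initialize a test dataset (scale 100, roughly 1.5 GB), then run
# 4 clients across 2 worker threads for 60 seconds.
pgbench -i -s 100 benchdb
pgbench -c 4 -j 2 -T 60 benchdb
```

Compare the reported TPS and average latency between your current provider and the NVMe instance, and let the numbers make the case.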