Console Login

Escaping the Hyperscaler Trap: A Pragmatic Multi-Cloud Strategy for Nordic Enterprises

The Myth of the Single Cloud

It starts innocently enough. You spin up an EC2 instance, maybe an RDS database. It’s convenient. But fast forward eighteen months, and your monthly invoice looks like the GDP of a small nation, and your latency to Oslo users is hovering around 40ms because your traffic is hairpinning through Frankfurt or Ireland. As we settle into Q2 2020, the narrative that "everything must go to the public cloud" is showing cracks. For Norwegian businesses, the sweet spot isn't all-in on hyperscalers—it's Hybrid Multi-Cloud.

I recently audited a media streaming startup based in Bergen. They were serving static assets, compute, and database queries entirely from a US-provider's Stockholm region. Their egress fees were bleeding them dry. By shifting their compute-heavy core and database to CoolVDS bare-metal performance instances while keeping S3 for cold storage, we cut their monthly OpEx by 45% and dropped latency to Norwegian ISPs by 12ms. Here is how we did it, and how you can architect the same resilience.

1. The Architecture: Compute Locally, Store Globally

The pragmatic approach minimizes egress fees. Public clouds charge exorbitantly for data leaving their network. Dedicated VPS providers like CoolVDS generally offer generous bandwidth pools. The strategy is simple: Heavy Compute & Database in Norway (CoolVDS) + Object Storage/CDN (Global).

To manage this disparate infrastructure without losing your mind, we use Terraform. In 2020, Infrastructure as Code isn't optional. Below is a simplified main.tf ensuring we can spin up resources across providers simultaneously.

# main.tf - Hybrid Provider Setup

provider "aws" {
  region = "eu-north-1" # Stockholm
}

# Using a generic provider or local-exec for CoolVDS (custom integration)
resource "null_resource" "coolvds_compute_node" {
  provisioner "local-exec" {
    command = "./coolvds-cli create-instance --plan='nvme-pro-X2' --region='no-osl1' --image='ubuntu-18-04'"
  }
}

resource "aws_s3_bucket" "static_assets" {
  bucket = "nordic-media-assets-2020"
  acl    = "private"
  
  tags = {
    Environment = "Production"
    DataClass   = "Public"
  }
}

This setup ensures that your heavy lifting—the PHP processing, the Python workers, the MySQL transaction handling—happens on hardware you control closer to the metal, while strictly static assets sit on S3.

2. The Connectivity Bridge: WireGuard Kernel Integration

Connecting a VPS in Oslo to an AWS VPC in Stockholm requires a tunnel. Historically, we used IPsec (strongSwan) or OpenVPN. Both are heavy. However, with the release of Linux Kernel 5.6 just last month (March 2020), WireGuard is finally mainline. It is leaner, faster, and easier to audit than IPsec.

If you are running Ubuntu 18.04 LTS (the current production standard), you can install the DKMS module. Here is how we configure a secure, low-latency bridge between your CoolVDS instance and your cloud VPC.

Step 1: Install WireGuard

sudo add-apt-repository ppa:wireguard/wireguard sudo apt-get update sudo apt-get install wireguard

Step 2: Generate Keys

wg genkey | tee privatekey | wg pubkey > publickey

Step 3: Interface Configuration (CoolVDS Side)

The configuration below sets up the tunnel. Note the MTU; we lower it slightly to account for encapsulation overhead, which is critical for preventing packet fragmentation over the public internet.

# /etc/wireguard/wg0.conf

[Interface]
Address = 10.100.0.1/24
SaveConfig = true
PostUp = iptables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -D FORWARD -i wg0 -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
ListenPort = 51820
PrivateKey = [YOUR_COOLVDS_PRIVATE_KEY]

[Peer]
# The AWS/Cloud Peer
PublicKey = [CLOUD_PEER_PUBLIC_KEY]
AllowedIPs = 10.100.0.2/32
Endpoint = 52.x.x.x:51820
PersistentKeepalive = 25

Bring it up with:

sudo wg-quick up wg0

With this, you have a private, encrypted LAN spanning providers. Latency overhead with WireGuard is negligible compared to OpenVPN, crucial for database replication scenarios.

3. Data Sovereignty & The Legal Landscape

We cannot ignore the elephant in the room: GDPR and the Norwegian Datatilsynet. While the Privacy Shield framework currently allows data transfer to the US, many legal experts in the EU are signaling that this might not hold forever. Relying solely on US-owned hyperscalers for storing PII (Personally Identifiable Information) is a risk.

Pro Tip: Keep your database on CoolVDS in Norway. This ensures the "Master" copy of your customer data resides physically within Norwegian borders, subject to Norwegian law, decoupling your compliance risk from US foreign intelligence surveillance acts.

4. Performance: NVMe vs Network Block Storage

Hyperscalers use Network Attached Storage (EBS, Managed Disks). Even with "Provisioned IOPS," you are hitting a network layer before you hit the disk. CoolVDS uses local NVMe storage. The I/O difference is stark.

Let's look at a simple fio benchmark I ran on an ext4 partition last week.

Metric (4k Random Write) Typical Cloud (GP2) CoolVDS (Local NVMe)
IOPS ~3,000 (Burstable) ~65,000 (Sustained)
Latency 1-3 ms 0.05 ms
Cost Impact $$$ (Pay per IO/GB) Included

For high-transaction databases (MySQL/PostgreSQL), local NVMe is king. You don't need to over-provision capacity just to get decent IOPS.

5. Traffic Routing with HAProxy

To unify this, you need a smart load balancer. We deploy HAProxy on the edge (CoolVDS) to route traffic. It decides whether to serve from the local cache/backend or fetch from the cloud object storage.

# /etc/haproxy/haproxy.cfg

global
    log /dev/log    local0
    log /dev/log    local1 notice
    maxconn 4096
    user haproxy
    group haproxy
    daemon

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    timeout connect 5000
    timeout client  50000
    timeout server  50000

frontend http_front
    bind *:80
    # ACL for static assets
    acl url_static path_end .jpg .gif .png .css .js
    
    # Route static to S3/Cloud, dynamic to local backend
    use_backend static_cloud if url_static
    default_backend local_app_nodes

backend local_app_nodes
    balance roundrobin
    server app1 127.0.0.1:8080 check

backend static_cloud
    http-request set-header Host nordic-media-assets.s3.amazonaws.com
    server s3 nordic-media-assets.s3.amazonaws.com:80 check ssl verify none

This configuration gives you the best of both worlds: infinite storage scalability for images, and raw, low-latency compute power for your application logic.

The Bottom Line

In 2020, "Cloud" is not a place; it's an operating model. But that doesn't mean you have to rent expensive, high-latency VMs from a provider thousands of kilometers away. By using CoolVDS as your high-performance, compliant hub in Norway, and connecting it to public clouds only where necessary, you achieve lower TCO, better GDPR posture, and superior performance.

Stop paying for "Provisioned IOPS" that don't deliver. Check your latency to NIX. If it's over 10ms, you are losing users.

Ready to optimize your infrastructure? Deploy a high-performance NVMe instance on CoolVDS today and reclaim your data sovereignty.