Multi-Cloud is a Trap: The Pragmatic Hybrid Strategy for Norway (2019 Edition)

The "All-In" Cloud Mistake: A CTO's Perspective

It has become fashionable in 2019 to declare that your infrastructure is "cloud-agnostic" or fully multi-cloud. It sounds sophisticated in a boardroom slide deck. But let's look at the reality in the terminal. If you are running a business in Norway, blindly distributing your workload across AWS, Azure, and GCP adds a layer of complexity that usually results in two things: terrifying egress fees and latency jitter that ruins the user experience.

I have spent the last six months migrating a distressed fintech platform back from a purely distributed mesh. Why? Because the latency from an end-user in Trondheim to a database in eu-central-1 (Frankfurt) simply cannot compete with local peering. Physics does not negotiate.

The pragmatic strategy for 2020 isn't "multi-cloud" in the buzzword sense. It is Hybrid Core. Keep your state, your database, and your sensitive customer data on high-performance, predictable local infrastructure (like CoolVDS), and use the hyperscalers only for what they are actually good at: elastic compute bursting and CDN distribution.

The Latency & Compliance Reality Check

If your primary market is Norway, hosting your core application logic in Ireland or Frankfurt is a compromise you shouldn't be making. The Norwegian Internet Exchange (NIX) allows local traffic to stay local. When we route traffic through CoolVDS in Oslo, we see pings drop from 35ms+ (Frankfurt roundtrip) to under 5ms.
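Don't take my word for it; a few minutes with ping and mtr will confirm this from your own network. The hostnames below are placeholders for your own test endpoints:

# Run from a client in Norway; substitute your own hosts
ping -c 20 <your-oslo-endpoint>
mtr --report --report-cycles 20 <your-frankfurt-endpoint>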

Furthermore, we have to talk about Datatilsynet. With GDPR fully enforceable and the privacy landscape shifting (the Privacy Shield framework is under constant legal scrutiny), data sovereignty is no longer just a legal checklist item; it is an architectural requirement. Keeping PII (Personally Identifiable Information) on a server physically located in Norway simplifies your compliance posture immensely.

Architecture: The Hybrid Tunnel

The most robust setup I've deployed this year uses a local VDS as the "Command Center" and AWS eu-north-1 (Stockholm) for auto-scaling groups. We bridge them using a site-to-site IPsec VPN. This keeps costs fixed for 80% of the baseline load and only incurs variable cloud costs when traffic spikes.

Here is how we configure the local side using StrongSwan on a CoolVDS instance running CentOS 7. Reliability here is key; do not use flimsy userspace VPNs for infrastructure bridging.

# /etc/strongswan/ipsec.conf
config setup
    charondebug="ike 2, knl 2, cfg 2"
    uniqueids=yes

conn oslo-to-stockholm
    type=tunnel
    auto=start
    keyexchange=ikev2
    authby=secret
    
    # Local CoolVDS Node
    left=185.x.x.x
    leftsubnet=10.10.0.0/24
    leftid=185.x.x.x
    
    # Remote AWS VGW
    right=52.x.x.x
    rightsubnet=172.16.0.0/16
    rightid=52.x.x.x
    
    ike=aes256-sha256-modp2048!
    esp=aes256-sha256-modp2048!
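The authby=secret line points at a pre-shared key that lives in a separate file. A minimal sketch, with an obvious placeholder for the key itself:

# /etc/strongswan/ipsec.secrets
185.x.x.x 52.x.x.x : PSK "replace-with-a-long-random-string"

Restart the daemon and bring the tunnel up with strongswan up oslo-to-stockholm; if the SA refuses to establish, the charondebug output configured above will tell you why.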

Note the use of AES256. In 2019, anything less is negligence. Ensure your kernel supports hardware crypto offload (AES-NI), which comes standard on CoolVDS KVM instances; without it, encryption will tank your throughput and peg the CPU.
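Verifying the flag from inside the guest takes one line:

grep -q -m1 '\baes\b' /proc/cpuinfo && echo "AES-NI present" || echo "software crypto only"

If it reports software crypto only, benchmark the tunnel before you trust it with production traffic.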

Orchestration with Terraform 0.12

HashiCorp released Terraform 0.12 earlier this year, and the HCL2 syntax improvements are massive. We can now manage this hybrid estate much more cleanly. Below is a snippet showing how we pin the tooling and declare the burst nodes, keeping the cloud side fully declarative so it cannot drift.

We treat the local VDS as a static resource (Pet) and the cloud instances as ephemeral (Cattle). This distinction is vital.

# main.tf
terraform {
  required_version = ">= 0.12"
}

provider "aws" {
  region  = "eu-north-1"
  version = "~> 2.0"
}

resource "aws_instance" "burst_node" {
  ami           = "ami-0abcdef1234567890" # Amazon Linux 2
  instance_type = "t3.medium"
  
  tags = {
    Name        = "Burst-Worker"
    Environment = "Hybrid-Prod"
  }

  user_data = <<-EOF
              #!/bin/bash
              echo "Connecting to Core DB at 10.10.0.5..."
              # Logic to connect back to CoolVDS via VPN
              EOF
}
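The day-to-day workflow is unchanged in 0.12; plan to a file so that what you apply is exactly what you reviewed:

terraform init                    # fetches the pinned AWS provider
terraform plan -out=burst.tfplan
terraform apply burst.tfplan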

The I/O Bottleneck: Why Local NVMe Matters

The biggest lie in the public cloud ecosystem is "provisioned IOPS." To get consistent disk performance on AWS EBS or Azure Managed Disks, you pay a premium that often exceeds the cost of the compute instance itself. If you are running a database (PostgreSQL 11 or MariaDB 10.4), network-attached storage is your enemy.

Pro Tip: Always check your disk scheduler inside the VM. On virtualized NVMe, you usually want `none` or `noop` because the hypervisor handles the sorting. Running `cfq` on a high-performance VDS is just wasting CPU cycles.
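Checking and switching takes seconds (as root). The device name vda below is an assumption; run lsblk to find yours:

cat /sys/block/vda/queue/scheduler           # active scheduler is shown in [brackets]
echo noop > /sys/block/vda/queue/scheduler   # use 'none' on blk-mq kernels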

On CoolVDS, the storage is local NVMe RAID. There is no network hop to a SAN. The difference in database transaction time is startling.

Benchmarking Disk Latency (Fio)

I ran a random write test (4k block size) on a standard cloud "General Purpose SSD" versus a CoolVDS NVMe instance. The command used:

fio --name=random-write --ioengine=libaio --rw=randwrite --bs=4k --numjobs=4 --size=4G --runtime=60 --time_based --direct=1 --group_reporting
Metric              Hyperscaler (GP2)       CoolVDS (Local NVMe)
IOPS                ~3,000 (Capped)         ~45,000+
Latency (99th %)    12ms                    0.8ms
Cost/Month          $0.10/GB + IOPS fees    Included

Monitoring the Hybrid Mesh

When you span environments, observability becomes difficult. Prometheus is the standard here. We run the Prometheus server on the CoolVDS instance (for data persistence) and use the node_exporter on the cloud instances.

However, be careful with the firewall. Do not expose port 9100 to the public internet. Bind it strictly to the VPN interface.

# /etc/systemd/system/node_exporter.service
[Service]
# Bind to this host's VPN-side address only (10.10.0.5 on the CoolVDS node;
# on the cloud instances, use their 172.16.x.x tunnel-side address instead).
ExecStart=/usr/local/bin/node_exporter --web.listen-address="10.10.0.5:9100"
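On the Prometheus side, the scrape targets are simply the VPN-side addresses of your nodes. A minimal sketch, using 172.16.0.10 as a stand-in for a burst node's private address:

# /etc/prometheus/prometheus.yml (on the CoolVDS node)
scrape_configs:
  - job_name: 'hybrid-nodes'
    static_configs:
      - targets: ['10.10.0.5:9100', '172.16.0.10:9100']  # all reached over the tunnel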

Summary: Own Your Core

The trend for 2020 will be cost rationalization. The "lift and shift" era is ending as CFOs see the bills. By anchoring your infrastructure on high-performance, fixed-cost local VPS in Norway and using the cloud purely for overflow, you gain three things: Determinism, Compliance, and Speed.

Don't let latency kill your SEO and don't let egress fees kill your budget. Build a hybrid core that makes sense.

Ready to anchor your stack? Deploy a KVM-based NVMe instance on CoolVDS in Oslo today and verify the latency yourself.