Escaping the Vendor Trap: A Pragmatic Multi-Cloud Strategy for Norwegian Enterprises
Let's be honest: the "all-in" public cloud dream is starting to crack. Two years ago, migrating everything to AWS or Azure was the standard advice. Today, I see CFOs hyperventilating over egress fees and CTOs losing sleep over data sovereignty. If you are hosting critical data for Norwegian customers solely in a Frankfurt availability zone, you are betting your compliance posture on legal frameworks that are becoming shakier by the month.
I recently audited a setup for a customized e-commerce platform based in Oslo. They were 100% AWS. Their monthly bill fluctuated wildly based on traffic spikes, but the real pain was latency and data residency concerns raised by their legal team regarding GDPR. The solution wasn't to abandon the cloud, but to adopt a Hybrid Multi-Cloud architecture. We moved the core database and consistent workloads to high-performance local infrastructure (CoolVDS) and kept the front-end auto-scaling groups on AWS. The result? A 40% drop in TCO and guaranteed data residency in Norway.
The Architecture: Core vs. Burst
The philosophy is simple. Public cloud is for elasticity. Private VPS/Bare Metal is for performance, predictability, and compliance. By treating CoolVDS instances as your "Core" infrastructure, you anchor your data in Norway, utilizing the NIX (Norwegian Internet Exchange) for minimal latency to local users.
Step 1: Infrastructure as Code with Terraform 0.12
With the release of Terraform 0.12 last month, handling complex variable structures became significantly easier. We don't want to manually click through dashboards. We define our local "Core" and our public "Burst" layers in code.
Here is how we abstract the provider logic. We provision the stable database node on CoolVDS (via a generic OpenStack provider where available, or, as in this example, simply bootstrapping over SSH with remote-exec) and the compute nodes on AWS.
# terraform.tfvars
coolvds_ip = "185.x.x.x"
aws_region = "eu-central-1"

# main.tf
variable "coolvds_ip" {
  type = string
}

variable "aws_region" {
  type = string
}

provider "aws" {
  region = var.aws_region
}

# Core: the stable database node on CoolVDS, bootstrapped over SSH
resource "null_resource" "coolvds_db_node" {
  connection {
    type        = "ssh"
    user        = "root"
    host        = var.coolvds_ip
    private_key = file("~/.ssh/id_rsa")
  }

  provisioner "remote-exec" {
    inline = [
      "apt-get update",
      "apt-get install -y postgresql-11"
    ]
  }
}

# Burst: elastic compute on AWS
resource "aws_instance" "app_node" {
  ami           = "ami-0c55b159cbfafe1f0" # Ubuntu 18.04 LTS
  instance_type = "t3.medium"

  tags = {
    Name = "Burst-Worker"
  }
}
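Nothing exotic from here; the standard Terraform workflow applies:

terraform init
terraform plan -out=tfplan
terraform apply tfplan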
Pro Tip: Don't just rely on default instance types. On CoolVDS, our standard KVM slices include NVMe storage. To match that I/O performance on AWS, you'd need provisioned IOPS (io1) volumes, which cost a fortune. Compare the benchmarks before you commit.
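If you want hard numbers rather than marketing claims, a quick fio run on both sides settles it. A minimal 4k random-read sketch (point --filename at a path on the disk you actually want to test):

# 60-second 4k random-read test; compare the reported IOPS between providers
fio --name=randread --ioengine=libaio --direct=1 --rw=randread \
    --bs=4k --iodepth=32 --size=1G --runtime=60 --time_based \
    --filename=/tmp/fio.test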
Step 2: The Network Bridge (Site-to-Site VPN)
A multi-cloud setup is useless if the connection between your Norwegian core and your European cloud workers is insecure or slow. Dedicated fibers are expensive. For most use cases in 2019, a well-tuned IPsec tunnel using strongSwan is the sweet spot.
We configure the CoolVDS instance as the VPN gateway. This ensures traffic between your database (in Norway) and your app servers (in Frankfurt) is encrypted.
Configuring strongSwan on Debian/Ubuntu
First, install the daemon:
sudo apt-get install strongswan strongswan-pki
Next, edit /etc/ipsec.conf. We use IKEv2 for better stability and faster re-keying compared to IKEv1.
# /etc/ipsec.conf
config setup
    charondebug="ike 1, knl 1, cfg 0"
    uniqueids=no

conn coolvds-to-aws
    type=tunnel
    auto=start
    keyexchange=ikev2
    authby=secret

    # CoolVDS (Left - Local)
    left=185.x.x.x            # Your CoolVDS Public IP
    leftsubnet=10.10.1.0/24   # Local Private Network
    leftid=185.x.x.x

    # AWS (Right - Remote)
    right=3.120.x.x           # AWS VPN Gateway / Instance IP
    rightsubnet=172.31.0.0/16 # AWS VPC CIDR
    rightid=3.120.x.x

    # AES-256 with SHA-256 and a 2048-bit DH group; SHA-1 and modp1024 are deprecated
    ike=aes256-sha256-modp2048!
    esp=aes256-sha256!
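Because we set authby=secret, both gateways also need a matching pre-shared key in /etc/ipsec.secrets. The key below is a placeholder; generate your own long random string:

# /etc/ipsec.secrets
185.x.x.x 3.120.x.x : PSK "use-a-long-randomly-generated-secret-here"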
Ensure you enable IP forwarding in your kernel settings, or the packets will drop silently at the interface.
echo "net.ipv4.ip_forward = 1" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
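With forwarding enabled, restart the daemon, bring the tunnel up, and verify that the security associations are established:

sudo ipsec restart
sudo ipsec up coolvds-to-aws
sudo ipsec statusall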
Latency Check: Ping times from Oslo to AWS Frankfurt usually hover around 15-20ms. If you see spikes above 40ms, check your routing path (MTR) or verify that you aren't being throttled by a "noisy neighbor" on the public cloud side. CoolVDS guarantees dedicated CPU resources, so jitter on the local side is rarely the issue.
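mtr is the quickest way to spot where the path degrades (substitute your own AWS endpoint IP):

# 100 probes in report mode; look for loss or latency jumps at intermediate hops
mtr --report --report-cycles 100 3.120.x.x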
Step 3: Data Sovereignty and Replication
GDPR compliance isn't just a checkbox; it's an architectural constraint. By keeping the Master Database on a server physically located in Norway, you satisfy the strictest interpretations of data residency. You can use PostgreSQL streaming replication to send read-only copies to the cloud if needed, but writes should happen at home.
Make sure listen_addresses in postgresql.conf includes the private VPN interface, then configure pg_hba.conf to accept connections only from localhost and the AWS VPC range that arrives via the tunnel:
# IPv4 local connections:
host all all 127.0.0.1/32 md5
# Allow connection from AWS VPC via VPN
host all all 172.31.0.0/16 md5
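If you do stream a read replica into the cloud, here is a minimal PostgreSQL 11 sketch. The replication user, password, and primary address (10.10.1.5) are placeholders for your own values:

# postgresql.conf on the primary (CoolVDS, Oslo)
listen_addresses = 'localhost,10.10.1.5'
wal_level = replica
max_wal_senders = 3

# Extra pg_hba.conf line on the primary: allow the replication user in over the VPN
host replication replicator 172.31.0.0/16 md5

# recovery.conf on the AWS replica (PostgreSQL 11 still uses recovery.conf)
standby_mode = 'on'
primary_conninfo = 'host=10.10.1.5 port=5432 user=replicator password=changeme'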
Why This Matters for the Bottom Line
Cloud bills are often opaque. You pay for compute, storage, egress, load balancers, and NAT gateways. Anchor your architecture on a fixed-cost Virtual Dedicated Server and the comparison becomes stark:
| Feature | Public Cloud (e.g., AWS t3.large) | CoolVDS (KVM NVMe) |
|---|---|---|
| Cost Model | Variable (Pay-per-hour + Traffic) | Fixed Monthly |
| Storage I/O | Expensive (Provisioned IOPS) | Included (NVMe Local) |
| Data Location | Frankfurt/Ireland/Stockholm | Oslo (Norway) |
This hybrid approach gives you the compliance safety net of on-premise hardware without the capital expenditure of buying racks. It gives you the burst capability of the cloud without the massive monthly retainer.
We are seeing more Kubernetes deployments (version 1.14 and the just-released 1.15) utilizing this topology: Control Plane and etcd on stable, secure CoolVDS nodes, with Worker Nodes spinning up in the cloud only during Black Friday traffic surges.
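A minimal sketch of that split, assuming a hypothetical node name, is to taint the cloud workers so only burst-tolerant workloads schedule there:

# Keep stateful workloads off the AWS burst nodes; only pods tolerating the taint land there
kubectl label nodes aws-worker-1 tier=burst
kubectl taint nodes aws-worker-1 tier=burst:NoSchedule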
Next Steps
Complexity is the enemy of stability. Start small. Deploy a single NVMe instance on CoolVDS today, configure a VPN tunnel to your current cloud environment, and run a database benchmark. You will likely find that the latency is negligible, but the performance per Krone is significantly higher.
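pgbench, which ships with PostgreSQL, is enough for a first pass. Run it locally on the new node first to establish a disk baseline (the database name and scale factor here are arbitrary):

# Initialise a test database at scale factor 50, then run 10 clients for 60 seconds
createdb bench
pgbench -i -s 50 bench
pgbench -c 10 -j 2 -T 60 bench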
Ready to reclaim control of your infrastructure? Deploy your primary node on CoolVDS in under 55 seconds.