Escaping the Hyperscaler Trap: A Pragmatic Multi-Cloud Strategy for Norwegian Enterprises
There is a dangerous misconception in our industry that "going cloud" automatically means handing the keys to your infrastructure over to AWS, Azure, or GCP. For a CTO in Oslo or Bergen, this isn't just a strategic error; it is a financial and legal liability. I recently audited a SaaS platform serving the Nordic market that was routing 100% of its traffic through AWS eu-central-1 (Frankfurt). Their latency to end-users in Trondheim was averaging 28ms, and their data egress fees were effectively a second payroll.
The solution wasn't to abandon the cloud, but to stop treating it as a monolith. We moved their core database and persistent storage to a high-performance KVM environment in Oslo (CoolVDS), keeping only their elastic compute burst capabilities on AWS. The result? A 40% reduction in monthly TCO and a latency drop to 3ms for 90% of their user base. This is the Hybrid Core strategy.
The Compliance Headache: Schrems II and Datatilsynet
Since the Schrems II ruling, relying solely on US-owned hyperscalers—even with servers located in the EU—has become a gray area that makes legal teams sweat. The Datatilsynet (Norwegian Data Protection Authority) has been increasingly clear about the risks of transferring personal data to providers subject to US surveillance laws (FISA 702).
By anchoring your primary user database on Norwegian soil, hosted by a European entity like CoolVDS, you create a stronger defensive posture regarding GDPR. Your data rests here. It stays here. Hyperscalers become merely data processors for ephemeral tasks, not the vault holders.
Architecture: The "Hybrid Core" Model
The philosophy is simple: Stateful Locally, Stateless Globally. The config sketch after the list below shows what that split looks like from the application's point of view.
- The Core (CoolVDS): PostgreSQL/MySQL databases, Redis caches, and sensitive customer data. Connected directly to NIX (Norwegian Internet Exchange).
- The Edge (Hyperscalers): S3 for non-sensitive public assets, Lambda for sporadic compute tasks, or global CDNs.
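In practice, the split is often nothing more exotic than the application's connection settings. The excerpt below is illustrative only: the hostnames, bucket name, and credentials are placeholders, and the 10.100.0.x address is the private tunnel address assigned in the WireGuard section that follows.
# .env (illustrative; every value here is a placeholder)
# Stateful and sensitive: stays in Oslo, reached over the private WireGuard address
DATABASE_URL=postgres://app:<password>@10.100.0.1:5432/app
REDIS_URL=redis://10.100.0.1:6379/0
# Stateless and non-sensitive: hyperscaler services for public assets and burst compute
S3_PUBLIC_ASSETS_BUCKET=example-public-assets
S3_REGION=eu-central-1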
To make this work seamlessly, we need a mesh VPN. MPLS is too expensive and slow to provision. In 2024, the standard is WireGuard.
Inter-Cloud Connectivity with WireGuard
We don't trust public internet routing for database replication. We build an encrypted mesh. Here is a production-ready WireGuard configuration we use to link a CoolVDS instance in Oslo with an AWS VPC in Frankfurt.
On the Oslo Node (CoolVDS):
# /etc/wireguard/wg0.conf
[Interface]
Address = 10.100.0.1/24
ListenPort = 51820
PrivateKey = <OSLO_PRIVATE_KEY>
# AWS Frankfurt Node Peer
[Peer]
PublicKey = <AWS_PUBLIC_KEY>
AllowedIPs = 10.100.0.2/32
Endpoint = aws-gw.example.com:51820
PersistentKeepalive = 25
On the Frankfurt Node (AWS EC2):
# /etc/wireguard/wg0.conf
[Interface]
Address = 10.100.0.2/24
ListenPort = 51820
PrivateKey = <AWS_PRIVATE_KEY>
# CoolVDS Oslo Node Peer
[Peer]
PublicKey = <OSLO_PUBLIC_KEY>
AllowedIPs = 10.100.0.1/32
Endpoint = oslo-gw.coolvds.com:51820
PersistentKeepalive = 25
Pro Tip: WireGuard defaults to an MTU of 1420, but paths that add extra encapsulation or black-hole ICMP (breaking path MTU discovery) can still fragment packets. Dropping the tunnel MTU to 1360 leaves headroom and prevents the fragmentation issues that look like random connection drops.
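If wg-quick manages the interface, the MTU can be pinned directly in the config rather than set by hand with ip link. A minimal excerpt, reusing the Oslo values from above:
# /etc/wireguard/wg0.conf (excerpt) - same change on both nodes
[Interface]
Address = 10.100.0.1/24
ListenPort = 51820
PrivateKey = <OSLO_PRIVATE_KEY>
MTU = 1360  # headroom for WireGuard overhead plus any extra encapsulation on the path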
Infrastructure as Code: Terraform State Isolation
Managing resources across two providers requires discipline. Do not mix your state files. If AWS goes down, you want to be able to apply changes to your CoolVDS infrastructure without Terraform hanging on an AWS API timeout.
We use a modular structure. The core module manages the immutable assets in Norway.
# main.tf structure
module "oslo_core" {
  source         = "./modules/coolvds_compute"
  instance_count = 3
  region         = "no-oslo-1"
  plan           = "nvme-high-cpu"

  # Tagging for cost allocation
  tags = {
    Environment   = "Production"
    DataResidency = "Norway"
  }
}

module "aws_burst" {
  source   = "./modules/aws_autoscaling"
  min_size = 0
  max_size = 10
  # Only spins up when Oslo load > 80%
}
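One way to enforce that isolation is a root module per provider, each with its own backend and state file. The layout below is a sketch, not a prescription; backend types, bucket names, and paths are placeholders.
# infra/oslo-core/backend.tf - Norwegian core; state kept off the hyperscaler
terraform {
  backend "local" {
    path = "state/oslo-core.tfstate"  # or any remote backend hosted on the Norwegian side
  }
}
# infra/aws-burst/backend.tf - separate root module, separate state
terraform {
  backend "s3" {
    bucket = "example-tf-state"
    key    = "aws-burst/terraform.tfstate"
    region = "eu-central-1"
  }
}
With this split, a hung AWS API call during a plan in infra/aws-burst never blocks an apply in infra/oslo-core.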
Performance: The Latency Truth
Marketing brochures lie; ping does not. When your application logic requires multiple database round-trips per request, the speed of light becomes your enemy. A 20ms round trip to Frankfurt adds up fast. If a user request triggers 10 SQL queries sequentially, that is 200ms of waiting time purely due to physics.
Hosting the database on a CoolVDS NVMe instance in Oslo reduces that RTT (Round Trip Time) to approximately 1.5ms for local users. That same 10-query request now takes 15ms total network time.
| Route | Avg Latency (ms) | Jitter |
|---|---|---|
| Oslo Fiber -> AWS Frankfurt | 18-24 | High |
| Oslo Fiber -> CoolVDS Oslo | 1-2 | Low |
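These numbers are easy to verify from a client in Oslo; the hostnames below are placeholders, and ICMP is only a rough proxy for what your TCP connections will see.
# Rough RTT comparison (hostnames are placeholders)
ping -c 20 -q db.frankfurt.example.com   # expect roughly 18-24 ms average
ping -c 20 -q db.oslo.example.com        # expect roughly 1-2 ms average
# Time an actual query round trip rather than trusting ICMP alone
psql "host=db.oslo.example.com dbname=app user=app" -c '\timing on' -c 'SELECT 1;'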
Monitoring the Split
In a hybrid setup, observability is critical. You cannot rely on CloudWatch alone. We deploy a standalone Prometheus instance on the Oslo core to scrape metrics from both sides via the WireGuard tunnel. This ensures that if the external internet connection to AWS fails, we still have metrics on our core infrastructure.
# prometheus.yml snippet
scrape_configs:
  - job_name: 'oslo_core_db'
    static_configs:
      - targets: ['10.100.0.1:9100']   # Local CoolVDS IP
  - job_name: 'aws_edge_nodes'
    scrape_interval: 30s
    static_configs:
      - targets: ['10.100.0.2:9100']   # Remote AWS IP over WireGuard
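Because the AWS targets are only reachable through the tunnel, it is worth alerting when they disappear so a dead WireGuard link is not mistaken for a dead application. A minimal rules file might look like this; the threshold and labels below are our own choices, not mandated by anything:
# alerts.yml snippet (illustrative threshold and labels)
groups:
  - name: hybrid_core
    rules:
      - alert: AwsEdgeUnreachable
        expr: up{job="aws_edge_nodes"} == 0
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "AWS edge node unreachable over the WireGuard tunnel"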
The Economic Reality
Hyperscalers charge a premium for "managed services" that often mask simple configurations. A managed PostgreSQL instance on RDS can cost 2-3x more than running raw NVMe compute on CoolVDS, where you have full root access to tune postgresql.conf specifically for your workload (e.g., setting huge_pages = on and optimizing random_page_cost for our NVMe drives).
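As a starting point, the relevant postgresql.conf lines look roughly like the excerpt below. The values are illustrative and depend on the instance's RAM and workload, so treat them as a sketch rather than a recommendation.
# postgresql.conf excerpt (illustrative values; tune against your own RAM and workload)
huge_pages = on                  # requires hugepages reserved in the kernel (vm.nr_hugepages)
shared_buffers = 8GB             # roughly 25% of RAM on a 32 GB instance
random_page_cost = 1.1           # random reads on NVMe are nearly as cheap as sequential
effective_io_concurrency = 200   # NVMe handles deep I/O queues well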
For Norwegian businesses, the choice is clear. Use the cloud for what it's good at—elasticity and global reach. But keep your data, your costs, and your latency grounded in reality. The Hybrid Core strategy isn't just about saving money; it's about regaining control.
Ready to build your local core? Deploy a high-performance NVMe instance on CoolVDS in Oslo today and drop your latency to single digits.