Multi-Cloud Reality Check: Architecting a Hybrid Core in Norway in 2022
Let's cut through the marketing noise. For most CTOs operating in Europe, "Multi-Cloud" isn't about achieving 99.9999% availability by mirroring your entire stack across AWS, Azure, and GCP. That is a bankruptcy strategy. Real, pragmatic multi-cloud in late 2022 is about arbitrage.
It's about leveraging Hyperscalers (AWS/Google) for what they are good at—elastic burst computing and managed AI services—while anchoring your heavy, predictable I/O and data storage on cost-effective, high-performance infrastructure. Specifically, infrastructure sitting under Norwegian jurisdiction.
I recently audited a SaaS platform serving the Nordic market. They were burning €15,000 monthly on AWS Egress fees and EBS Provisioned IOPS. Their latency to Oslo users was "okay" (25-35ms from Frankfurt). By shifting their core database and storage layer to a robust VPS Norway setup and keeping only the frontend scaling logic on AWS, they cut the bill by 60% and dropped latency to 3ms. Here is how we architected it.
The Compliance Elephant: Schrems II & Datatilsynet
If you are handling Norwegian user data, the Schrems II ruling is still your biggest headache. While the Trans-Atlantic Data Privacy Framework is being discussed, it is not law yet. Storing PII (Personally Identifiable Information) exclusively on US-owned clouds remains a risk assessment nightmare.
The safest technical architecture is the "Data Residency Core":
- Stateless Frontends: Ephemeral containers on Hyperscalers (AWS/GCP). No data persists here.
- Stateful Backend: Postgres/MySQL clusters running on independent Nordic infrastructure (like CoolVDS).
- Encryption: WireGuard tunnels bridging the two.
Technical Implementation: The Hybrid Mesh
We don't use IPsec VPNs anymore if we can help it. They are bloated, and their slow context switching hurts throughput. We use WireGuard. It has been in the mainline Linux kernel since 5.6, it is fast, and it handles roaming IP addresses gracefully.
1. The Secure Bridge (WireGuard)
We need a secure, low-latency pipe between your AWS Auto Scaling Group and your CoolVDS NVMe Database node. Here is the configuration for the CoolVDS side (the "Hub").
# /etc/wireguard/wg0.conf on the CoolVDS Instance
[Interface]
Address = 10.100.0.1/24
SaveConfig = true
PostUp = iptables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -D FORWARD -i wg0 -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
ListenPort = 51820
PrivateKey =
# Peer: AWS Worker Node 1
[Peer]
PublicKey =
AllowedIPs = 10.100.0.2/32
On the client side (AWS), ensure you set the PersistentKeepalive to 25 to keep the NAT mapping open through the AWS Security Groups.
Pro Tip: Don't rely on default MTU. AWS Jumbo frames support is spotty across regions. Stick to an MTU of 1360 on your WireGuard interface to account for VXLAN overhead if your traffic traverses multiple encapsulation layers.
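Putting those two tips together, the AWS-side ("Spoke") config might look like the sketch below. The keys, addresses, and the hub endpoint are placeholders you substitute with your own:

```ini
# /etc/wireguard/wg0.conf on the AWS worker (the "Spoke")
[Interface]
Address = 10.100.0.2/24
PrivateKey = <aws-worker-private-key>
# Conservative MTU to survive stacked encapsulation (see Pro Tip above)
MTU = 1360

[Peer]
# The CoolVDS hub
PublicKey = <coolvds-hub-public-key>
Endpoint = <coolvds-public-ip>:51820
AllowedIPs = 10.100.0.0/24
# Keep the NAT mapping alive through AWS Security Groups
PersistentKeepalive = 25
```

Bring it up with `wg-quick up wg0` and verify the handshake with `wg show`.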
2. Infrastructure as Code: Terraform State Strategy
Managing two providers requires a split-state approach to prevent locking issues. Do not mix your hyperscaler resources and your static core resources in the same tfstate file.
Here is a simplified directory structure for 2022-era Terraform projects:
/infrastructure
/core-norway (CoolVDS/KVM)
- main.tf (Defines persistent storage, db nodes)
- security.tf (Firewall rules restricting access to WireGuard port)
/edge-compute (AWS/GCP)
- asg.tf (Auto Scaling Groups)
- lambda.tf
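To keep the two states fully isolated, each directory declares its own remote backend with its own state key, so a lock on the edge never blocks a plan on the core. A minimal sketch; the backend type, bucket, and key names here are placeholders, not a prescription:

```hcl
# core-norway/backend.tf -- one state file per directory, never shared
terraform {
  backend "s3" {
    bucket = "example-tfstate"
    key    = "core-norway/terraform.tfstate"
    region = "eu-central-1"
  }
}
```

The edge-compute directory gets an identical block with `key = "edge-compute/terraform.tfstate"`. Cross-stack values (like the WireGuard hub IP) flow between them via `terraform_remote_state` data sources or plain variables, never by merging the states.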
In your core-norway/main.tf, you define the stable assets. Since CoolVDS offers standard KVM virtualization, you treat these instances as "Pets" (or managed cattle), not ephemeral "Cattle" that die every hour. This allows for massive I/O optimization.
Optimizing I/O: Why Hardware Matters
The "Cloud" abstracts hardware, often to your detriment. "General Purpose" SSDs on major clouds often throttle you once you deplete your burst balance. In a database-heavy application, this "I/O wait" is the silent killer of performance.
When you control the virtualization layer on a platform like CoolVDS, you can tune the disk scheduler. For NVMe storage, the default Linux scheduler (mq-deadline or none) is usually correct, but always verify it inside your VM:
$ cat /sys/block/vda/queue/scheduler
[none] mq-deadline kyber bfq
If you see cfq (common in older kernels), change it immediately. NVMe drives do not need the rotational latency optimization that CFQ provides. It just adds overhead.
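To make the scheduler choice survive reboots and device re-enumeration, one common approach is a udev rule. A sketch, assuming NVMe and virtio device naming; adjust the `KERNEL` patterns to match your actual disks:

```
# /etc/udev/rules.d/60-io-scheduler.rules
# 'none' for NVMe (no scheduling needed), 'mq-deadline' for virtio disks
ACTION=="add|change", KERNEL=="nvme[0-9]n[0-9]", ATTR{queue/scheduler}="none"
ACTION=="add|change", KERNEL=="vd[a-z]", ATTR{queue/scheduler}="mq-deadline"
```

Reload with `udevadm control --reload` and `udevadm trigger`, then re-check the sysfs file to confirm the active scheduler is in brackets.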
Load Balancing the Hybrid Traffic
You need a smart entry point. HAProxy is still the king of performance per watt in 2022. It allows you to route traffic based on health checks that span your multi-cloud environment.
Here is a snippet for haproxy.cfg that prioritizes local Norwegian traffic to your CoolVDS frontend, but spills over to AWS if the load spikes:
backend web_nodes
balance roundrobin
option httpchk HEAD /health HTTP/1.1\r\nHost:\ localhost
# Primary: CoolVDS Instance (Low Latency, Fixed Cost)
server no-primary 10.10.10.5:80 check weight 100 maxconn 500
# Backup/Burst: AWS Instance (Higher Latency, Variable Cost)
server aws-burst 10.100.0.2:80 check weight 10 backup
This configuration ensures that you maximize the utility of the fixed-cost resources you have already paid for before triggering the expensive metered billing of the hyperscaler.
The Latency Advantage: NIX and Peering
Physics is undefeated. If your customers are in Oslo, Bergen, or Trondheim, routing their requests to a datacenter in Dublin or Frankfurt adds 20-40ms of round-trip time (RTT). For a modern SPA (Single Page Application) that makes 50 API calls to render a dashboard, that latency stacks up to perceptible lag.
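A quick back-of-envelope calculation makes the stacking concrete. Assuming HTTP/1.1 with the classic limit of 6 parallel connections per origin, and ignoring server processing time entirely:

```shell
# Worst-case added RTT for a burst of API calls from Oslo to Frankfurt
calls=50; parallel=6; rtt_ms=28
# Calls are serialized into rounds of 6 (ceiling division)
rounds=$(( (calls + parallel - 1) / parallel ))
echo "$(( rounds * rtt_ms )) ms of pure network round-trips"
# 9 rounds * 28 ms = 252 ms before the server does any work at all
```

A quarter of a second of dead air per dashboard load, versus roughly 20 ms for the same burst at 2 ms RTT. HTTP/2 multiplexing softens this, but it cannot repeal the speed of light.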
| Origin | Destination | Approx. Latency (RTT) |
|---|---|---|
| Oslo User | AWS Frankfurt (eu-central-1) | ~28ms |
| Oslo User | Google Finland (europe-north1) | ~18ms |
| Oslo User | CoolVDS (Oslo/NIX) | ~2ms |
Connecting directly to the Norwegian Internet Exchange (NIX) means your packets take the shortest path. This is crucial for VoIP, gaming, and real-time financial trading applications.
Conclusion: Own Your Core
The "All-In" cloud strategy is fading. The smart money in 2022 is on Hybrid. Use the cloud for what it is good at—bursting and managed services. But for your core data, your heavy I/O processing, and your compliance peace of mind, own the metal.
CoolVDS isn't just another vendor; we are the foundation for this hybrid architecture. We provide the raw, unthrottled KVM performance and DDoS protection that allow you to build a fortress in Norway, while keeping the drawbridge open to the rest of the world.
Next Step: Run a simple fio disk benchmark on your current cloud provider. Then spin up a test instance with us. Compare the IOPS per dollar. The results usually speak for themselves.
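If it helps, here is a minimal fio job file for a 70/30 random read/write test. The block size and iodepth mimic a typical OLTP database; the file size, runtime, and target path are illustrative, so point the filename at the filesystem you actually want to measure:

```ini
# randrw.fio -- run with: fio randrw.fio
[global]
ioengine=libaio
direct=1
runtime=60
time_based
group_reporting

[oltp-sim]
rw=randrw
rwmixread=70
bs=4k
iodepth=32
numjobs=4
size=4G
filename=/var/tmp/fio-test.dat
```

Run it on both providers, divide the reported IOPS by the monthly instance cost, and you have your comparison.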