The Myth of the "Single Cloud" Utopia
Let’s be honest. If you are running 100% of your infrastructure on AWS or Azure in 2019, you are likely overpaying by a margin of 40%. I recently audited a SaaS platform based in Oslo that was burning through 50,000 NOK monthly just on NAT Gateways and Egress traffic. They weren't scaling; they were bleeding.
The solution isn't to abandon the hyperscalers entirely—they have excellent managed services for AI and object storage. The solution is a Pragmatic Multi-Cloud Strategy. This involves treating infrastructure as a commodity where you route workloads based on two factors: Cost-per-IOPS and Data Sovereignty.
For Norwegian businesses, the stakes are higher. With the GDPR fully enforceable and the Norwegian Data Protection Authority (Datatilsynet) sharpening its teeth, relying solely on US-owned infrastructure under the shaky "Privacy Shield" framework is a strategic risk. Here is how we architect a resilient, compliant, and high-performance stack using CoolVDS as the sovereign core.
The Architecture: The "Core & Edge" Model
The most effective pattern I’ve deployed this year involves placing the Data Core (Databases, backend APIs, Customer PII) on high-performance, local VPS infrastructure, while utilizing hyperscalers for Edge Delivery (CDNs, Lambda functions) or burstable compute.
Why Local Infrastructure?
- Latency: A packet from Oslo to an AWS data center in Frankfurt takes ~25-35ms round trip. A packet to a local CoolVDS instance in Oslo? <3ms. For database transactions, this latency compounds.
- IOPS Economics: To get 10,000 provisioned IOPS on RDS, you pay a premium. On a CoolVDS NVMe instance, raw I/O is standard.
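The compounding effect is easy to quantify. A minimal sketch, assuming a page load that issues 40 sequential database queries (the query count is hypothetical; the round-trip times match the estimates above):

```python
# Rough illustration of how per-query network latency compounds for a
# request that runs sequential database round trips.
# RTTs match the estimates above; the query count is an assumption.

QUERIES_PER_REQUEST = 40   # sequential queries in one page load (assumed)
RTT_FRANKFURT_MS = 30      # ~25-35ms Oslo -> AWS Frankfurt
RTT_LOCAL_MS = 3           # <3ms to a local Oslo instance

def network_overhead_ms(rtt_ms: float, queries: int) -> float:
    """Total time spent purely on the wire, ignoring query execution."""
    return rtt_ms * queries

frankfurt = network_overhead_ms(RTT_FRANKFURT_MS, QUERIES_PER_REQUEST)
local = network_overhead_ms(RTT_LOCAL_MS, QUERIES_PER_REQUEST)

print(f"Frankfurt: {frankfurt:.0f} ms of pure network wait")  # 1200 ms
print(f"Local:     {local:.0f} ms of pure network wait")      # 120 ms
```

Over a second of dead time per request versus roughly a tenth of a second, before a single query has even executed. This is why the database belongs close to the application.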
Implementation: The Tech Stack (2019 Edition)
We rely on three pillars to abstract the underlying hardware:
- Terraform (v0.12): For declarative infrastructure.
- Kubernetes (v1.15): For container orchestration.
- WireGuard / OpenVPN: For secure inter-cloud meshing.
1. Unified Infrastructure Provisioning
With Terraform 0.12's new HCL syntax, we can define our CoolVDS resources alongside AWS resources in a single state file. This allows us to maintain a "Single Source of Truth."
Here is a stripped-down example of how we structure the providers to manage local compute and remote storage simultaneously:
```hcl
# main.tf configuration

variable "region_norway" {
  default = "no-osl-1"
}

variable "api_token_coolvds" {}

provider "coolvds" {
  token  = var.api_token_coolvds
  region = var.region_norway
}

provider "aws" {
  region = "eu-central-1"
}

# The Core Database - hosted on local NVMe for max performance
resource "coolvds_instance" "db_master" {
  image  = "ubuntu-18.04"
  label  = "postgres-primary"
  plan   = "nvme-16gb-4vcpu"
  region = var.region_norway
  tags   = ["production", "db"]
}

# The Backup/Archive - hosted on S3 for durability
resource "aws_s3_bucket" "archive_storage" {
  bucket = "company-backups-2019"
  acl    = "private"
}
```

Pro Tip: Always tag your resources. When your CFO asks why the hosting bill dropped, you want to be able to filter costs by `project: migration-2019`.
2. Networking: The Inter-Cloud Mesh
Security is paramount. You cannot expose your database port to the public internet. We typically establish a site-to-site VPN. While WireGuard is gaining serious traction in the kernel mailing lists, OpenVPN remains the battle-tested standard for production environments in 2019.
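If you do opt for WireGuard, the tunnel configuration is refreshingly small. A sketch of the CoolVDS-side config, assuming a 10.8.0.0/24 overlay subnet; the keys, endpoint hostname, and addresses are placeholders, not values from a real deployment:

```ini
# /etc/wireguard/wg0.conf on the CoolVDS instance (sketch)
[Interface]
# Overlay address of this node inside the mesh (assumed subnet)
Address = 10.8.0.5/24
PrivateKey = <coolvds-private-key>
ListenPort = 51820

[Peer]
# The AWS-side gateway (placeholder endpoint)
PublicKey = <aws-gateway-public-key>
AllowedIPs = 10.8.0.0/24
Endpoint = aws-gw.example.no:51820
# Keeps the tunnel open through NAT
PersistentKeepalive = 25
```

Bring it up with `wg-quick up wg0`. The equivalent OpenVPN setup works just as well; it simply takes more configuration to reach the same result.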
However, for internal node communication, we optimize the network stack configuration on the CoolVDS instances to handle high throughput tunneling.
```
# /etc/sysctl.conf optimizations for VPN throughput
net.ipv4.ip_forward = 1
net.core.wmem_max = 16777216
net.core.rmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_mtu_probing = 1
```

Apply these with `sysctl -p`. This ensures that when your application in AWS Frankfurt queries your PostgreSQL database in Oslo, TCP window scaling doesn't bottleneck the connection.
3. Load Balancing with Nginx
We use Nginx as the ingress gatekeeper. It routes traffic based on the request type. Heavy static assets go to the CDN; API requests hit the local cluster.
```nginx
upstream local_backend {
    server 10.8.0.5:8080;  # Internal VPN IP of the CoolVDS instance
    keepalive 32;
}

upstream cloud_storage {
    server s3.eu-central-1.amazonaws.com;
}

server {
    listen 80;
    server_name api.example.no;

    location /assets/ {
        proxy_pass http://cloud_storage;
    }

    location /api/ {
        proxy_pass http://local_backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header X-Real-IP $remote_addr;
        # Low latency requirement
        proxy_read_timeout 5s;
    }
}
```

The GDPR Elephant in the Room
Data residency is not just a technical preference; it is becoming a legal mandate. While the Privacy Shield framework currently allows data transfer to the US, legal experts in the EEA are increasingly skeptical of its longevity.
By hosting your primary database on CoolVDS in Norway, you ensure that the "Master" copy of your customer's PII (Personally Identifiable Information) resides physically within Norwegian borders. You can configure your application to only send anonymized telemetry data to the US cloud for analysis. This architecture is "Privacy by Design."
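What does that scrubbing step look like in practice? A minimal sketch, with hypothetical field names and salt handling; strictly speaking this is pseudonymization (events stay correlatable) rather than full anonymization, which GDPR treats as a higher bar:

```python
import hashlib

# Hypothetical sketch: pseudonymize PII fields before a telemetry
# event leaves the EEA. Field names and salt handling are illustrative.

PII_FIELDS = {"name", "email", "phone", "address"}
SALT = b"rotate-me-per-deployment"  # keep outside the codebase in practice

def pseudonymize(value: str) -> str:
    """One-way salted hash: events stay correlatable without exposing identity."""
    return hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()[:16]

def scrub_event(event: dict) -> dict:
    """Return a copy of the event that is safe to ship to US-hosted analytics."""
    return {
        key: pseudonymize(val) if key in PII_FIELDS else val
        for key, val in event.items()
    }

event = {"email": "ola@example.no", "plan": "nvme-16gb-4vcpu", "latency_ms": 3}
print(scrub_event(event))  # email replaced by an opaque token
```

The raw identifiers never cross the border; the US-side pipeline only ever sees opaque tokens plus non-personal metrics.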
Performance Benchmarking: NVMe vs EBS
We ran a standard fio test comparing a standard Cloud Volume against CoolVDS local NVMe storage. The results for random 4k writes (the most punishing database workload) were telling.
| Metric | Hyperscale General SSD | CoolVDS NVMe |
|---|---|---|
| IOPS (4k Rand Write) | ~3,000 (Throttled) | ~25,000+ |
| Latency (99th percentile) | 2.4ms | 0.1ms |
| Cost per Month (500GB) | $50+ | Included in Plan |
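For readers who want to reproduce the comparison, a fio job along the following lines exercises random 4k writes; the queue depth, runtime, and file size here are assumptions, not the exact parameters behind the table above:

```ini
# randwrite-4k.fio
# Sketch of the benchmark job; iodepth, runtime, and size are
# assumptions, not the original test settings.
[global]
ioengine=libaio
# direct=1 bypasses the page cache so you measure the disk, not RAM
direct=1
bs=4k
iodepth=32
runtime=60
time_based

[rand-write]
rw=randwrite
filename=/mnt/testfile
size=4G
```

Run `fio randwrite-4k.fio` on both volumes and compare the reported IOPS and the completion-latency percentiles (`clat`), which correspond to the two middle rows of the table.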
For a database-heavy application, this difference isn't just a metric; it's the difference between a page load that feels "snappy" and one that feels "sluggish."
Conclusion: Strategy over Hype
Multi-cloud doesn't mean deploying the same app everywhere. It means deploying the right component on the right infrastructure.
Use the hyperscalers for what they are good at: global reach and proprietary managed services. Use CoolVDS for what we excel at: raw compute power, ultra-low latency to Nordic users, and data sovereignty compliance.
Don't let your infrastructure architecture happen by accident. Take control of your routing tables and your budget.
Ready to test the latency difference? Deploy a CoolVDS instance in Oslo today and ping it from your current provider. The results will speak for themselves.