Escaping the Vendor Trap: A Pragmatic Multi-Cloud Strategy for 2019

If 2018 taught us anything, it's that putting all your eggs in one hyperscaler's basket is a liability. Between the US CLOUD Act passing in March and GDPR taking effect in May, the "Cloud First" mantra has shifted to "Compliance First." As we look toward 2019, a CTO's job isn't just selecting the fastest server; it's navigating a minefield of data sovereignty laws without bankrupting the company on bandwidth fees.

I speak to peers in Oslo every week who are terrified of vendor lock-in. They built everything on proprietary AWS services like DynamoDB or Lambda, and now, faced with rising costs and legal ambiguity regarding the Privacy Shield framework, they are stuck. A pragmatic multi-cloud strategy isn't about redundancy for the sake of it—it's an insurance policy for your data and your budget.

The Architecture: Hybrid Sovereignty

The most robust pattern emerging in the Nordic market right now is the "Hybrid Sovereignty" model. This leverages the massive CDN and edge compute capabilities of US giants (AWS, Google Cloud) for stateless content delivery, while anchoring the database and sensitive customer logs on jurisdictionally safe ground—specifically, Norwegian soil.

Pro Tip: Don't just look at ping times. Look at the routing table. A VPS hosted in Oslo connecting via NIX (Norwegian Internet Exchange) will consistently outperform a Stockholm-based hyperscaler instance for Norwegian users due to reduced hops and lack of cross-border routing anomalies.
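Traceroute and a look at NIX peering will show you the path; to complement that, it's worth measuring what your application actually sees. The helper below is a minimal sketch for comparing TCP connect times from a test client (the hostnames in the comments are placeholders, not real endpoints):

```python
import socket
import time

def measure_connect_ms(host, port, samples=5):
    """Median TCP connect time to host:port, in milliseconds."""
    timings = []
    for _ in range(samples):
        start = time.monotonic()
        with socket.create_connection((host, port), timeout=5):
            timings.append((time.monotonic() - start) * 1000)
    return sorted(timings)[len(timings) // 2]

# Example (placeholder hosts): compare an Oslo VPS against a
# Stockholm hyperscaler instance from a Norwegian client network.
# print(measure_connect_ms("oslo.example.net", 443))
# print(measure_connect_ms("stockholm.example.net", 443))
```

A full TCP handshake reflects the real round trip through every hop, which is exactly what your database queries will pay per round trip.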

The Glue: Infrastructure as Code (Terraform)

Managing two distinct environments manually is a recipe for disaster. Over 2018, Terraform solidified itself as the standard for cloud-agnostic provisioning. Ansible remains great for configuration management, but Terraform is the better tool for declaring and tracking the state of your infrastructure.

Below is a Terraform (v0.11 syntax) example of how we structure a deployment that spans a US cloud and a KVM-based provider like CoolVDS. Note the separation of providers.

# provider.tf

# The Hyperscaler (for frontend/stateless)
provider "aws" {
  region = "eu-central-1"
  alias  = "frontend"
}

# The Sovereign Host (CoolVDS - via generic OpenStack or custom provider)
# In 2018, we often use the standard OpenStack provider for KVM clouds
provider "openstack" {
  user_name   = "${var.coolvds_user}"
  tenant_name = "${var.coolvds_project}"
  password    = "${var.coolvds_pass}"
  auth_url    = "https://auth.coolvds.com:5000/v3"
  region      = "Oslo"
}
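With the providers in place, each resource pins itself to one side of the split. A sketch in the same v0.11 syntax (the AMI, image, and flavor variables are placeholders you would define yourself):

```hcl
# main.tf -- illustrative resource placement (placeholder IDs)

# Stateless frontend on the hyperscaler, via the aliased provider
resource "aws_instance" "frontend" {
  provider      = "aws.frontend"
  ami           = "${var.frontend_ami}"
  instance_type = "t2.medium"
}

# Database node on sovereign ground (default openstack provider)
resource "openstack_compute_instance_v2" "db" {
  name        = "pg-master-oslo"
  image_name  = "${var.db_image}"
  flavor_name = "${var.db_flavor}"
}
```

One `terraform plan` now shows drift across both environments, which is the whole point of keeping the state in a single place.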

Networking: The Site-to-Site Tunnel

Latency between clouds is the killer. If your app server is in Frankfurt (AWS) and your database is in Oslo (CoolVDS), you need a persistent, optimized tunnel. We avoid standard public internet routing for database traffic where possible. However, if dedicated lines are out of budget, a hardened StrongSwan IPsec tunnel is the industry standard.

Do not use default encryption settings. Intel AES-NI acceleration on modern Xeon CPUs (standard on our CoolVDS NVMe nodes) means you can crank up encryption without killing throughput.

# /etc/ipsec.conf (StrongSwan on CoolVDS Gateway)

conn oslo-to-frankfurt
    authby=secret
    auto=start
    keyexchange=ikev2
    ike=aes256-sha256-modp2048!
    esp=aes256-sha256-modp2048!
    left=%defaultroute
    leftid=185.x.x.x  # Your CoolVDS Static IP
    leftsubnet=10.10.0.0/24
    right=35.x.x.x    # AWS VPN Gateway IP
    rightsubnet=172.16.0.0/16
    type=tunnel
    dpddelay=30
    dpdtimeout=120
    dpdaction=restart
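The pre-shared key referenced by authby=secret lives in /etc/ipsec.secrets. The addresses mirror the placeholders above; the passphrase is obviously illustrative:

```ini
# /etc/ipsec.secrets (keep permissions at 600)
185.x.x.x 35.x.x.x : PSK "replace-with-a-long-random-passphrase"
```

Restart StrongSwan after editing, and verify the tunnel with `ipsec statusall` before pointing any database traffic at it.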

Data Persistence & Compliance

Here is where the "Pragmatic" part of the title comes in. The US CLOUD Act allows US federal law enforcement to compel US-based tech companies to provide requested data, regardless of whether that data is stored in the US or on a server in Europe. This is why Datatilsynet (The Norwegian Data Protection Authority) advises caution.

By hosting your core PostgreSQL or MySQL database on a Norwegian-owned KVM instance, you add a significant layer of legal protection. You own the data. It resides on hardware owned by a Norwegian entity, subject primarily to Norwegian law.
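Legal isolation should be matched with network isolation: the database should answer only over the tunnel. A PostgreSQL sketch, assuming the subnets from the IPsec config above and placeholder database/user names (appdb, appuser):

```ini
# postgresql.conf -- listen only on loopback and the VPN-facing address
listen_addresses = 'localhost, 10.10.0.5'

# pg_hba.conf -- allow only app servers on the AWS side of the tunnel
# TYPE  DATABASE  USER     ADDRESS          METHOD
host    appdb     appuser  172.16.0.0/16    md5
```

With this in place, even a misconfigured firewall rule on the public interface exposes nothing: PostgreSQL simply isn't listening there.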

Performance Tuning for Remote Databases

If you split compute and storage, you must optimize for latency: over a WAN link, round trips and TCP windowing, not raw bandwidth, dominate query time. On your CoolVDS database nodes, aggressively tune the kernel for network throughput to handle the traffic coming through the VPN.

Add these to /etc/sysctl.conf:

# TCP Optimizations for High Throughput/WAN
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_sack = 1
net.ipv4.tcp_no_metrics_save = 1
net.core.netdev_max_backlog = 5000

Reload with sysctl -p. This lets the TCP window scale up effectively over the 20-30 ms of round-trip latency between Northern and Central Europe.
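The 16 MB ceiling in those settings isn't arbitrary: it should comfortably exceed your bandwidth-delay product, the amount of data that must be in flight to keep the pipe full. A back-of-the-envelope check, with an assumed link speed and RTT:

```python
def bdp_bytes(bandwidth_mbps, rtt_ms):
    """Bandwidth-delay product: bytes in flight needed to fill the link."""
    return int(bandwidth_mbps * 1_000_000 / 8 * rtt_ms / 1000)

# Assumed 1 Gbps link and ~30 ms Oslo<->Frankfurt round trip:
print(bdp_bytes(1000, 30))              # 3750000 bytes, roughly 3.6 MB
print(bdp_bytes(1000, 30) < 16777216)   # True -- 16 MB leaves headroom
```

If your link or RTT grows (say, a 10 Gbps path or a US hop), rerun the numbers before trusting the defaults.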

The Cost Reality: Bandwidth

Hyperscalers charge egregious amounts for egress (outbound) traffic. If you host your DB in AWS and your users download reports, you pay per gigabyte. CoolVDS offers generous bandwidth packages, often unmetered on specific tiers. A smart architect uses the CoolVDS instance as the heavy lifter for data transfer and backups, reserving the hyperscaler for burstable CPU tasks.
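To make that concrete, here is a rough estimate using an assumed flat list price of $0.09/GB (in the ballpark of 2018 EU-region hyperscaler egress pricing; check your own bill for real tiers and free allowances):

```python
def monthly_egress_cost(gb_out, price_per_gb=0.09, free_gb=1):
    """Naive egress estimate at an assumed flat per-GB list price."""
    return max(gb_out - free_gb, 0) * price_per_gb

# 5 TB of report downloads per month from a hyperscaler:
print(round(monthly_egress_cost(5000), 2))  # roughly 450 USD, every month
# The same traffic on an unmetered-bandwidth tier costs a flat 0 extra.
```

Real pricing is tiered and region-dependent, but even this naive model shows how report-heavy workloads bleed money when the data sits behind a metered egress wall.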

Why KVM Over Containers for the Core?

Docker is fantastic; we use it everywhere. But for the persistent data layer, the isolation of KVM (Kernel-based Virtual Machine) is superior. In a shared container environment, a "noisy neighbor" can saturate the host kernel's I/O scheduler. CoolVDS uses strict KVM virtualization with NVMe storage, which guarantees that your disk I/O operations per second (IOPS) are yours and yours alone. When you are writing transaction logs for a financial application, consistent latency is more valuable than raw peak throughput.
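You can verify latency consistency on any node with fio. A sketch job file approximating transaction-log behaviour, 4K random writes with an fsync after every write (the filename and sizes are illustrative; adjust to your disk):

```ini
# wal-latency.fio -- run with: fio wal-latency.fio
[wal-sim]
rw=randwrite
bs=4k
size=1g
fsync=1
ioengine=libaio
direct=1
runtime=60
time_based
filename=/var/tmp/fio-testfile
```

Watch the completion-latency percentiles (clat p99) in the output rather than the average: a node with noisy neighbors shows a long tail even when the mean looks fine.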

Implementation Strategy

  1. Audit: Identify which datasets contain PII (Personally Identifiable Information). Isolate them.
  2. Migrate: Move PII-heavy databases to CoolVDS instances in Oslo.
  3. Connect: Establish the IPsec tunnel using the config above.
  4. Proxy: Use Nginx on the frontend to route requests.

# nginx.conf snippet for routing traffic
upstream secure_backend {
    server 10.10.0.5:8080; # Internal IP over VPN to CoolVDS
    keepalive 32;
}

server {
    listen 443 ssl http2;
    server_name api.yourdomain.no;
    
    location /secure-data/ {
        proxy_pass http://secure_backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header X-Real-IP $remote_addr;
    }
}

Final Thoughts

The multi-cloud approach requires more initial setup than a simple "click-to-deploy" on a single platform. However, the dividends it pays in compliance assurance, cost control, and negotiation power are massive. By anchoring your infrastructure in Norway with CoolVDS, you gain the legal clarity of local hosting combined with the technical freedom to connect anywhere.

Do not let data sovereignty be an afterthought in your 2019 roadmap. Audit your data flows today, and if you need a testbed with raw NVMe performance and local peering, spin up a CoolVDS instance. It takes less than a minute to start building your safe haven.