Escaping Vendor Lock-in: A Pragmatic Multi-Cloud Architecture for the Nordic Market

Let’s be honest: putting all your eggs in the AWS or Google Cloud basket is terrifying. It’s convenient, yes. But when a region goes down (and us-east-1 always goes down eventually), or when pricing models shift overnight, you’re left helpless. For Norwegian CTOs there is an added layer of complexity: data sovereignty.

With GDPR fully enforced for over a year now, the legal gray area regarding US CLOUD Act data access is widening. The smartest infrastructure strategy in 2019 isn't "All-in-Cloud"—it's Multi-Cloud Hybrid. This approach leverages the raw compute power of local providers for core data persistence while using hyperscalers for burstable edge traffic.

Here is how we build a fault-tolerant, compliant, and cost-effective architecture that links a CoolVDS NVMe instance in Oslo with a secondary failover node in Frankfurt.

1. The Architecture: Core vs. Edge

The biggest mistake I see is treating all cloud providers as equals. They aren't. Hyperscalers charge exorbitant fees for egress bandwidth and IOPS. Local providers like CoolVDS offer predictable pricing and superior disk I/O.

The Strategy:

  • Core (Oslo): Primary Database (MySQL/PostgreSQL) and backend logic. Hosted on high-performance KVM instances with NVMe storage. This keeps customer data under Norwegian jurisdiction and reduces latency to the NIX (Norwegian Internet Exchange).
  • Edge (Frankfurt/London): Stateless frontend containers or load balancers. These can be spun up on AWS/Azure during traffic spikes and destroyed when not needed.

2. Infrastructure as Code: Adapting to Terraform 0.12

Manual configuration is a death sentence for multi-cloud. You need a single source of truth. With the release of Terraform 0.12 just last month, handling complex variables across different providers became significantly cleaner: expressions are now first-class (no more wrapping everything in "${...}"), and conditionals can return whole lists and maps instead of forcing hacky count-based workarounds.
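
As a quick taste of the 0.12 syntax (the variable and local names below are purely illustrative, not part of any provider):

variable "high_season" {
  type    = bool
  default = false
}

locals {
  # Conditionals can now return whole lists, no string interpolation tricks required
  edge_regions = var.high_season ? ["eu-central-1", "eu-west-2"] : ["eu-central-1"]

  # First-class "for" expressions replace template gymnastics
  edge_names = [for r in local.edge_regions : "edge-${r}"]
}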

Here is how you define a resilient infrastructure state that deploys a frontend on AWS and provisions your core backend on a KVM-based VPS, configured over SSH with a remote-exec provisioner for near-bare-metal performance.

resource "aws_instance" "edge_node" {
  ami           = "ami-0c55b159cbfafe1f0" # Ubuntu 18.04 LTS
  instance_type = "t3.micro"

  tags = {
    Name = "Edge-Frankfurt"
  }
}

resource "null_resource" "core_node_provisioner" {
  # connecting to our static high-performance CoolVDS instance
  connection {
    type     = "ssh"
    user     = "root"
    host     = var.coolvds_ip
    private_key = file("~/.ssh/id_rsa")
  }

  provisioner "remote-exec" {
    inline = [
      "apt-get update",
      "apt-get install -y nginx",
      "systemctl start nginx"
    ]
  }
}

3. The Network Glue: Site-to-Site VPN

Private networking between providers usually costs a fortune in "Direct Connect" fees. The pragmatic solution is a robust OpenVPN tunnel. Since CoolVDS provides raw internet access with high bandwidth caps, we can establish a secure tunnel between our Oslo core and the AWS VPC.

Don't use default encryption settings. In 2019, AES-256-GCM is the standard for performance and security on modern CPUs supporting AES-NI instructions.
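
Before relying on that, it is worth confirming the hypervisor actually exposes AES-NI to your guest. A quick check on the VPS:

grep -m1 -o aes /proc/cpuinfo    # prints "aes" if the instruction set is available
openssl speed -evp aes-256-gcm   # rough throughput benchmark for the chosen cipher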

Server Config (Oslo Core)

Edit /etc/openvpn/server.conf:

port 1194
proto udp
dev tun
ca ca.crt
cert server.crt
key server.key
dh dh2048.pem

# Security hardening
cipher AES-256-GCM
auth SHA256
tls-version-min 1.2

# Network topology
server 10.8.0.0 255.255.255.0
push "route 10.10.0.0 255.255.255.0" # Route to internal CoolVDS private network
keepalive 10 120
user nobody
group nogroup
persist-key
persist-tun
status openvpn-status.log
verb 3
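
Client Config (AWS Edge)

On the Frankfurt edge node the client side is symmetrical. A minimal /etc/openvpn/client.conf sketch, assuming the certificates come from the same PKI as the server (the remote address is a placeholder for your CoolVDS public IP):

client
dev tun
proto udp
remote 203.0.113.10 1194   # replace with your CoolVDS public IP
resolv-retry infinite
nobind
persist-key
persist-tun
ca ca.crt
cert client.crt
key client.key
remote-cert-tls server

# Must match the server's hardening settings
cipher AES-256-GCM
auth SHA256
verb 3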

Pro Tip: Latency from Oslo to Frankfurt is roughly 15-20ms. If your application is chatty, that round trip adds up fast. Make sure tcp_nodelay stays on in your Nginx config (it is the default) so Nagle’s algorithm does not delay small packets on keep-alive connections, and reuse upstream connections rather than opening a new one per request.
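
A hedged Nginx sketch of that idea; the upstream name and address are placeholders for the Oslo core over the VPN:

upstream oslo_core {
    server 10.8.0.1:80;
    keepalive 32;                        # keep idle connections to the core open
}

server {
    listen 80;
    tcp_nodelay on;                      # the default, shown here for clarity

    location / {
        proxy_pass http://oslo_core;
        proxy_http_version 1.1;
        proxy_set_header Connection "";  # required for upstream keepalive
    }
}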

4. Data Consistency & Performance

Running a database across clouds is tricky. The laws of physics apply. For our topology, we use Master-Slave replication. The Master lives on CoolVDS (NVMe storage is critical here) for write performance. The Slave lives on the remote cloud for read-scalability and disaster recovery.

On the Master node, ensure your InnoDB settings utilize the available RAM. Most default VPS templates ship with tiny buffer pools. If you have a 16GB RAM instance, allocate 70-80% to InnoDB.

MySQL Optimization (my.cnf)

[mysqld]
# Basic settings
pid-file        = /var/run/mysqld/mysqld.pid
socket          = /var/run/mysqld/mysqld.sock
datadir         = /var/lib/mysql
log-error       = /var/log/mysql/error.log

# NVMe Optimization
innodb_flush_method = O_DIRECT
innodb_io_capacity = 2000  # Crank this up for NVMe!
innodb_io_capacity_max = 4000

# Memory Allocation
innodb_buffer_pool_size = 12G
innodb_log_file_size = 512M

# Replication
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
binlog_format = ROW

innodb_io_capacity defaults to 200, a value tuned for spinning rust (HDDs). On a CoolVDS NVMe drive, leaving it there is a crime against performance. Set it to at least 2000 to utilize the high IOPS capability.
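
With binary logging enabled as above, pointing the remote read replica at the Oslo master is a few statements over the VPN. A minimal sketch; the repl user and password are placeholders, and the replica needs its own server-id (e.g. 2) plus read_only = 1 in my.cnf:

-- On the Oslo master:
CREATE USER 'repl'@'10.8.0.%' IDENTIFIED BY 'use-a-strong-password';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'10.8.0.%';

-- On the replica, after importing a snapshot of the data:
CHANGE MASTER TO
  MASTER_HOST     = '10.8.0.1',
  MASTER_USER     = 'repl',
  MASTER_PASSWORD = 'use-a-strong-password',
  MASTER_LOG_FILE = 'mysql-bin.000001',  -- value from SHOW MASTER STATUS
  MASTER_LOG_POS  = 154;                 -- value from SHOW MASTER STATUS
START SLAVE;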

5. Load Balancing with HAProxy

Finally, you need an intelligent traffic cop. HAProxy is still the king of performance in 2019, and it handles connection pooling better than Nginx in many scenarios. We place HAProxy on the edge: the HTTP configuration below sends traffic to the Oslo core and keeps the Frankfurt node as a backup, while a TCP-mode listener (sketched after the config) can keep read queries on the local replica and route writes across the tunnel to Oslo.

global
    log /dev/log    local0
    log /dev/log    local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    timeout connect 5000
    timeout client  50000
    timeout server  50000

frontend http_front
    bind *:80
    default_backend web_servers

backend web_servers
    balance roundrobin
    # The 'check' parameter is vital for high availability
    server oslo_primary 10.8.0.1:80 check inter 2000 rise 2 fall 3
    server frankfurt_backup 10.8.0.2:80 check backup
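
For the database traffic from section 4, a separate TCP-mode listener on the same edge node can do the read/write split mentioned above. A sketch using the VPN addresses from section 3 (the local bind ports are arbitrary):

listen mysql_writes
    bind 127.0.0.1:3306
    mode tcp
    option tcp-check
    server oslo_master 10.8.0.1:3306 check

listen mysql_reads
    bind 127.0.0.1:3307
    mode tcp
    option tcp-check
    server local_replica 10.8.0.2:3306 check
    server oslo_master 10.8.0.1:3306 check backup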

Why Hybrid is the Only Way Forward

Pure cloud is expensive and legally complex. Pure on-prem is hard to scale. The hybrid model—using a robust, high-performance base like CoolVDS for your data gravity and leveraging the public cloud for elasticity—provides the best TCO.

By keeping your primary data in Norway, you simplify GDPR compliance and ensure the lowest possible latency for your Nordic user base. Don’t let slow I/O kill your SEO or application responsiveness.

Ready to build a backbone that doesn't break? Deploy a high-performance NVMe instance on CoolVDS today and secure your data sovereignty.