It has been exactly three weeks since the GDPR enforcement date, and the panic is palpable. CTOs across Oslo are scrambling, realizing that keeping Norwegian customer data on US-controlled servers, even those in Frankfurt or Dublin, is a legal minefield waiting to detonate. I've spent the last decade architecting systems from Tromsø to Berlin, and if there is one lesson I have learned the hard way, it is this: vendor lock-in is a terminal illness for infrastructure.
Relying 100% on a hyperscaler like AWS or Google Cloud isn't just expensive; it's a single point of failure. We saw this recently with a client whose entire logistics platform went dark because of an S3 outage in US-East-1 that cascaded globally. Their SLA meant nothing when the trucks stopped moving. Today, we build a multi-cloud architecture that actually works: not the marketing fluff, but a battle-tested setup using Terraform and HAProxy.
The Architecture: The "Sovereign Core" Strategy
The most robust pattern for Norwegian businesses right now is the Hybrid approach. We keep the "State" (Databases, Customer Data) on high-performance, compliant infrastructure within Norway (CoolVDS), and we use the "Hyperscalers" for stateless, burstable compute.
Pro Tip: Latency matters. A round trip from Oslo to Frankfurt is ~25-30ms. From Oslo to a CoolVDS instance in Oslo? <2ms via NIX (Norwegian Internet Exchange). For database queries, that physics problem adds up fast.
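Do not take those numbers on faith; measure them from your own office or a test box. A rough sketch with standard tools (the hostnames here are placeholders for your actual endpoints):
# Round-trip time to a local CoolVDS instance vs. a Frankfurt endpoint
ping -c 10 db.example.no                    # expect low single-digit ms via NIX
ping -c 10 ec2.eu-central-1.amazonaws.com   # expect roughly 25-30ms from Oslo
# mtr shows per-hop latency and loss if the numbers look off
mtr --report --report-cycles 20 db.example.no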
Step 1: The Data Fortress (CoolVDS)
Your database does not belong on ephemeral cloud storage where IOPS costs bleed you dry. We deploy the primary database on a CoolVDS KVM instance. Why? Because we need raw NVMe performance without the "noisy neighbor" effect common in oversubscribed public clouds, and because we need to satisfy Datatilsynet (the Norwegian Data Protection Authority) by guaranteeing the physical location of the data.
Here is a production-ready my.cnf snippet for a MariaDB 10.2 server (standard in 2018) running on a 16GB RAM CoolVDS instance. Note the buffer pool sizing:
[mysqld]
# InnoDB Optimization for SSD/NVMe
innodb_buffer_pool_size = 12G
innodb_log_file_size = 1G
innodb_flush_log_at_trx_commit = 1 # ACID compliance is non-negotiable here
innodb_flush_method = O_DIRECT
innodb_io_capacity = 2000 # Unleashing the NVMe potential
innodb_io_capacity_max = 4000
# Connection Handling
max_connections = 500
wait_timeout = 600
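After restarting MariaDB, confirm the values actually took effect before you trust them. A quick sanity check from the shell, assuming the mysql client and root credentials are at hand:
# Verify the running values match my.cnf
mysql -u root -p -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"
mysql -u root -p -e "SHOW VARIABLES LIKE 'innodb_flush%';"
# Keep an eye on buffer pool efficiency under real load
mysql -u root -p -e "SHOW ENGINE INNODB STATUS\G" | grep -A 8 "BUFFER POOL AND MEMORY"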
Step 2: The Stateless Frontline (Public Cloud)
We use AWS or DigitalOcean for the frontend application servers. These are disposable. If a region goes down, we spin them up elsewhere via Terraform. However, they must connect securely back to our Core database in Norway.
Since we are in mid-2018, Terraform v0.11 is our tool of choice. It lacks the loop refinements of future versions, but it gets the job done. Here is how we provision the frontend nodes, ensuring they have the correct security groups to talk to our VPN gateway:
# main.tf (Terraform v0.11)
provider "aws" {
  region = "eu-central-1"
}
resource "aws_instance" "frontend_node" {
count = 3
ami = "ami-0bdf93799014acdc4" # Ubuntu 16.04 LTS
instance_type = "t2.medium"
tags {
Name = "Frontend-Worker-${count.index + 1}"
Role = "Stateless-Compute"
}
# Bootstrap script to install VPN client
user_data = "${file("bootstrap_vpn.sh")}"
}
resource "aws_security_group" "allow_vpn_tunnel" {
name = "allow_vpn_tunnel"
description = "Allow traffic from CoolVDS Gateway"
ingress {
from_port = 1194
to_port = 1194
protocol = "udp"
cidr_blocks = ["185.x.x.x/32"] # Strictly limit to your CoolVDS IP
}
}
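The workflow itself is vanilla Terraform v0.11: review the plan by hand, then apply exactly what you reviewed. Something along these lines (the plan filename is arbitrary):
terraform init                      # fetch the AWS provider plugin
terraform plan -out=frontend.plan   # review the three instances and the security group
terraform apply frontend.plan       # apply exactly the reviewed plan
# When the traffic spike is over, tear the burst nodes back down
terraform destroy -target=aws_instance.frontend_node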
Connecting the Clouds: The VPN Bridge
You cannot expose your database to the public internet. Period. We establish a site-to-site VPN using OpenVPN (IPsec is an alternative, but OpenVPN is often easier to debug in dynamic environments). The CoolVDS instance acts as the VPN Server, and the cloud instances act as clients.
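On the cloud side, the bootstrap boils down to installing OpenVPN and pointing it at the gateway. Here is a minimal, hypothetical sketch of what that bootstrap_vpn.sh user_data script could look like on Ubuntu 16.04; certificate distribution is deliberately left to your own config management:
#!/bin/bash
# Hypothetical bootstrap sketch - adjust paths and secrets handling to your environment
apt-get update && apt-get install -y openvpn

# Client config: dial out to the CoolVDS gateway on UDP 1194
cat > /etc/openvpn/client.conf <<'EOF'
client
dev tun
proto udp
# Your CoolVDS gateway IP
remote 185.x.x.x 1194
ca /etc/openvpn/ca.crt
cert /etc/openvpn/client.crt
key /etc/openvpn/client.key
cipher AES-256-CBC
keepalive 10 60
persist-key
persist-tun
EOF

# Certificates and keys must arrive via your own provisioning, never baked into this script
systemctl enable openvpn@client
systemctl start openvpn@client
The 10.8.0.x addresses in the HAProxy backend below are the tunnel IPs handed out by the OpenVPN server, assuming the conventional 10.8.0.0/24 server subnet.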
Traffic Routing with HAProxy
To manage traffic between these nodes, we deploy HAProxy 1.8 on the CoolVDS side as an ingress controller for Norwegian traffic, ensuring local users get local speeds. If the load spikes, we offload to the cloud nodes.
Here is the critical haproxy.cfg configuration to manage this split. We use health checks to ensure we aren't sending traffic to a dead cloud node:
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    timeout connect 5000
    timeout client  50000
    timeout server  50000

frontend http_front
    bind *:80
    # Redirect to HTTPS
    redirect scheme https code 301 if !{ ssl_fc }

frontend https_front
    bind *:443 ssl crt /etc/haproxy/certs/site.pem
    default_backend app_servers

backend app_servers
    balance roundrobin
    option httpchk HEAD /health HTTP/1.1\r\nHost:\ example.com
    # Local CoolVDS Node (Primary - Low Latency)
    server local-node-1 127.0.0.1:8080 check weight 10
    # Remote Cloud Nodes (Burst - Via VPN Tunnel IPs)
    server aws-node-1 10.8.0.2:8080 check weight 5 fall 3 rise 2
    server aws-node-2 10.8.0.3:8080 check weight 5 fall 3 rise 2
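Once HAProxy is up, the admin socket declared in the global section lets you confirm that the health checks actually see the cloud nodes over the tunnel, and lets you drain a node before Terraform destroys it. For example, assuming socat is installed:
# Live status of every server in the app_servers backend (proxy, server, status columns)
echo "show stat" | socat stdio /run/haproxy/admin.sock | cut -d',' -f1,2,18 | grep app_servers
# Put a cloud node into maintenance before tearing it down
echo "set server app_servers/aws-node-1 state maint" | socat stdio /run/haproxy/admin.sock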
Why Hybrid is the Only Logical Choice
Pure cloud is expensive and legally risky. Pure on-premise is hard to scale. By placing your persistence layer on CoolVDS, you gain:
- Data Sovereignty: Your data physically resides in Norway, complying with the strictest interpretation of GDPR.
- Performance: NVMe storage on CoolVDS KVM instances outperforms network-attached block storage found in most basic cloud tiers.
- Cost Control: You pay a flat rate for your core infrastructure. You only pay variable costs for the burst compute during Black Friday or tax season.
We are seeing more devs moving away from the "all-in" AWS mentality. The tools we have in 2018 (Terraform, Ansible, and robust VPNs) make the hybrid model not just possible, but superior. Do not let lazy architecture compromise your compliance or your budget.
Ready to secure your data sovereignty? Deploy your Core Node on a CoolVDS NVMe instance today. It takes 55 seconds to provision, and latency to NIX is negligible. Build your fortress here, then expand anywhere.