The Cloud Agnostic Myth vs. Reality
It is January 2018. We are barely four months away from the GDPR enforcement deadline of May 25th, and the panic in European boardrooms is palpable. If you are a CTO or a Lead Architect operating out of Oslo, you are likely facing a contradictory set of demands: "Move everything to the cloud for scalability" but also "Keep our customer data sovereign and compliant."
The hyperscalers (AWS, Azure, and Google Cloud) promise the world. But anyone who has received an unexpected bill for outbound data transfer or provisioned IOPS knows the honeymoon phase ends quickly. Total Cost of Ownership (TCO) spirals when you treat a cloud provider like a rental car that you drive 24/7.
The solution isn't to abandon the public cloud. It is to stop being a "cloud user" and start being a "cloud architect." By leveraging a Multi-Cloud strategy, you can use AWS for what it's good at (elastic compute and S3 storage) while keeping your core databases and sensitive workloads on high-performance, predictable infrastructure right here in Norway. This is the Hybrid Core approach.
Why Latency and Sovereignty Rule the North
Physics is stubborn. Even with AWS Frankfurt (eu-central-1) or the rumored upcoming Nordic regions from the major players, the latency penalty for local Norwegian traffic is real. Data packets traveling from Oslo to Frankfurt and back face a round-trip time (RTT) of roughly 25-35 ms. That sounds negligible until you have a Magento backend executing 50 sequential database queries per page load: at roughly 30 ms per round trip, that is about 1.5 seconds added to your Time to First Byte (TTFB).
Furthermore, the Datatilsynet (Norwegian Data Protection Authority) is making it very clear: knowing exactly where your data physically sits is not optional anymore. While the EU-US Privacy Shield is currently holding (barely), reliance on US-controlled infrastructure for storing fødselsnummer (national ID numbers) or health data is a risk many are unwilling to take.
The Architecture: The "Split-Brain" Setup
Here is a battle-tested architecture we are deploying for clients this quarter:
- The Frontend (AWS/Public Cloud): Auto-scaling groups of stateless web servers. They handle traffic bursts.
- The Backend (CoolVDS Norway): The database master (MySQL/PostgreSQL) and Redis cache. This runs on NVMe storage where I/O is consistent and doesn't cost extra.
- The Bridge: A persistent Site-to-Site VPN tunnel using IPsec. The frontend only ever reaches the backend through this tunnel, as sketched below.
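In practice, the "split-brain" simply means the application servers in AWS address the database over the tunnel's private addressing rather than any public endpoint. A minimal sketch of what that looks like from the frontend's side (the variable names and 10.20.x.x addresses are placeholders chosen to match the example subnets used later in this post):

# Environment for the AWS web servers (e.g. /etc/environment or your app's .env)
DATABASE_HOST=10.20.0.10   # MySQL master on the CoolVDS private subnet
DATABASE_PORT=3306
REDIS_HOST=10.20.0.11      # Redis cache, same side of the tunnel
# No public database endpoint exists; this traffic only flows through the IPsec tunnel.

The point is that nothing holding personal data ever listens on a public interface; the only ingress to the backend is the encrypted tunnel.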
Step 1: Infrastructure as Code with Terraform (v0.11)
We use HashiCorp's Terraform to manage this state. In version 0.11 (the current standard), we still have to wrap every reference in the "${...}" interpolation syntax, but it works. Do not manually click around the AWS console. If it isn't in code, it doesn't exist.
Here is how we define the AWS side of the bridge:
resource "aws_vpn_connection" "norway_bridge" {
vpn_gateway_id = "${aws_vpn_gateway.main.id}"
customer_gateway_id = "${aws_customer_gateway.coolvds_node.id}"
type = "ipsec.1"
static_routes_only = true
tags {
Name = "Oslo-Bridge-VPN"
}
}
resource "aws_vpn_connection_route" "office" {
destination_cidr_block = "10.20.0.0/16" # The CoolVDS private subnet
vpn_connection_id = "${aws_vpn_connection.norway_bridge.id}"
}
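The connection above references a virtual private gateway and a customer gateway that have to exist first. A minimal sketch of those two resources, assuming you already manage the VPC in the same Terraform state (the VPC reference, ASN, and IP below are placeholders):

resource "aws_vpn_gateway" "main" {
  vpc_id = "${aws_vpc.main.id}" # placeholder: your existing VPC resource

  tags {
    Name = "Oslo-Bridge-VGW"
  }
}

resource "aws_customer_gateway" "coolvds_node" {
  bgp_asn    = 65000        # required by the API even with static routing
  ip_address = "185.x.x.x"  # the CoolVDS public IP (matches leftid below)
  type       = "ipsec.1"
}

After terraform apply, the aws_vpn_connection resource exposes the two AWS tunnel endpoint addresses and their pre-shared keys as attributes; those are the values you feed into the StrongSwan configuration in Step 2.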
Step 2: The Tunnel (StrongSwan)
On the CoolVDS side, we aren't using a managed VPN gateway because we want full control and zero per-hour fees. We use a dedicated KVM instance running StrongSwan. It is robust, supports both IKEv1 and IKEv2, and is the de facto standard for Linux-based IPsec. (AWS's managed VPN endpoints currently negotiate IKEv1 only, which is why the configuration below sticks to it.)
Install StrongSwan on your CoolVDS instance:
apt-get update && apt-get install strongswan -y
Configure /etc/ipsec.conf to connect to the AWS VPN endpoints. Note the strict encryption proposals to match AWS requirements:
config setup
        charondebug="all"   # verbose; dial this back once the tunnel is stable
        uniqueids=yes

conn oslo-to-aws
        type=tunnel
        auto=start
        keyexchange=ikev1   # AWS managed VPN only speaks IKEv1 today
        authby=secret
        dpdaction=restart   # re-establish the tunnel if AWS drops it
        # CoolVDS local settings
        left=%defaultroute
        leftid=185.x.x.x            # Your CoolVDS public IP
        leftsubnet=10.20.0.0/16
        # AWS remote settings
        right=52.x.x.x              # AWS tunnel endpoint IP
        rightsubnet=172.31.0.0/16
        # Encryption proposals compatible with AWS in 2018 (trailing ! = strict)
        ike=aes128-sha1-modp1024!
        esp=aes128-sha1-modp1024!
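One thing ipsec.conf does not carry is the pre-shared key itself. With authby=secret, StrongSwan looks it up in /etc/ipsec.secrets; a minimal entry looks like this (the IPs are the same placeholders as above, and the key is the PSK AWS generated for the tunnel):

# /etc/ipsec.secrets
185.x.x.x 52.x.x.x : PSK "the-preshared-key-aws-generated-for-tunnel-1"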
Note: Ensure you enable IP forwarding in /etc/sysctl.conf by setting net.ipv4.ip_forward=1, otherwise packets will hit the server and die there.
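A quick way to apply the forwarding change without a reboot and then bounce the tunnel, assuming the stock Debian/Ubuntu strongswan package and its legacy ipsec command:

echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
sysctl -p                 # apply immediately
ipsec restart             # reload StrongSwan with forwarding active
ipsec status              # the oslo-to-aws conn should report ESTABLISHED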
The Storage Advantage: NVMe vs. EBS
This is where the "Pragmatic CTO" mindset kicks in. On AWS, General Purpose SSD (gp2) volumes are fine for boot disks. But if you need sustained high IOPS for a heavy MySQL workload, you are forced into Provisioned IOPS (io1) volumes, which are incredibly expensive. You pay for every IOPS, every month.
Pro Tip: CPU Steal is the silent killer of cloud databases. In a multi-tenant environment like EC2, "noisy neighbors" can steal CPU cycles, causing latency spikes in database transactions.
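You can check whether you are already a victim: hypervisor-stolen time shows up as the st column in vmstat (or %st in top). Anything consistently above a few percent during your database's busy hours is latency you are paying for but not getting:

vmstat 1 5   # watch the last column, "st": percentage of CPU time stolen by the hypervisor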
At CoolVDS, our architecture relies on local NVMe storage passed through via KVM. We don't throttle your IOPS to upsell you a higher tier. If the drive can do 50,000 IOPS, you get 50,000 IOPS. For a database server, this raw throughput is critical.
Benchmarking I/O: Fio
Don't take my word for it. Run fio on your current instance and compare it to a CoolVDS slice.
fio --name=randwrite --ioengine=libaio --iodepth=1 --rw=randwrite --bs=4k --direct=1 --size=1G --numjobs=1 --runtime=60 --group_reporting
On a standard gp2-backed cloud instance without provisioned IOPS, you will often see this cap out around the 3,000 IOPS burst limit. On local NVMe, you should expect numbers significantly higher, so database locks and commits clear far faster.
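The command above is a deliberately harsh test: queue depth 1, 4K random writes, the pattern that dominates database commit latency. If you also want the device's raw ceiling, a deeper queue gives a fairer throughput comparison (run either variant against a scratch file, never a live database volume):

fio --name=randread-deep --ioengine=libaio --iodepth=32 --rw=randread --bs=4k --direct=1 --size=4G --numjobs=4 --runtime=60 --group_reporting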
Compliance and the "NIX" Factor
By keeping the database layer on CoolVDS in Oslo, you achieve two things:
- GDPR Safety: The "crown jewels" (user data) reside physically in Norway. Even if the web servers in Frankfurt process the data, the storage at rest is within local jurisdiction.
- NIX Connectivity: CoolVDS is peered directly at NIX (Norwegian Internet Exchange). If your customers are Norwegian, their traffic hits our network almost immediately after leaving their ISP (Telenor, Telia, Altibox). Low hops mean low latency, and you can measure it yourself, as shown below.
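A quick sanity check from any Norwegian connection (replace the placeholder with your own instance's address; the exact numbers will depend on your ISP):

ping -c 10 <your-coolvds-instance-ip>                        # round-trip times, typically single-digit ms via NIX
mtr --report --report-cycles 20 <your-coolvds-instance-ip>   # hop count and per-hop latency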
Conclusion: Balance is Key
Going "All-In" on one provider is a strategy for 2015. In 2018, the smart move is hybrid. Use the massive scale of public clouds for your frontend delivery, CDNs, and sporadic compute tasks. But for the heavy liftingâthe databases, the storage, and the compliance-heavy workloadsâanchor your infrastructure on dedicated, high-performance VDS resources.
Your database deserves NVMe, and your budget deserves a break. Stop paying rent on IOPS.
Ready to secure your data in Norway before May 25th? Spin up a high-performance KVM instance on CoolVDS today and test the latency yourself.