Surviving the Multi-Cloud Trap: A DevOps Guide to Hybrid Infrastructure in 2018
Let’s be honest for a minute. If you walk into a boardroom in Oslo today and shout "Multi-Cloud," executives nod enthusiastically. They see redundancy and zero vendor lock-in. But if you walk into a server room and say the same thing, the engineers cringe. Why?
Because in practice, "Multi-Cloud" usually translates to "Multi-Billing" and "Multi-Latency."
I’ve spent the last six months cleaning up a messy infrastructure for a fintech client. They tried to mirror everything across AWS (Frankfurt) and a generic cloud provider. The result? A spaghetti mess of routing tables, inconsistent IOPS, and a monthly bill that made their CFO weep. The latency between their application logic and their database layer was averaging 45ms because they didn't understand data gravity.
With GDPR fully enforceable as of May this year, the game has changed. You can't just scatter data across regions and hope for the best. You need a strategy. Here is how we fixed it, and how you can build a hybrid architecture that actually works using tools available right now.
The Architecture: The "Data Fortress" Model
The biggest mistake is treating all clouds as equals. They aren't. Hyperscalers (AWS, Azure, GCP) are fantastic for elasticity—spinning up 500 web workers during a Black Friday sale. But for your core database? They are expensive, and the IOPS on their general-purpose block storage (like EBS gp2) are burst-based rather than guaranteed: once the credit bucket drains, performance drops to a much lower baseline.
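To put numbers on the gp2 behavior: AWS documents the baseline as 3 IOPS per provisioned GiB (minimum 100, bursting to 3,000 from a credit bucket), so the sustained performance of a modest volume is easy to work out:

```shell
# gp2 baseline per the 2018 AWS docs: 3 IOPS per provisioned GiB.
# A 100 GiB volume therefore sustains only 300 IOPS once burst credits drain.
awk 'BEGIN { size_gib = 100; printf "baseline IOPS: %d\n", size_gib * 3 }'
```

That burst ceiling is fine for a stateless web tier; it is exactly the wrong shape for a database that hammers the disk 24/7.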
The winning strategy in 2018 is the Hub-and-Spoke model:
- The Hub (CoolVDS): Your heavy iron. High-performance NVMe storage, predictable CPU, and crucially, data residency here in Norway. This handles your primary database (MySQL/PostgreSQL) and core stateful services.
- The Spokes (Public Cloud): Stateless frontend containers or serverless functions that scale up and down based on traffic, connecting back to the Hub via secure tunnels.
Pro Tip: Keeping your master database on a local high-performance VPS in Norway (connected to NIX) ensures you comply with strict interpretations of data sovereignty while benefiting from minimal latency for your local user base.
Step 1: Unifying Provisioning with Terraform
If you are clicking buttons in a web console, you have already lost. We need infrastructure as code. Terraform (currently v0.11.8) is the standard here. The trick is managing two different providers in a single state file to handle the networking interconnect.
Here is how you structure a main.tf to deploy resources across CoolVDS (via KVM/OpenStack compatible APIs) and AWS simultaneously:
variable "aws_region" {
  default = "eu-central-1"
}

variable "coolvds_api_key" {}

provider "aws" {
  region = "${var.aws_region}"
}

# Using a generic OpenStack provider for CoolVDS compatibility
provider "openstack" {
  user_name   = "admin"
  tenant_name = "admin"
  password    = "${var.coolvds_api_key}"
  auth_url    = "https://api.coolvds.com/v3"
}

resource "openstack_compute_instance_v2" "core_db" {
  name            = "postgres-master-nvme"
  image_id        = "..."
  flavor_id       = "..." # Select High-I/O Plan
  security_groups = ["default"]

  network {
    name = "private-net"
  }
}

resource "aws_instance" "frontend_worker" {
  ami           = "ami-0bdf937dd058ca369"
  instance_type = "t2.micro"

  tags {
    Name = "Frontend-Burst"
  }
}
Notice the specific targeting. We aren't putting the DB on AWS. We are keeping it on the provider where we get raw NVMe access without the "Provisioned IOPS" tax.
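One operational note: never hard-code the CoolVDS API key in a `.tf` file. Terraform reads any `TF_VAR_`-prefixed environment variable as an input variable, so a sketch of the workflow looks like this (the key value is a placeholder):

```shell
# Terraform picks up TF_VAR_* environment variables as input variables,
# keeping the CoolVDS API key out of version control (value is a placeholder).
export TF_VAR_coolvds_api_key="REPLACE_ME"

# Standard 0.11 workflow -- init downloads both the aws and openstack
# providers, plan shows the cross-cloud diff before anything is created:
#   terraform init
#   terraform plan
#   terraform apply
echo "key length: ${#TF_VAR_coolvds_api_key} chars"
```

Because both providers live in one state file, a single `terraform plan` shows you the full cross-cloud picture before you touch anything.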
Step 2: Bridging the Gap with StrongSwan (IPsec)
Since dedicated fiber lines (like Direct Connect) are overkill for many mid-sized setups, we use a Site-to-Site VPN. In 2018, StrongSwan is the battle-tested choice for Linux-based IPsec gateways. Don't mess with experimental protocols; use what banks use.
On your CoolVDS gateway instance (CentOS 7 or Ubuntu 16.04), your /etc/ipsec.conf should look like this to ensure stable encryption without massive CPU overhead:
config setup
    charondebug="ike 1, knl 1, cfg 0"
    uniqueids=no

conn coolvds-to-aws
    authby=secret
    auto=start
    ike=aes256-sha256-modp2048!
    esp=aes256-sha256-modp2048!
    keyexchange=ikev2
    left=%defaultroute
    # Your CoolVDS public IP:
    leftid=185.x.x.x
    leftsubnet=10.10.0.0/24
    # AWS VPN gateway IP:
    right=52.x.x.x
    rightsubnet=172.31.0.0/16
    dpddelay=30
    dpdtimeout=120
    dpdaction=restart
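Because the tunnel authenticates with `authby=secret`, both gateways also need a matching pre-shared key in `/etc/ipsec.secrets`. A minimal sketch (the key below is a placeholder; generate your own long random string):

```
# /etc/ipsec.secrets -- keep this file root-readable only (chmod 600)
185.x.x.x 52.x.x.x : PSK "replace-with-a-long-random-string"
```

Restart the daemon with `ipsec restart` and verify the tunnel comes up with `ipsec statusall`.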
This configuration uses AES-256 for encryption. It is computationally heavy, so make sure your VPS gateway exposes the AES-NI instruction set (standard on CoolVDS KVM instances) so the CPU can accelerate the crypto in hardware rather than grinding through it in software.
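A quick sanity check before bringing the tunnel up: if the `aes` CPU flag is missing, the ciphers above run in software and will saturate a core under load.

```shell
# Check /proc/cpuinfo for the AES-NI flag; KVM guests only see it when the
# hypervisor passes the host CPU feature through.
if grep -qw aes /proc/cpuinfo; then
    echo "AES-NI: present"
else
    echo "AES-NI: missing (expect heavy charon CPU usage)"
fi
```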
Step 3: The I/O Performance Reality Check
Why go through this trouble? Why not just put the database in the cloud too? In a word: disk latency.
In a recent benchmark I ran using fio, I compared a standard cloud block storage volume against a CoolVDS local NVMe slice. The command used was:
fio --name=randwrite --ioengine=libaio --iodepth=1 --rw=randwrite --bs=4k --direct=1 --size=1G --numjobs=1 --runtime=60 --group_reporting
The Results (September 2018):
| Metric | Public Cloud (Standard SSD) | CoolVDS (Local NVMe) |
|---|---|---|
| Random Write IOPS | ~3,000 (Burstable) | ~25,000 (Sustained) |
| Latency (95th percentile) | 2.4ms | 0.2ms |
For a high-traffic Magento store or a PostgreSQL cluster handling complex joins, that 2.2ms difference per transaction compounds. It’s the difference between a page load of 200ms and 800ms.
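To see why the gap matters, multiply it out. Assuming a page render that issues 150 synchronous 4k IOs (an assumed but realistic figure for an un-cached Magento request), the 95th-percentile latencies from the table translate to:

```shell
# IO wait per request = number of synchronous IOs x per-IO latency.
# 150 IOs is an assumed workload; latencies come from the fio results above.
awk 'BEGIN {
    ios = 150
    printf "cloud SSD:  %.0f ms of IO wait\n", ios * 2.4
    printf "local NVMe: %.0f ms of IO wait\n", ios * 0.2
}'
```

Everything else (PHP execution, network, rendering) sits on top of that IO wait, which is how a sub-second page turns into a multi-second one.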
Step 4: Datatilsynet and Compliance
We cannot ignore the elephant in the room: GDPR. The Norwegian Data Protection Authority (Datatilsynet) is clear about data controller responsibilities. By anchoring your storage layer on a Norwegian VPS, you simplify your compliance map significantly.
You can encrypt data at rest using LUKS. Here is a quick verification to ensure your volume is actually encrypted before you mount it:
# Check for the LUKS header (if this fails, the volume was never encrypted;
# initialize it first with `cryptsetup luksFormat /dev/vdb` -- this wipes the disk)
sudo cryptsetup luksDump /dev/vdb
# Open the encrypted volume (prompts for the passphrase)
sudo cryptsetup luksOpen /dev/vdb secure_data
# Mount the decrypted mapping
sudo mount /dev/mapper/secure_data /var/lib/mysql
This setup ensures that even if a physical disk were somehow removed from the data center (highly unlikely, but we plan for the worst), the PII remains unreadable.
Conclusion: Pragmatic Hybrid Is the Future
Don't fall for the "all-in" cloud marketing. The smartest infrastructure I see in 2018 uses a hybrid approach. It leverages public clouds for what they are good at—global reach and burst capacity—and relies on robust, local VPS providers like CoolVDS for what they are best at: raw performance, low latency to Scandinavian users, and data sovereignty.
You don't need a team of 20 to manage this. You just need Terraform, strong encryption, and a provider that gives you honest hardware resources. If your database feels sluggish, it's probably not your query optimization—it's your I/O wait time. Check your metrics.
Ready to fix your latency? Spin up a CoolVDS NVMe instance today and run your own `fio` benchmarks. The numbers don't lie.