Escaping the Vendor Lock-In Trap: A Pragmatic Hybrid Cloud Strategy for 2017

The Public Cloud Hangover is Real

It is February 2017. For the last three years, the industry mantra has been "Cloud First." We migrated legacy monoliths to AWS EC2, we dumped assets into S3, and we patted ourselves on the back. But now the bills are arriving, and the CFO is asking why our monthly burn rate rivals the GDP of a small nation. Worse, the legal department is sweating over the General Data Protection Regulation (GDPR) enforcement looming next year, and the uncertainty surrounding the Privacy Shield agreement.

I have spent the last decade architecting systems across the Nordics, from high-frequency trading platforms in Oslo to content delivery networks serving the entire EU. The lesson is always the same: renting computers from a giant in Seattle or Dublin isn't always the answer.

The pragmatic solution isn't to abandon the cloud, but to stop treating it as a religion. We need a Multi-Cloud or Hybrid strategy. We need to place workloads where they belong based on three metrics: Latency, Legality, and IOPS/Cost ratio.

The Latency & Sovereignty Equation

If your user base is in Norway, routing every request through Frankfurt (AWS eu-central-1) or Ireland (eu-west-1) is physically inefficient. Light has a speed limit. Round-trip time (RTT) from Oslo to Frankfurt usually sits around 25-35ms. From Oslo to a local data center connected to NIX (Norwegian Internet Exchange)? Sub-2ms.
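
Don't take my word for it; measure it. A quick sketch with standard tools (the hostnames here are placeholders for your own endpoints):

# Compare round-trip times from a probe box in Oslo.
ping -c 20 ec2.eu-central-1.amazonaws.com   # Frankfurt region endpoint
ping -c 20 your-vds.example.no              # hypothetical NIX-connected server

# mtr adds per-hop latency and loss, handy for spotting bad peering
mtr --report --report-cycles 20 your-vds.example.no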

Then there is the data sovereignty issue. With Datatilsynet (The Norwegian Data Protection Authority) ramping up scrutiny, keeping sensitive customer databases physically within Norwegian borders is the safest hedge against regulatory flux. This is where a "Split-Architecture" shines.

The Architecture: Public Front, Private Core

The most robust setup I deploy today involves using public cloud for what it is good at—global content distribution and bursty frontend scaling—while keeping the heavy, stateful logic on high-performance local infrastructure like CoolVDS.

Pro Tip: Public cloud instances often throttle disk I/O (IOPS) unless you pay extra for "Provisioned IOPS." A dedicated KVM slice with direct NVMe access will outperform a standard cloud instance by a factor of ten on database workloads, at a fraction of the cost.
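
If you want to verify the IOPS gap yourself, fio is the standard tool. A minimal random-read sketch, assuming a Debian/Ubuntu box; point --filename at the disk you actually want to measure:

# 4k random reads, 60 seconds, bypassing the page cache
apt-get install -y fio
fio --name=randread --filename=/mnt/data/fio.test --size=4G \
    --rw=randread --bs=4k --iodepth=32 --ioengine=libaio \
    --direct=1 --runtime=60 --time_based --group_reporting

Run the same command on your cloud instance and on your VDS, then compare the reported IOPS. Remember to delete the 4G test file afterwards.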

Step 1: The Traffic Director (HAProxy)

To orchestrate this, we don't rely on proprietary load balancers like ELB. We use HAProxy. It allows us to split traffic based on rules, not just availability zones. Here is a production-ready snippet for haproxy.cfg (v1.6) that routes static assets to a cloud bucket/CDN and dynamic requests to our high-performance CoolVDS backend in Oslo.

global
    log /dev/log    local0
    log /dev/log    local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    timeout connect 5000
    timeout client  50000
    timeout server  50000

frontend http_front
    bind *:80
    # ACL to identify static assets
    acl is_static path_end -i .jpg .gif .png .css .js
    
    # Route to Cloud if static, otherwise Local NVMe Backend
    use_backend cloud_cdn if is_static
    default_backend coolvds_norway_core

backend cloud_cdn
    mode http
    http-request set-header Host my-bucket.s3-website-eu-central-1.amazonaws.com
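    # Caveat: HAProxy resolves this hostname once at startup, and S3 IPs
    # rotate. For production, define a 'resolvers' section (new in 1.6)
    # and add 'resolvers <name>' to the server line below.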
    server s3_backend my-bucket.s3-website-eu-central-1.amazonaws.com:80 check

backend coolvds_norway_core
    mode http
    balance roundrobin
    option httpchk GET /health HTTP/1.1\r\nHost:\ www.myapp.no
    # Low latency connection to local backend
    server app01 10.10.20.5:8080 check inter 2000 rise 2 fall 3
    server app02 10.10.20.6:8080 check inter 2000 rise 2 fall 3
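
Always lint the file before reloading; a syntax error in haproxy.cfg takes down both backends at once. A typical check-and-reload sequence:

# Validate the configuration without touching the running process
haproxy -c -f /etc/haproxy/haproxy.cfg

# Graceful reload: -sf tells the old process to finish in-flight requests
haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf $(cat /run/haproxy.pid)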

Step 2: Infrastructure as Code (Terraform)

Managing two providers manually is a nightmare. In 2017, the tool of choice is HashiCorp's Terraform. While version 0.8 is still maturing, it is stable enough for production if you pin your versions. Do not use GUI panels; if it isn't in git, it doesn't exist.

We define our resources in a single main.tf, utilizing different providers. This allows us to spin up the cheap storage on AWS and the high-power compute on CoolVDS (via OpenStack/KVM provider or generic remote-exec) simultaneously.

# terraform 0.8 syntax

provider "aws" {
  region = "eu-central-1"
}

# Defining the Off-site Backup / Static Storage
resource "aws_s3_bucket" "static_assets" {
  bucket = "company-assets-prod-2017"
  acl    = "public-read"

  tags {
    Environment = "Production"
    Location    = "Frankfurt"
  }
}

# Note: For non-AWS VPS, we often use the 'null_resource' with provisioners 
# if a direct provider isn't available, or the OpenStack provider if supported.
resource "null_resource" "coolvds_provisioner" {
  triggers {
    instance_id = "${var.coolvds_instance_id}"
  }

  connection {
    type        = "ssh"
    user        = "root"
    private_key = "${file("~/.ssh/id_rsa")}"
    host        = "${var.coolvds_ip}"
  }

  provisioner "remote-exec" {
    inline = [
      "apt-get update",
      "apt-get install -y nginx"
    ]
  }
}
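
Since 0.8 state files are not forward-compatible, the whole team should run the same pinned binary. A minimal day-to-day workflow looks like this:

# Confirm everyone is on the pinned release
terraform --version          # expect Terraform v0.8.x

# Review the diff, then apply exactly what was reviewed
terraform plan -out=tfplan
terraform apply tfplan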

Step 3: Database Performance Tuning on NVMe

The primary reason to repatriate your database from the cloud to a provider like CoolVDS is disk performance. Shared cloud storage often suffers from "noisy neighbor" latency spikes. With local NVMe storage, we can push MySQL or MariaDB much harder.

However, you must tune `my.cnf` to actually utilize this speed. Default configs assume spinning rust (HDDs).

[mysqld]
# Optimize for NVMe / High IOPS
innodb_io_capacity = 2000
innodb_io_capacity_max = 4000
innodb_flush_neighbors = 0

# Memory handling
innodb_buffer_pool_size = 4G
innodb_log_file_size = 512M

# Data Integrity
innodb_flush_log_at_trx_commit = 1
sync_binlog = 1

Setting innodb_flush_neighbors = 0 is critical on SSD/NVMe. Neighbor flushing only pays off on rotational drives, where writing adjacent dirty pages together saves head seeks; on flash it just wastes CPU cycles and I/O.
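
After a clean restart (InnoDB resizes the redo log automatically on 5.6.8 and later), verify the settings actually took effect. A quick sanity check from the shell:

# Confirm the running values match my.cnf
mysql -e "SHOW VARIABLES LIKE 'innodb_flush_neighbors';"
mysql -e "SHOW VARIABLES LIKE 'innodb_io_capacity%';"

# The LOG section of the InnoDB status shows checkpoint pressure
mysql -e "SHOW ENGINE INNODB STATUS\G" | grep -A 5 '^LOG'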

The Cost & Performance Breakdown

Let us look at a real-world comparison for a typical Magento shop or a high-traffic media site targeting Norway.

Feature          | Public Cloud (Standard)      | CoolVDS (KVM NVMe)
-----------------|------------------------------|--------------------------------
vCPU             | Shared / Burstable (Credits) | Dedicated Resources
Storage I/O      | ~300-3000 IOPS (Throttled)   | 10,000+ IOPS (NVMe)
Latency to Oslo  | 25ms+                        | < 2ms
Bandwidth Cost   | High egress fees             | Generous allowances
Jurisdiction     | US/EU Mixed                  | Norway (GDPR/Datatilsynet safe)

Security: The VPN Bridge

Running hybrid means you traverse the public internet. You cannot simply open port 3306 to the world. We use OpenVPN to bridge the gap. While IPsec is standard for enterprise, OpenVPN is often easier to manage for DevOps teams and penetrates NATs more reliably.
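
While the tunnel is being set up, make sure the database port is never reachable from the outside. A firewall sketch, assuming OpenVPN comes up as tun0 with the common 10.8.0.0/24 subnet:

# Allow MySQL only across the VPN tunnel, drop everything else on 3306
iptables -A INPUT -i tun0 -s 10.8.0.0/24 -p tcp --dport 3306 -j ACCEPT
iptables -A INPUT -p tcp --dport 3306 -j DROP

# Persist the rules across reboots (Debian/Ubuntu)
apt-get install -y iptables-persistent
netfilter-persistent save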

Ensure your server config enforces TLS authentication and disables weak ciphers. Here is a hardening snippet for server.conf:

tls-version-min 1.2
cipher AES-256-CBC
auth SHA256
tls-auth ta.key 0
user nobody
group nogroup
persist-key
persist-tun
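
The ta.key referenced above is a shared HMAC secret: generate it once on the server and copy it to every client over a secure channel. The trailing 0 is the key direction; clients use 1.

# Generate the shared tls-auth key on the server
openvpn --genkey --secret ta.key

# Client-side config references the same file with direction 1:
#   tls-auth ta.key 1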

Conclusion: Regain Control

The cloud is a tool, not a destination. By 2018, as GDPR comes into full force, the "put everything in US-owned clouds" strategy will become a liability for many Norwegian businesses. By leveraging CoolVDS for your core, data-heavy workloads, you gain speed, reduce costs, and simplify compliance, while still using the public cloud for what it is best at—serving static assets globally.

Don't let latency kill your conversion rates, and don't let egress fees kill your budget. Deploy a high-performance NVMe instance on CoolVDS today and test the ping difference yourself.