The Safe Harbor Fallout: Architecting a Hybrid Cloud Strategy in Norway (2015 Edition)

October 2015 changed everything for European CTOs. When the European Court of Justice invalidated the Safe Harbor agreement (the Schrems ruling, formally Maximillian Schrems v. Data Protection Commissioner), the comforting illusion that we could blindly store Norwegian user data on US-controlled servers evaporated. If you are responsible for infrastructure in Oslo or Bergen, you are likely staring at a compliance roadmap that just got significantly more expensive.

But legal compliance is only half the battle. The other half is physics. While Amazon Web Services (AWS) offers infinite scale in Frankfurt or Dublin, the speed of light remains constant. Round-trip latency from Oslo to Frankfurt usually sits around 25-30ms. For a high-frequency trading application or a real-time bidding server, that is an eternity.
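The physical floor is easy to estimate: light in fiber propagates at roughly 200,000 km/s, and the fiber path from Oslo to Frankfurt is on the order of 1,100 km (an assumed figure; real routes are longer). A quick back-of-the-envelope check:

```shell
# Best-case round-trip time over ~1,100 km of fiber (assumed path length),
# with light propagating at roughly 200,000 km/s in glass.
awk 'BEGIN {
    d = 1100      # one-way fiber distance in km (assumption)
    v = 200000    # speed of light in fiber, km/s
    printf "min RTT: %.1f ms\n", 2 * d / v * 1000
}'
# Prints: min RTT: 11.0 ms
```

Measured RTTs of 25-30ms include routing detours and queuing on top of that, but even a perfect route cannot beat roughly 11ms. The gap to a local single-digit-millisecond hop never closes, no matter how much you pay your transit provider.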

The solution isn't to abandon the cloud; it is to stop treating it as a single vendor utility. The pragmatic approach for 2016 is the Hybrid Cloud: keeping your sensitive data and latency-critical logic on local, high-performance iron (like CoolVDS), while bursting stateless compute to the public cloud when traffic spikes.

The Architecture: Core vs. Burst

We see too many engineering teams trying to stretch a single VLAN across the Atlantic. It rarely works well. The VPN overhead alone kills your throughput. Instead, treat your infrastructure as two distinct zones:

  1. The Core (Norway): Holds the database master, customer PII (Personally Identifiable Information), and session state. This ensures data sovereignty and connects directly to the Norwegian Internet Exchange (NIX) for sub-2ms latency to local users.
  2. The Burst (Public Cloud): Stateless frontend workers, image processing queues, and dev/stage environments. These can be spun up and down based on demand.
Pro Tip: Do not rely on public cloud "dedicated" instances for your database unless you have money to burn. A generic vCPU often suffers from 20-30% steal time during peak hours. On a platform like CoolVDS, utilizing KVM virtualization ensures your CPU cycles are actually yours.
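You can measure steal time yourself instead of taking the hypervisor's word for it: the ninth field of the aggregate `cpu` line in `/proc/stat` counts jiffies stolen by the host. A minimal sketch that samples it over one second:

```shell
#!/bin/sh
# Sample CPU steal time over one second by diffing /proc/stat.
# Field 9 of the "cpu" line is steal; fields 2..NF sum to total jiffies.
s1=$(awk '/^cpu /{print $9}' /proc/stat)
t1=$(awk '/^cpu /{s=0; for (i=2; i<=NF; i++) s+=$i; print s}' /proc/stat)
sleep 1
s2=$(awk '/^cpu /{print $9}' /proc/stat)
t2=$(awk '/^cpu /{s=0; for (i=2; i<=NF; i++) s+=$i; print s}' /proc/stat)
awk -v s="$((s2 - s1))" -v t="$((t2 - t1))" \
    'BEGIN { printf "steal: %.1f%%\n", (t > 0) ? 100 * s / t : 0 }'
```

Anything consistently above a few percent means you are sharing your "dedicated" core with noisier neighbors than advertised. `top` reports the same figure as `%st`.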

Orchestration with Ansible

Managing two environments requires a unified tool. Puppet and Chef are powerful but complex. In 2015, Ansible has emerged as the clear winner for hybrid setups because it is agentless. You don't need to install a daemon on your cloud instances; you just need SSH.

Here is how you define a hybrid inventory in /etc/ansible/hosts to manage both your local CoolVDS instances and AWS nodes simultaneously:

[core-norway]
db-master-01 ansible_ssh_host=185.x.x.x
redis-01 ansible_ssh_host=185.x.x.y

[burst-cloud]
frontend-aws-01 ansible_ssh_host=54.x.x.x
frontend-aws-02 ansible_ssh_host=54.x.x.y

[all:vars]
ansible_python_interpreter=/usr/bin/python2.7

With this simple text file, you can push security patches to your local database and your cloud web servers with a single command:

ansible all -m yum -a "name=openssl state=latest" -u root
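Ad-hoc commands are fine for one-offs; anything you expect to repeat belongs in a playbook. A minimal sketch of the same OpenSSL update as a playbook (the file and task names are illustrative):

```yaml
# patch-openssl.yml - push the same update to both zones
- hosts: all
  remote_user: root
  tasks:
    - name: Ensure the latest OpenSSL is installed
      yum: name=openssl state=latest
```

Run it with `ansible-playbook patch-openssl.yml`. The playbook version is idempotent and lives in version control, which matters once your hybrid inventory grows past a handful of hosts.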

The Network Glue: HAProxy

To make this work, you need intelligent load balancing. We recommend HAProxy 1.5. It is robust, free, and handles SSL termination better than ELB in many custom scenarios. You should place an HAProxy node at the edge of your Norwegian infrastructure.

If your local frontend servers get overwhelmed, HAProxy can spill traffic over to the cloud. Here is a snippet of an haproxy.cfg configured for this "overflow" behavior:

backend web_farm
    mode http
    balance roundrobin
    option httpchk HEAD /health HTTP/1.0
    
    # Primary Local Servers (CoolVDS - Low Latency)
    server web01 10.10.1.10:80 check weight 100
    server web02 10.10.1.11:80 check weight 100

    # Backup Cloud Servers (AWS - Higher Latency, only used if local is full)
    server cloud01 54.21.x.x:80 check backup
    server cloud02 54.21.x.y:80 check backup

The backup directive is critical here. Traffic stays in Norway (fast, cheap bandwidth) until your local servers fail health checks, at which point it seamlessly routes to the cloud.
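For completeness, the backend above needs a frontend to feed it. A minimal stanza for HAProxy 1.5 terminating SSL at the Norwegian edge (the certificate path is an assumption; adjust to wherever you keep your combined PEM file):

```
frontend www_edge
    bind *:80
    bind *:443 ssl crt /etc/haproxy/certs/site.pem
    mode http
    default_backend web_farm
```

Native SSL termination is new in the 1.5 branch, which is one of the reasons we recommend it over the older 1.4 series.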

Data Sovereignty: The Database Layer

This is where the Safe Harbor ruling hits hard. Configure your database so that writes happen in Norway. If you are running MySQL 5.6 or MariaDB 10 (and you should be on one of the two), use row-based replication.

Optimize your `my.cnf` to prioritize data safety on the master node. Unlike the cloud where ephemeral storage can vanish, CoolVDS instances provide persistent NVMe storage. We can afford to be stricter with consistency:

[mysqld]
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
binlog_format = ROW

# Reliability Settings
innodb_flush_log_at_trx_commit = 1
sync_binlog = 1

# Performance on NVMe
innodb_io_capacity = 2000
innodb_io_capacity_max = 4000
innodb_flush_method = O_DIRECT

The `innodb_io_capacity` settings above are specific to SSD/NVMe storage. If you try these settings on a standard rotational HDD or a throttled cloud volume, your IO wait will spike, and the server will crawl.
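If you run a cloud-side replica for read scaling, the sovereignty argument cuts the other way: make sure it can never accept a write. A sketch of the replica's `my.cnf` under the same MySQL 5.6 assumptions:

```
[mysqld]
server-id = 2
relay_log = /var/log/mysql/mysql-relay-bin.log
read_only = 1

# The master in Norway holds the authoritative copy, so the replica
# can afford relaxed flushing on slower cloud storage.
innodb_flush_log_at_trx_commit = 2
```

One caveat: in MySQL 5.6, `read_only` does not restrict accounts with the SUPER privilege, so keep your application accounts unprivileged on the replica.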

Security: Tunnelling Between Clouds

Never expose your database port (3306) to the public internet. Since we are avoiding expensive hardware VPN appliances, use an SSH tunnel with autossh for a persistent, encrypted link between your cloud frontend and your local backend.

Create a systemd service file (systemd is the default init system on CentOS 7) to keep the tunnel alive:

[Unit]
Description=AutoSSH Tunnel for MySQL
After=network.target

[Service]
Environment="AUTOSSH_GATETIME=0"
ExecStart=/usr/bin/autossh -M 0 -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" -i /home/user/.ssh/id_rsa -N -L 3306:127.0.0.1:3306 user@185.x.x.x
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target

This securely forwards port 3306 on the cloud web server's loopback interface to the MySQL instance on the CoolVDS database server; your frontend application simply connects to 127.0.0.1:3306 as if the database were local. Note that the `-i` flag must come before the destination, since OpenSSH treats anything after the hostname as a remote command.

Benchmarking the Difference

Theory is fine, but IOPS (Input/Output Operations Per Second) pay the bills. When you select a "Standard" instance on major cloud providers, you often get capped at 300-500 IOPS unless you pay for "Provisioned IOPS."

We ran a standard fio random write test (4k block size) on a CoolVDS NVMe instance versus a standard cloud SSD volume:

fio --name=randwrite --ioengine=libaio --iodepth=1 --rw=randwrite --bs=4k --direct=1 --size=1G --numjobs=1 --runtime=60 --group_reporting

The Results:

| Metric | Standard Public Cloud | CoolVDS (Local NVMe) |
| --- | --- | --- |
| IOPS | ~450 | ~12,500 |
| Latency | 3ms - 15ms (jittery) | 0.1ms (consistent) |
| Monthly Cost | $$$ (bandwidth fees) | $ (flat rate) |

The Verdict

The cloud is a powerful tool, but in late 2015, the legal and technical landscape demands a smarter approach than "all-in on AWS." The invalidation of Safe Harbor is a wake-up call to bring your data home.

By leveraging a hybrid architecture, you satisfy the lawyers by keeping PII in Norway, you satisfy the users by cutting latency, and you satisfy the CFO by reducing bandwidth egress fees. Use the cloud for what it's good at—elasticity—and use CoolVDS for what we are good at: raw, unthrottled performance and stability.

Ready to test your local performance? Don't just guess. Spin up a KVM instance on CoolVDS today and run the fio command above. The numbers speak for themselves.