Escaping Vendor Lock-in: A Hybrid Cloud Architecture for Norwegian Data Sovereignty

The "All-in-AWS" Trap: Why Local Presence Matters in 2016

Let’s address the elephant in the server room: The Safe Harbor agreement is dead. As of late 2015, the European Court of Justice invalidated the framework that allowed us to blindly trust US-hosted data centers with European citizen data. If you are a CTO operating in Norway today, relying 100% on a US-based cloud provider—even one with a region in Frankfurt or Ireland—is no longer just a technical decision. It is a legal risk.

Beyond the legal headache of the upcoming GDPR regulations (which Datatilsynet is already ramping up for), there is the issue of physics. We treat the "Cloud" as this ethereal concept, but it is just other people's computers. And those computers are subject to latency. If your customer base is in Oslo, Bergen, or Trondheim, routing every HTTP request to Frankfurt adds a 30-40ms round-trip penalty. In high-frequency trading or real-time bidding systems, that is an eternity.

The pragmatic solution isn't to abandon the cloud, but to adopt a Hybrid Strategy. Use the giants for what they are good at (limitless object storage, CDN), and use robust local infrastructure for what it is best at (transactional databases, low latency, and data sovereignty). Here is how we build it.

The Architecture: "Split-Stack" Hosting

In a recent architecture overhaul for a Norwegian fintech startup, we faced a classic dilemma: they needed the auto-scaling capability of AWS EC2 for their frontend during marketing blasts, but their legal team forbade storing customer credit profiles outside of Norwegian borders. The solution was a split-stack.

  • Frontend (Stateless): Auto-scaling groups in the public cloud.
  • Backend/Database (Stateful): High-performance NVMe KVM instances on CoolVDS in Oslo.
  • The Glue: A site-to-site VPN tunnel using StrongSwan.

Step 1: Establishing the Secure Tunnel

We don't need expensive proprietary hardware appliances. Linux handles IPsec natively, with the heavy lifting done in the kernel. In 2016, StrongSwan is the de facto standard for this. It is battle-tested and supports IKEv2.

On your CoolVDS instance (serving as the local gateway), the configuration in /etc/ipsec.conf looks like this. Note auto=start, which brings the tunnel up automatically whenever the daemon starts, so it survives reboots without manual intervention.

config setup
    charondebug="ike 2, knl 2, cfg 2"
    uniqueids=no

conn oslo-to-frankfurt
    type=tunnel
    auto=start
    keyexchange=ikev2
    authby=secret
    left=%defaultroute
    leftid=185.x.x.x  # Your CoolVDS Static IP
    leftsubnet=10.10.0.0/24
    right=52.x.x.x    # The Public Cloud Gateway IP
    rightsubnet=172.31.0.0/16
    # SHA1 and modp1024 are legacy; prefer stronger proposals when both ends support them
    ike=aes256-sha256-modp2048
    esp=aes256-sha256-modp2048
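
Since authby=secret is in play, both gateways also need a matching pre-shared key in /etc/ipsec.secrets. A minimal sketch, reusing the placeholder IPs from above (generate your own long random key, for example with openssl rand -base64 32):

# /etc/ipsec.secrets -- keep this file mode 600
185.x.x.x 52.x.x.x : PSK "replace-with-a-long-random-shared-secret"
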
Pro Tip: Don't use default MTU settings on VPN tunnels over the public internet. Fragmentation will kill your throughput. Clamp your MSS to 1350 in your iptables rules to account for IPsec overhead.
iptables -t mangle -A FORWARD -o eth0 -p tcp -m tcp --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 1350
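
Before you point any traffic at the tunnel, bring it up and verify it with the standard StrongSwan commands (the connection name matches the config above; the ping target is just a placeholder host inside the remote subnet):

ipsec restart                  # reload ipsec.conf and ipsec.secrets
ipsec up oslo-to-frankfurt     # establish the tunnel manually (auto=start handles this at boot)
ipsec statusall                # look for ESTABLISHED and the installed CHILD_SA
ping -c 3 172.31.10.50         # reachability test across the tunnel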

Step 2: Intelligent Routing with HAProxy 1.6

Now that the networks are bridged, we need to route traffic intelligently. We use HAProxy 1.6 (released late 2015) because it introduced Lua scripting and runtime DNS resolution, but for this setup, its raw TCP proxying speed is what matters.

We place HAProxy on the local node. It health-checks the local database cluster first; if the local node goes down (rare on dedicated NVMe resources), traffic fails over to a replica in the cloud. The same pattern works in reverse for data classes that are allowed to live in the public cloud.

Here is a snippet from haproxy.cfg optimized for low-latency SQL traffic:

listen mysql-cluster
    bind *:3306
    mode tcp
    option tcpka
    option mysql-check user haproxy_check
    balance roundrobin
    # Primary Node (CoolVDS - High I/O NVMe)
    server db-local-01 10.10.0.5:3306 check weight 100 inter 2000 rise 2 fall 5
    # Failover Node (Cloud - High Latency)
    server db-cloud-01 172.31.10.50:3306 check weight 1 backup
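
One gotcha with option mysql-check: HAProxy only performs the login handshake with the given username and no password, so that user must exist on both database nodes. It needs no privileges beyond the ability to connect; the host pattern below assumes HAProxy reaches both nodes from the 10.10.0.0/24 VPN subnet:

-- Run on db-local-01 and db-cloud-01
CREATE USER 'haproxy_check'@'10.10.0.%';
FLUSH PRIVILEGES;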

Notice the weight and backup parameters. The cloud replica is marked backup, so HAProxy only sends queries to it when the local primary fails its health checks; under normal operation every transaction lands on the CoolVDS node and its NVMe storage. In our benchmarks using sysbench, the I/O throughput on local NVMe consistently outperforms standard cloud block storage (EBS gp2) by roughly 3x unless you pay exorbitant fees for "Provisioned IOPS".
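
If you want to reproduce that comparison on your own hardware, a random read/write fileio run with sysbench (the 0.4/0.5 syntax shipped in most 2016 distros) looks roughly like this; the 16GB file set and 16 threads are illustrative values, not a prescription:

# Prepare the test files, run a 5-minute random read/write workload, then clean up
sysbench --test=fileio --file-total-size=16G prepare
sysbench --test=fileio --file-total-size=16G --file-test-mode=rndrw \
         --max-time=300 --max-requests=0 --num-threads=16 run
sysbench --test=fileio --file-total-size=16G cleanup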

Step 3: Data Safety & Replication

Running a database across a WAN link requires careful tuning. The speed of light is a hard constraint. For MySQL 5.7, we use Semi-Synchronous Replication: a commit is not acknowledged to the client until at least one replica has written the event to its relay log. That protects you from losing committed transactions if the master melts down, but it adds the WAN round-trip to every commit.
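
Enabling it on MySQL 5.7 is a plugin load plus a couple of variables on each side. A minimal sketch; the 1000ms timeout is only a starting point and should be tuned against your measured WAN round-trip:

-- On the master (the CoolVDS node)
INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
SET GLOBAL rpl_semi_sync_master_enabled = 1;
SET GLOBAL rpl_semi_sync_master_timeout = 1000;  -- ms; falls back to async rather than stalling

-- On the cloud replica
INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
SET GLOBAL rpl_semi_sync_slave_enabled = 1;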

To mitigate the latency hit, we tune the TCP stack on the CoolVDS instance in /etc/sysctl.conf:

# Increase TCP window size for high-latency links
net.ipv4.tcp_window_scaling = 1
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
# Enable TCP Fast Open (supported in kernel 3.13+)
net.ipv4.tcp_fastopen = 3
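
Load the new values without a reboot (tcp_fastopen = 3 enables Fast Open for both outgoing and incoming connections):

sysctl -p /etc/sysctl.conf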

The Cost of "Cloud" vs. Reality

Many DevOps engineers get seduced by the API capabilities of major cloud providers and ignore TCO (Total Cost of Ownership). Let’s look at a simple comparison for a database server with 4 vCPUs, 16GB RAM, and 500GB SSD.

Metric                 | Hyperscaler (Frankfurt)        | CoolVDS (Oslo)
Latency to NIX (Oslo)  | ~35ms                          | ~2ms
Storage Type           | Network Block Storage (Shared) | Local NVMe (Dedicated)
Data Sovereignty       | Unclear (US Jurisdiction)      | Norwegian Jurisdiction
Bandwidth Cost         | $$$ per GB outbound            | Included / Flat Rate

For the "Project Viking" fintech client, moving the core database to CoolVDS reduced their monthly bill by 40% simply by eliminating the egress bandwidth fees they were paying to replicate data between availability zones.

Managing the Hybrid Fleet with Ansible

Managing servers in two different locations manually is a recipe for disaster. We rely on Ansible (version 2.0 just dropped in January, and it's a massive improvement). We use a dynamic inventory script, but for smaller setups, a static host file separated by groups works perfectly.
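
For reference, a static inventory for this split-stack can be as simple as the following; the host names and cloud IPs are placeholders, but the db_servers group matches the condition used in the playbook below:

# /etc/ansible/hosts
[frontend_cloud]
web-cloud-01 ansible_host=52.x.x.10
web-cloud-02 ansible_host=52.x.x.11

[db_servers]
db-local-01 ansible_host=10.10.0.5   # CoolVDS node, reached over the VPN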

Here is how we ensure our security hardening is identical across both the CoolVDS local nodes and the cloud nodes:

---
- hosts: all
  become: yes
  tasks:
    - name: Ensure NTP is running (Critical for Replication)
      service:
        name: ntp
        state: started
        enabled: yes

    - name: Set Swappiness to 0 for Database Nodes
      sysctl:
        name: vm.swappiness
        value: 0
        state: present
      when: "'db_servers' in group_names"

    - name: Install Fail2Ban
      apt:
        name: fail2ban
        state: present
        update_cache: yes
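
Assuming the playbook is saved as site.yml and the inventory above as hosts, one command pushes the same hardening to both sides of the tunnel:

ansible-playbook -i hosts site.yml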

Conclusion: Own Your Core

The trend of 2016 is moving away from "Cloud First" to "Cloud Appropriate". It makes zero sense to host a database serving Norwegian customers on a server in Ireland if you are legally required to keep that data secure and your customers demand instant load times.

By using a hybrid approach, you gain the elasticity of the public cloud for your stateless frontend while maintaining the performance, security, and legal compliance of a dedicated local environment for your core data. CoolVDS provides that high-performance local anchor. We offer pure KVM virtualization—meaning no "noisy neighbors" stealing your CPU cycles, unlike the container-based VPS solutions flooding the market.

Is your data actually in Norway? Run a traceroute to your current database IP. If it hops through London or Stockholm, you are leaving latency on the table. Spin up a CoolVDS instance today and see what single-digit millisecond latency feels like.