Escaping the Vendor Trap: A Pragmatic Hybrid Cloud Architecture for Norwegian Enterprises

Multi-Cloud Reality Check: The "Norwegian Fortress" Strategy

Let's be honest: the "All-in-Cloud" dream sold by Amazon and Google is starting to show cracks. While the agility of EC2 is undeniable, the TCO (Total Cost of Ownership) creeps up on you like a silent process killer. More importantly, in a post-Snowden world (it's been a year, and we are still reeling), relying solely on US-hosted infrastructure is a legal nightmare waiting to happen for Norwegian businesses. Data sovereignty is no longer a buzzword; it is a business requirement.

As a CTO, your job isn't just to keep the lights on; it's to ensure you aren't held hostage by a single vendor's API or pricing model. The solution isn't to abandon the cloud, but to adopt a Hybrid Strategy. We keep the elastic compute where it's cheap, and the critical data where it's safe—right here in Norway.

In this guide, I will walk you through a battle-tested architecture we recently deployed for a finance client in Oslo. We utilized a mix of commodity cloud instances for frontend traffic and CoolVDS high-performance KVM instances for the backend data store, linked via secure tunnels.

The Architecture: Split-Stack Deployment

The concept is simple: Stateless on the Edge, Stateful at the Core.

  • The Edge: Nginx/Apache web servers scaling on Amazon EC2 or DigitalOcean. They handle the public traffic.
  • The Core: Database (MySQL/MariaDB) and Application Logic hosted on CoolVDS in Norway.
  • The Bridge: A mesh of OpenVPN tunnels gluing it all together.

Why this setup? Latency. If your users are in Norway, fetching data from `us-east-1` or even Ireland is wasteful. By hosting the database on CoolVDS, you leverage the Norwegian Internet Exchange (NIX) peering, dropping latency to sub-5ms for local users. Plus, your customer data physically resides on disks in Oslo, satisfying Datatilsynet requirements.

Step 1: The Secure Bridge (OpenVPN)

Don't rely on public IPs for database traffic. We need a private network. While IPsec is standard, OpenVPN on Linux is far easier to manage and debug. We set up the CoolVDS instance as the Server and the cloud nodes as Clients.

Here is a robust server.conf for CentOS 6/7 (since CentOS 7 just dropped this month, make sure you check your firewalld settings!):

port 1194
proto udp
dev tun
ca ca.crt
cert server.crt
key server.key
dh dh2048.pem
server 10.8.0.0 255.255.255.0
ifconfig-pool-persist ipp.txt
push "route 10.8.0.0 255.255.255.0"
keepalive 10 120
tls-auth ta.key 0
cipher AES-256-CBC   # High security for 2014 standards
comp-lzo
user nobody
group nobody
persist-key
persist-tun
status openvpn-status.log
verb 3

Pro Tip: Always use proto udp for VPNs carrying TCP traffic (like HTTP or MySQL). TCP-over-TCP causes the "TCP meltdown" effect, where the retransmission timers of the two stacked layers fight each other and throughput collapses under packet loss.
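
On each cloud node, the matching client config is short. A minimal sketch — the hostname vpn.example.no and the certificate filenames are placeholders; cipher and comp-lzo must match the server, and the tls-auth direction flips from 0 to 1 on the client side:

client
dev tun
proto udp
remote vpn.example.no 1194
resolv-retry infinite
nobind
persist-key
persist-tun
ca ca.crt
cert client01.crt
key client01.key
tls-auth ta.key 1
cipher AES-256-CBC
comp-lzo
verb 3

On a CentOS 7 server, don't forget to let the tunnel through firewalld: firewall-cmd --permanent --add-port=1194/udp followed by firewall-cmd --reload.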

Step 2: The Load Balancer (HAProxy 1.5)

HAProxy 1.5 was finally released as stable last month (June 2014), and it brings native SSL support! This is huge. We no longer need Stunnel or Nginx in front of HAProxy just for termination.

We deploy HAProxy on the CoolVDS side to distribute requests to our backend app servers. It acts as the shield. If the cloud frontends get DDoS'd, your core data remains isolated.

global
    log 127.0.0.1 local0
    maxconn 4096
    user haproxy
    group haproxy
    daemon

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    retries 3
    option redispatch
    timeout connect 5000
    timeout client  50000
    timeout server  50000

frontend http_front
    bind *:80
    # ACL to block common bot patterns
    acl is_bot hdr_sub(User-Agent) -i curl wget
    http-request deny if is_bot
    default_backend app_servers

backend app_servers
    balance roundrobin
    # Checks are crucial. If a node dies, remove it instantly.
    option httpchk GET /health_check.php
    server web01 10.8.0.2:80 check
    server web02 10.8.0.3:80 check
    server local_backup 127.0.0.1:8080 check backup
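
Since 1.5 terminates TLS natively, the frontend above extends naturally to HTTPS. A sketch — the PEM bundle path and the stats credentials are illustrative, and binding the stats page to the VPN address keeps it off the public internet:

frontend https_front
    bind *:443 ssl crt /etc/haproxy/certs/site.pem
    reqadd X-Forwarded-Proto:\ https
    default_backend app_servers

listen stats
    bind 10.8.0.1:8404
    stats enable
    stats uri /haproxy?stats
    stats auth admin:changeme

The crt argument expects certificate and private key concatenated into a single file, which is the main packaging difference coming from Nginx or Stunnel.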

Step 3: Database Consistency (MariaDB Galera)

Replication across a WAN (Wide Area Network) is risky. Standard MySQL async replication can lead to data loss if the master fails. In 2014, the best solution for multi-master setups is MariaDB Galera Cluster.

We run a 3-node cluster. Two nodes on CoolVDS (for quorum and speed) and one arbitrator node elsewhere. This ensures that writes are confirmed and consistent. The latency between CoolVDS nodes is negligible (often on the same virtual switch), ensuring high transaction throughput.
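
The arbitrator is the stock garbd daemon: it votes in quorum decisions but stores no data, so it runs happily on the smallest instance you can find. On CentOS its settings live in /etc/sysconfig/garb; a sketch, with addresses matching the example cluster below:

# /etc/sysconfig/garb
GALERA_NODES="10.8.0.1:4567 10.8.0.5:4567"
GALERA_GROUP="norway_cluster"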

Check your server.cnf configuration. The innodb_flush_log_at_trx_commit setting is the trade-off between ACID compliance and raw speed. On CoolVDS SSD instances, we can afford to be strict.

[mysqld]
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
innodb_doublewrite=1
innodb_flush_log_at_trx_commit=1  # Strict ACID; the trade-off discussed above
query_cache_size=0
innodb_buffer_pool_size=4G  # Adjust based on your VPS RAM

# Galera Provider Configuration
wsrep_on=ON
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address="gcomm://10.8.0.1,10.8.0.5,10.8.0.6"
wsrep_cluster_name="norway_cluster"
wsrep_node_address="10.8.0.1"
wsrep_sst_method=rsync

Why KVM Matches the "Pragmatic" Philosophy

You might ask, why not just use OpenVZ containers? They are cheaper. The answer is isolation. In a shared kernel environment (OpenVZ), a neighbor under DDoS attack can affect your system's stability. KVM (Kernel-based Virtual Machine) provides true hardware virtualization.

At CoolVDS, we enforce KVM usage for all business-critical tiers. When you are running a database that requires consistent I/O operations per second (IOPS), you cannot afford the "noisy neighbor" effect common in budget hosting. We utilize pure SSD RAID-10 arrays. While they aren't the Fusion-io PCIe cards you see in supercomputers, modern enterprise SSDs saturate the SATA3 bus, removing the I/O bottleneck that plagued VPS hosting just two years ago.
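
Not sure what your current provider actually runs? A rough detection sketch — these are heuristics only, and /proc layouts vary by kernel and platform:

```shell
#!/bin/sh
# Crude check: OpenVZ containers expose /proc/vz but not /proc/bc
# (the hardware node has both); KVM/Xen/VMware guests usually show
# the "hypervisor" flag in /proc/cpuinfo.
if [ -d /proc/vz ] && [ ! -d /proc/bc ]; then
    virt="OpenVZ container (shared kernel)"
elif grep -qi hypervisor /proc/cpuinfo 2>/dev/null; then
    virt="Hardware-virtualized guest (KVM/Xen/VMware)"
else
    virt="Bare metal, or virtualization not detected"
fi
echo "$virt"
```

If the first branch fires, you are sharing a kernel with your neighbors, whatever the sales page says.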

Performance Benchmark: The "dd" Test

It's crude, but it works for a quick sanity check of your disk write speed. Run this on your current host versus a CoolVDS instance:

dd if=/dev/zero of=testfile bs=1G count=1 oflag=dsync

If you see anything less than 200 MB/s, your database will choke under load. On our recent CoolVDS deployment, we consistently hit 450+ MB/s, which is near the theoretical limit of the interface.
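
To make the test repeatable without hammering a production disk with a full 1 GB write, a small wrapper helps. This sketch assumes GNU dd (Linux); conv=fdatasync flushes once at the end rather than after every block, so it is gentler than oflag=dsync while still defeating the page cache. The 64 MB size is arbitrary — scale it up for a serious measurement:

```shell
#!/bin/sh
# Write a 64 MB test file, force it to disk, and report dd's summary line.
TESTFILE="ddtest.$$"
SUMMARY=$(dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1)
rm -f "$TESTFILE"
echo "$SUMMARY"
```

Run it a few times at different hours of the day; a single sample tells you nothing about how the host behaves when your neighbors get busy.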

Local Nuances: The Norwegian Advantage

Finally, let's talk about the law. Under the EU Data Protection Directive, transferring personal data outside the EEA is complex. While "Safe Harbor" currently allows transfers to the US, political pressure is mounting, and many legal experts predict it won't last forever. Hosting your primary user database on CoolVDS in Norway bypasses this headache entirely. Your data stays on Norwegian soil, protected by Norwegian law, powered by clean Norwegian hydroelectricity.

Stability, Speed, Sovereignty. That is the pragmatic choice.

Ready to secure your infrastructure? Don't wait for a compliance audit to force your hand. Deploy a KVM SSD instance on CoolVDS today and build your fortress.