Escaping the Vendor Trap: A Pragmatic Multi-Provider Strategy for 2014

The Cloud is Not a Charity: Reclaiming Control in a Hybrid World

Let’s be honest with ourselves. The rush to "the cloud" over the last three years has left many CTOs with a hangover. We were promised infinite scalability and pay-as-you-go utility. What we got were opaque billing cycles, variable latency that kills user experience, and the terrifying realization that our entire business logic is proprietary to a single vendor's API.

It is May 2014. We have witnessed the fallout of the NSA leaks. We know that data sovereignty isn't just a legal checkbox for Datatilsynet (The Norwegian Data Protection Authority); it is a competitive advantage. If your data sits on a US-controlled hypervisor, do you really own it?

I am not advocating for a return to racking your own 1U servers in a basement. I am advocating for a Multi-Provider Strategy. This is the architecture where you place your stateless logic on elastic clouds but keep your core data and I/O-heavy applications on high-performance, predictable KVM instances—like those we architect at CoolVDS.

The Architecture: "Core and Edge"

The biggest mistake I see in 2014 is treating all workloads the same. A PHP-FPM worker does not need the same resources as a MySQL master. Public clouds rely heavily on "overselling" CPU cycles. If your neighbor on the physical host decides to mine Bitcoin, your database queries stall. This shows up as "steal time" (the %st column in top), and it is the silent killer of performance.
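You can check whether your current host is stealing from you. On Linux, steal time is the ninth field of the cpu line in /proc/stat; a minimal sketch that samples it twice, one second apart:

```shell
#!/bin/sh
# Sample cumulative steal jiffies twice and report the delta.
# A consistently non-zero delta while your box is busy means the
# hypervisor is handing your cycles to a noisy neighbour.
read_steal() { awk '/^cpu /{print $9}' /proc/stat 2>/dev/null || echo 0; }

s1=$(read_steal)
sleep 1
s2=$(read_steal)
echo "steal jiffies in the last second: $((s2 - s1))"
```

Run it during peak hours, not at 3 AM when the host is idle.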

A pragmatic topology looks like this:

  • The Edge (Stateless): Nginx load balancers and web nodes. These can live anywhere. If one provider goes down, you spin them up elsewhere.
  • The Core (Stateful): Database and Storage. These require dedicated I/O and stable CPU. This is where a premium VPS in Norway makes sense.

Step 1: The Traffic Cop (HAProxy)

To decouple yourself from a single provider's load balancer (like ELB), you need to run your own. HAProxy 1.4 is the industry standard for this (the 1.5 dev branch adds native SSL termination, if you need it). It allows you to route traffic between your CoolVDS instances and backup servers elsewhere.

Here is a battle-tested configuration snippet for /etc/haproxy/haproxy.cfg that handles failover gracefully:

global
    log 127.0.0.1 local0
    maxconn 4096
    user haproxy
    group haproxy
    daemon

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    retries 3
    option  redispatch
    timeout connect 5000   # all times in milliseconds
    timeout client  50000
    timeout server  50000

frontend http-in
    bind *:80
    default_backend web_servers

backend web_servers
    mode http
    balance roundrobin
    option httpchk HEAD /health_check.php HTTP/1.0
    # The 'check' flag is vital for detecting dead nodes
    server coolvds_node1 10.0.0.5:80 check weight 100
    server backup_cloud 192.168.1.10:80 check weight 1 backup

Notice the weight parameter. We prioritize the hardware we trust (CoolVDS) because we know the I/O is local and fast (SSD-based). We only bleed traffic to the backup provider if the primary fails.
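Whenever you touch that file, validate it before reloading; a typo in the edge layer takes everything down at once. A minimal sketch, assuming the standard Debian/Ubuntu paths and init script:

```shell
# Parse-check the config; only reload the running daemon if it passes.
haproxy -c -f /etc/haproxy/haproxy.cfg \
    && service haproxy reload \
    || echo "config check failed -- not reloading"
```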

Step 2: Data Persistence & Latency

If your users are in Oslo, serving them from a data center in Virginia is negligence. The speed of light is a hard constraint. Round-trip time (RTT) from Oslo to US-East is ~90ms. RTT to a local Oslo node is <5ms. For a Magento store or a complex Rails app making 20 serial DB queries per page load, that latency compounds into a 2-second delay. That is unacceptable.
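The arithmetic is worth doing explicitly for your own query counts:

```shell
QUERIES=20     # serial DB round-trips per page load
RTT_US=90      # Oslo -> US-East, ms
RTT_OSLO=5     # Oslo -> local Oslo node, ms

echo "US-East: $((QUERIES * RTT_US)) ms of pure network wait per page"
echo "Oslo:    $((QUERIES * RTT_OSLO)) ms of pure network wait per page"
```

1800 ms versus 100 ms, before your database has done any actual work.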

However, running a database across providers is tricky. In 2014, multi-master replication is still dangerous for the faint of heart. The pragmatic approach is Master-Slave with SSL.

Pro Tip: Don't just rely on standard replication. Use a VPN tunnel for your replication traffic to avoid exposing port 3306 to the public internet. OpenVPN is robust, but for simple point-to-point, an SSH tunnel is often enough for smaller deployments.
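For the SSH-tunnel variant, a sketch of the slave-side setup (the hostname and the repl user are placeholders, not a real endpoint):

```shell
# Open a background tunnel from the slave to the master's mysqld.
# -f: background after auth; -N: no remote command; -L: local forward.
ssh -f -N -o BatchMode=yes -o ConnectTimeout=5 \
    -L 3307:127.0.0.1:3306 repl@db-master.example.com \
    || echo "tunnel not up (placeholder host)"

# The slave then replicates through the local end of the tunnel:
#   CHANGE MASTER TO MASTER_HOST='127.0.0.1', MASTER_PORT=3307, ...
```

Wrap the ssh command in a supervisor (or autossh) in production, or a dropped link silently stops replication.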

Securing the Link

Before you even think about replication, lock down the network. Using iptables is mandatory. Do not rely on a provider's firewall panel alone.

# Flush existing rules
iptables -F

# Allow loopback
iptables -A INPUT -i lo -j ACCEPT

# Allow return traffic for connections this host initiated
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Allow SSH (change port 22 if you are smart)
iptables -A INPUT -p tcp --dport 22 -j ACCEPT

# Allow web traffic
iptables -A INPUT -p tcp --dport 80 -j ACCEPT

# Allow MySQL ONLY from your specific web node IP
iptables -A INPUT -p tcp -s 10.0.0.5 --dport 3306 -j ACCEPT

# Drop everything else -- set the policy LAST, or you will cut off
# your own SSH session before the ACCEPT rules are in place
iptables -P INPUT DROP

Step 3: The Storage Bottleneck

We are currently seeing a transition in the market. Standard SAS/SATA spinning disks are becoming the bottleneck for everything. While some providers are experimenting with caching layers, nothing beats raw SSD performance.

At CoolVDS, we utilize KVM (Kernel-based Virtual Machine). Unlike OpenVZ, which shares the kernel and can suffer from resource contention, KVM provides full hardware virtualization. When you combine KVM with SSD storage, you get I/O throughput that rivals bare metal. This is critical for database writes.
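Do not take any provider's word on I/O, ours included; measure it. A crude sequential write check with dd, using fdatasync so the page cache cannot flatter the result (for proper random-I/O figures, reach for fio):

```shell
# Write 64 MB and fsync before dd reports; the last line shows MB/s.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1
rm -f /tmp/ddtest
```

On a healthy local SSD you should see hundreds of MB/s; if a "cloud SSD" instance gives you 30, you have found your bottleneck.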

If you are running MySQL 5.6, ensure your innodb_io_capacity is tuned for SSDs. The default values are designed for rotating disks.

[mysqld]
# Default is often 200, which is too low for SSD
innodb_io_capacity = 2000
innodb_io_capacity_max = 4000
innodb_flush_neighbors = 0

The Compliance Reality (Norwegian Context)

Post-2013, we cannot ignore the legal landscape. The EU Data Protection Directive (95/46/EC) and the Norwegian Personal Data Act impose strict rules on where personal data resides. While "Safe Harbor" currently allows data transfer to the US, the political climate is shifting. Relying solely on US-based hosting for Norwegian customer data is a risk—both legally and reputationally.

By anchoring your storage in Norway with CoolVDS, you satisfy local compliance requirements while maintaining the ability to use global CDNs for static assets.

Conclusion: Own Your Architecture

Automation tools like Puppet and Chef are making it easier to manage diverse infrastructure, but they cannot fix bad architecture. A strategy that relies on a single provider is a single point of failure—technical and financial.

The sweet spot for 2014 is hybrid: Commodity compute for the front end, premium KVM performance for the back end. It gives you the best price-to-performance ratio and keeps your data under local jurisdiction.

Ready to stop fighting for CPU cycles? Deploy a high-performance SSD instance in our Oslo facility. Test the latency yourself.