The "All-In" Cloud Mistake: Why Smart Architects Diversify
It has become the standard advice in boardroom meetings across Oslo: "Just put it on AWS." While the hyperscalers offer incredible toolsets, going "all-in" on a single US-based provider is a strategic gamble that many Norwegian CTOs are losing. I have seen the invoices, and I have seen the latency graphs. When your entire infrastructure lives in a datacenter in Ireland or Frankfurt, you are not just battling physics (latency); you are battling unpredictable variable costs and increasingly complex data sovereignty concerns.
In 2015, the conversation shouldn't be about rejecting the cloud, but about federating it. We need a strategy that combines the raw, cost-effective power of local infrastructure with the elasticity of global giants. This is not just theory; it is how we survive outages and keep the Norwegian Data Protection Authority (Datatilsynet) happy.
The Architecture of Independence
Let’s look at a real-world scenario. A SaaS platform I recently audited was running entirely on EC2 instances in eu-west-1. Their bill was fluctuating wildly due to bandwidth egress fees, and their Norwegian users were experiencing 45ms latency on dynamic content. Acceptable? Maybe. Optimal? Absolutely not.
We migrated them to a Hybrid Multi-Provider model. The concept is simple: The Core resides in Norway, and The Burst lives in the public cloud.
1. The Core: Stability and Sovereignty
Your database master, your core application logic, and your customer data should reside on high-performance, fixed-cost infrastructure within your primary market. Using a KVM-based VPS in Oslo (like those we provision at CoolVDS) drops latency for local users from roughly 45ms down to 2-3ms thanks to peering at NIX (Norwegian Internet Exchange).
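Do not take latency figures on faith; measure them yourself. A quick check from a machine on a Norwegian ISP (the hostname here is a placeholder):

# Round-trip time to the Core
ping -c 20 core.example.no
# Per-hop report; the path should stay on the Norwegian backbone
# rather than detouring via Stockholm or Copenhagen
mtr --report --report-cycles 50 core.example.no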
Furthermore, with the Safe Harbor agreement under serious legal scrutiny, keeping personally identifiable information (PII) on Norwegian soil, governed by Personopplysningsloven (the Norwegian Personal Data Act), is the only way to sleep soundly at night.
2. The Burst: Elasticity
We treat the large public clouds not as a home, but as a utility. We configure auto-scaling groups to spin up frontend nodes only when traffic spikes. These nodes connect back to the Core via a secured VPN tunnel.
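To make the Burst concrete, here is a minimal cloud-init user-data sketch for an auto-scaled node that installs OpenVPN (configured in the next section) and joins the tunnel on boot. The endpoint vpn.example.com is a placeholder, and certificates are assumed to be baked into the machine image:

#cloud-config
packages:
  - openvpn
write_files:
  - path: /etc/openvpn/burst.conf
    content: |
      client
      dev tun
      proto udp
      remote vpn.example.com 1194
      nobind
      persist-key
      persist-tun
      cipher AES-256-CBC
      comp-lzo
      # ca / cert / key directives omitted for brevity
runcmd:
  - service openvpn start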
Technical Implementation: The Glue
Linking a CoolVDS instance in Oslo with an external provider requires a robust networking layer. In 2015, we aren't relying on proprietary "Direct Connect" circuits for everything; we use open-source tools to build our own mesh.
The Network Layer: OpenVPN / Tinc
Do not expose your database port to the public internet. Instead, establish a site-to-site VPN. Here is a standard production configuration for server.conf in OpenVPN to ensure your tunnel is persistent and handles fragmentation correctly:
proto udp
dev tun
topology subnet
server 10.8.0.0 255.255.255.0
keepalive 10 120
cipher AES-256-CBC
# Clamp TCP MSS so tunnelled packets don't fragment on the path
mssfix 1400
comp-lzo
persist-key
persist-tun
verb 3
# Larger socket buffers for high-throughput inter-provider links
sndbuf 393216
rcvbuf 393216
push "sndbuf 393216"
push "rcvbuf 393216"
Load Balancing with HAProxy 1.5
HAProxy is the unsung hero of multi-provider setups. We place HAProxy nodes at the edge, where they continuously health-check the local "Core" nodes. As long as the Core answers, every request stays on fixed-cost hardware; only when the Core nodes go down does traffic fail over to the more expensive "Burst" nodes. (Spilling over under load, rather than only on failure, takes one extra frontend ACL, sketched below.)
backend app_nodes
    mode http
    balance roundrobin
    option httpchk HEAD /health HTTP/1.1\r\nHost:\ www.example.com
    # Primary Local Nodes (CoolVDS - Fixed Cost)
    server core-1 10.10.1.5:80 check weight 100
    server core-2 10.10.1.6:80 check weight 100
    # Backup Cloud Nodes (Variable Cost - Only used when needed)
    server burst-1 192.168.5.10:80 check backup
Pro Tip: Use the backup directive in HAProxy. The burst server receives no traffic at all (and therefore generates zero bandwidth egress charges at the external cloud provider) for as long as at least one Core node passes its health checks. This single keyword saved one client 15,000 NOK in a single month.
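And if you want to spill over on load as well as on failure, one frontend ACL does it. A sketch, with an illustrative 200-connection threshold and a hypothetical burst_nodes backend:

frontend www
    bind :80
    # Route to the burst backend only while the core backend is
    # already handling more than 200 concurrent connections
    acl core_saturated be_conn(app_nodes) gt 200
    use_backend burst_nodes if core_saturated
    default_backend app_nodes

backend burst_nodes
    mode http
    option httpchk HEAD /health HTTP/1.1\r\nHost:\ www.example.com
    server burst-1 192.168.5.10:80 check

Tune the threshold to something just below the point where your Core nodes start queueing.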
The Data Problem: Replication Latency
The biggest challenge in a distributed setup is the database. Physics is cruel. You cannot write to a master in Oslo and expect a slave in Frankfurt to have that data instantly.
For this, we rely on Asynchronous Replication with MySQL 5.6 or MariaDB 10. The "Core" in Oslo holds the Master. It handles all WRITE operations. The remote nodes utilize local Read Replicas. Yes, there is a replication lag of 30-50ms. Your application logic must account for this (e.g., "Eventual Consistency").
However, for 90% of business applications, this is acceptable. The trade-off gives you data sovereignty and massive read-scalability.
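For reference, here is a minimal sketch of the relevant my.cnf fragments for GTID-based replication on MySQL 5.6 (MariaDB 10 has its own GTID implementation with different variable names; the server IDs, tunnel IP, and credentials below are illustrative):

# Master in Oslo (my.cnf)
[mysqld]
server-id                = 1
log_bin                  = mysql-bin
binlog_format            = ROW
gtid_mode                = ON
enforce_gtid_consistency = 1
log_slave_updates        = 1

# Read replica at the burst provider (my.cnf)
[mysqld]
server-id                = 2
log_bin                  = mysql-bin
read_only                = 1
gtid_mode                = ON
enforce_gtid_consistency = 1
log_slave_updates        = 1

On the replica, point replication at the master across the VPN tunnel:

CHANGE MASTER TO
  MASTER_HOST='10.8.0.1',
  MASTER_USER='repl',
  MASTER_PASSWORD='<secret>',
  MASTER_AUTO_POSITION=1;
START SLAVE;

Keep an eye on Seconds_Behind_Master in SHOW SLAVE STATUS; when it climbs beyond what your application tolerates, pull that replica out of the read pool.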
Why Bare Metal Performance Matters
When you split your architecture, the performance of the "Core" is paramount. You cannot afford "noisy neighbors" stealing CPU cycles when your database is the single source of truth. This is why we argue against standard container-based hosting for the database layer.
At CoolVDS, we utilize KVM (Kernel-based Virtual Machine) virtualization. Unlike OpenVZ, KVM provides true hardware isolation. We pair this with pure SSD storage arrays (no spinning rust for databases, please). When you run iostat -x 1, you need to see wait times near zero. If your VPS consistently shows %iowait above 5%, your provider is overselling their storage backend.
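Two quick checks before trusting any host with your master database (the thresholds here are rules of thumb, not gospel):

# Watch per-device latency once per second; on SSD-backed KVM,
# await should stay in the low single-digit milliseconds
iostat -x 1

# Synthetic 4K random-read benchmark (creates a 1 GB test file)
fio --name=randread --filename=/tmp/fio.test --size=1G \
    --rw=randread --bs=4k --direct=1 --runtime=30 \
    --time_based --group_reporting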
Conclusion: Own Your Platform
A multi-provider strategy is not about complexity; it is about insurance. It protects you from price hikes, it protects you from US-centric outages, and it ensures your customer data remains under Norwegian jurisdiction.
Building this architecture requires a solid foundation. You need a host that gives you root access, raw performance, and low-latency connectivity to the Norwegian backbone. Do not let vendor lock-in dictate your roadmap.
Ready to build your Core? Deploy a KVM instance on CoolVDS today and get < 2ms latency to the Oslo exchange.