Escaping the Vendor Lock-In Trap: A Pragmatic Multi-Cloud Strategy for 2017
Let's be honest with ourselves: the "all-in on AWS" honeymoon is over. For the past three years, CIOs and CTOs across Europe have been aggressively migrating workloads to the public cloud, lured by promises of infinite scalability and zero maintenance. The reality hitting us in early 2017 is starkly different. We are seeing opaque billing structures, terrifying egress fees, and a creeping realization that we have traded hardware management for vendor handcuffs.
As a CTO responsible for infrastructure serving the Nordic market, I cannot justify paying premium rates for latency that bounces through Frankfurt or Ireland when my users are in Oslo and Bergen. Furthermore, with the General Data Protection Regulation (GDPR) looming on the horizon for 2018, the legal ground regarding data residency is shifting under our feet. The Privacy Shield framework is currently holding the line, but relying solely on US-owned infrastructure for Norwegian citizen data is becoming a risk profile many of us are no longer comfortable with.
The Hybrid Reality: Core vs. Burst
The most resilient architecture I have deployed this year does not reject the public cloud, but it treats it with deep suspicion. We call this the "Core-Burst Strategy." Your predictable, high-I/O workloads (your databases, your core application logic, your master repositories) should reside on bare metal or high-performance KVM instances where you control the noisy neighbors and the bill is flat. You use the hyperscalers (AWS, Google, Azure) strictly for what they are good at: ephemeral burst computing and global content delivery.
In a recent project for a media streaming startup in Trondheim, we faced a monthly AWS bill of $12,000, primarily driven by provisioned IOPS on EBS volumes and bandwidth egress. By moving the core PostgreSQL database and the primary ingest servers to CoolVDS NVMe instances in Oslo, we dropped the baseline cost to $3,500. We kept the front-end auto-scaling groups on AWS to handle traffic spikes during major events.
Technical Implementation: The Network Bridge
The challenge in a hybrid setup is the network fabric. You need a secure, low-latency tunnel between your localized VPS and your public cloud VPC. In 2017, we are moving away from fragile GRE tunnels toward robust OpenVPN or IPsec site-to-site configurations managed by configuration management tools.
Here is how we configure the bridge using Ansible (version 2.2). This playbook ensures that our CoolVDS "Core" can communicate securely with the AWS "Burst" nodes.
---
# hybrid-bridge.yml
- hosts: gateway_nodes
  become: yes
  vars:
    vpn_server_ip: "{{ coolvds_static_ip }}"
    aws_vpc_cidr: "10.0.0.0/16"
  tasks:
    - name: Install StrongSwan for IPsec
      apt:
        name: strongswan
        state: present
        update_cache: yes

    - name: Configure IPsec Secret
      lineinfile:
        dest: /etc/ipsec.secrets   # 'dest' (not 'path') is the parameter name on Ansible 2.2
        line: ": PSK \"{{ ipsec_psk_secret }}\""
        create: yes

    - name: Enable IPv4 forwarding
      sysctl:
        name: net.ipv4.ip_forward
        value: 1
        sysctl_set: yes
        state: present
        reload: yes
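The playbook installs strongSwan and seeds the pre-shared key, but the tunnel itself is defined in /etc/ipsec.conf. A minimal site-to-site sketch is shown below; the local subnet, the gateway address, and the cipher proposals are placeholders you will need to align with whatever terminates the tunnel on the AWS side (a virtual private gateway or a VPN instance inside the VPC).

# /etc/ipsec.conf -- illustrative sketch only; match proposals to the AWS peer
conn aws-burst
    keyexchange=ikev1             # AWS managed VPN endpoints speak IKEv1
    authby=secret                 # uses the PSK written to /etc/ipsec.secrets
    left=%defaultroute
    leftid=<coolvds_static_ip>    # placeholder: the CoolVDS public IP
    leftsubnet=192.168.10.0/24    # assumption: the local backend subnet
    right=<aws_gateway_ip>        # placeholder: VGW tunnel endpoint or VPN instance
    rightsubnet=10.0.0.0/16       # matches aws_vpc_cidr in the playbook
    ike=aes128-sha1-modp1024      # example proposal only
    esp=aes128-sha1
    auto=start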
Once the tunnel is established, your application logic needs to be aware of the topology. We use Nginx as a smart load balancer. It prioritizes the local (CoolVDS) backends because the latency is sub-2 ms, and only spills over to the cloud upstream when Nginx's passive health checks mark the primaries unavailable (max_fails within fail_timeout).
Below is a snippet of an nginx.conf optimized for this "spillover" logic. Note the use of the backup parameter on the cloud servers, which is often overlooked.
upstream backend_cluster {
    # Primary: CoolVDS NVMe Instances (Low Latency, Fixed Cost)
    server 192.168.10.10:80 weight=5 max_fails=3 fail_timeout=30s;
    server 192.168.10.11:80 weight=5 max_fails=3 fail_timeout=30s;

    # Secondary: AWS Auto-Scaling Group (Higher Latency, Variable Cost)
    # Marked as 'backup' so traffic only flows here once the primaries
    # are marked unavailable (max_fails exceeded) or are down
    server 10.0.5.100:80 backup;
    server 10.0.5.101:80 backup;
}

server {
    listen 80;
    server_name api.example.no;

    location / {
        proxy_pass http://backend_cluster;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_connect_timeout 2s;  # Fail fast if local network issues arise
    }
}
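Before relying on the spillover behaviour, validate and reload the configuration, then check the proxy from outside the datacenter. A quick sanity check, using the hostname from the config above and curl's built-in timing formatter:

# Validate the configuration, then reload Nginx without dropping connections
nginx -t && systemctl reload nginx

# Confirm the proxy answers and see what a full round trip costs from a client
curl -s -o /dev/null -w "HTTP %{http_code} in %{time_total}s\n" http://api.example.no/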
The Latency Factor: Oslo vs. The World
Latency is the silent killer of conversion rates. If your target audience is in Norway, hosting your database in Frankfurt adds a physical round-trip tax that no amount of code optimization can remove. We ran mtr (My Traceroute) benchmarks comparing a request from a fiber connection in Oslo to AWS eu-central-1 versus a CoolVDS instance in Oslo.
| Target | Location | Avg Latency (ms) | Jitter (ms) |
|---|---|---|---|
| CoolVDS NVMe | Oslo, NO | 1.8 | 0.2 |
| AWS (eu-central-1) | Frankfurt, DE | 28.4 | 4.1 |
| DigitalOcean | London, UK | 34.2 | 3.5 |
For a transactional application, that roughly 27 ms difference is paid on every TCP handshake, every SSL negotiation, and every database query. It compounds. By keeping the database on CoolVDS, we ensure the heavy lifting happens at wire speed.
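If you want to reproduce these numbers, mtr's report mode averages a fixed number of probe cycles and is far less noisy than a single ping. The targets below are placeholders; substitute your own endpoints:

# 100 probe cycles, numeric output only, summarised as a report
mtr --report --report-cycles 100 --no-dns <coolvds-instance-ip>
mtr --report --report-cycles 100 --no-dns <aws-eu-central-1-endpoint>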
Data Sovereignty and the Pre-GDPR Landscape
While the EU General Data Protection Regulation doesn't come into full force until May 2018, the wise architect is preparing now. The Norwegian Data Protection Authority (Datatilsynet) is already signaling stricter enforcement. Placing customer PII (Personally Identifiable Information) on US-controlled servers, even those located in Europe, is becoming a gray area due to the reach of US surveillance laws.
Hosting your primary data store on a Norwegian provider like CoolVDS simplifies this compliance matrix significantly. You know exactly where the physical drive sits. You know the legal jurisdiction is Norway. It provides a "compliance anchor" that allows you to tell your Legal department, "Yes, the master data never leaves the country; only anonymized processing happens in the cloud."
Pro Tip: If you are running MySQL 5.7, make sure innodb_buffer_pool_size is sized correctly on your dedicated VDS. Unlike shared cloud instances where RAM is oversold, CoolVDS provides dedicated RAM. Set this value to 70-80% of total memory for optimal performance without fear of the OOM killer.
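As a concrete illustration, assuming a dedicated 16 GB instance running MySQL 5.7 (scale the numbers to your own memory footprint):

# my.cnf sketch for a dedicated 16 GB VDS running MySQL 5.7
[mysqld]
innodb_buffer_pool_size      = 12G       # ~75% of dedicated RAM
innodb_buffer_pool_instances = 8         # cuts buffer pool mutex contention on 5.7
innodb_flush_method          = O_DIRECT  # avoid double buffering on fast local storage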
Performance: The NVMe Difference
In 2017, we are finally seeing NVMe storage become accessible, yet many providers are still pushing SATA SSDs or, heaven forbid, spinning rust (HDD) as "high performance." The difference is not marginal. On a standard SATA SSD, you might get 500 MB/s sequential read; on NVMe, we are seeing upwards of 3,000 MB/s, and the gap in random IOPS is even wider.
For a database heavy on random reads, this prevents the I/O wait bottleneck that typically forces you to upgrade to a larger, more expensive instance type on public clouds. We validated this by running `fio` benchmarks on a CoolVDS instance:
fio --name=randwrite --ioengine=libaio --iodepth=1 --rw=randwrite --bs=4k --direct=1 --size=1G --numjobs=1 --runtime=60 --group_reporting
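That run measures single-threaded 4k random writes at queue depth 1, which is deliberately pessimistic. We also run a random-read pass with more parallelism, closer to a busy database's access pattern, where NVMe pulls furthest ahead; the parameters here are simply our own convention:

fio --name=randread --ioengine=libaio --iodepth=32 --rw=randread --bs=4k --direct=1 --size=1G --numjobs=4 --runtime=60 --group_reporting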
The results consistently showed IOPS performance 5x higher than similarly priced "General Purpose" cloud volumes. When you control the hardware abstraction layer via KVM (which CoolVDS uses), you strip away the hypervisor overhead that plagues multi-tenant public clouds.
Conclusion: Balance is the Key
The era of blindly deploying to the cloud is ending. The smart money in 2017 is on Hybrid Intelligence. Use the cloud for what it solves (elasticity), but own your data and your performance baseline.
If you are tired of variable bills and variable performance, it is time to ground your infrastructure. Deploy a high-performance, NVMe-backed KVM instance in Norway. Establish your core. Then, and only then, bridge to the cloud.
Ready to stabilize your stack? Deploy a CoolVDS instance today and experience the difference of local, dedicated NVMe power.