The Hybrid Cloud Myth: Why Real Redundancy Requires Local Iron
Let’s be honest. If you are running your entire Norwegian e-commerce stack on a single availability zone in AWS us-east-1, you are not a systems architect. You are a gambler.
Latency matters. Physics is unforgiving. A request traveling from Oslo to Virginia and back takes roughly 90-110 ms on real-world routes, before any processing overhead; the speed of light in fiber alone puts a hard floor of about 60 ms on that round trip. For a modern application making dozens of serial database calls, that latency compounds into a sluggish user experience that kills conversion rates. The legal landscape around data sovereignty is shifting, too. With the EU debating a strict regulation to replace the Data Protection Directive (95/46/EC), relying entirely on US-hosted infrastructure is becoming a liability for Norwegian businesses handling sensitive customer data.
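To see how fast that compounds, take a page that issues 20 serial database queries at a 95 ms round trip each (both numbers are illustrative):

```shell
# Back-of-the-envelope: serial queries multiply the transatlantic RTT
QUERIES=20
RTT_MS=95
echo "Total DB wait: $((QUERIES * RTT_MS)) ms"
```

Nearly two seconds of pure network wait before a single byte of HTML is rendered. The same queries against a database 20 ms away finish in under half a second.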
In this guide, we are going to build a pragmatic, high-performance hybrid infrastructure. We will combine the raw I/O power of local KVM-based VPS instances (like those we provision at CoolVDS) with the elasticity of public cloud for failover. This is the architecture used by battle-hardened CTOs who care about TCO and uptime.
The Architecture: "Core-Edge" Split
The most robust strategy available in 2015 isn't about putting everything in the cloud; it's about Tiered Locality.
- The Core (CoolVDS - Oslo/Europe): Your database master and primary application servers. This ensures sub-20ms latency for your Nordic user base and compliance with the Norwegian Personal Data Act (Personopplysningsloven).
- The Failover (Public Cloud): A standby environment used strictly for disaster recovery or bursting traffic spikes.
The Data Layer: Synchronous Replication
The hardest part of multi-provider hosting is state. In 2015, the gold standard for high-availability MySQL is Galera Cluster (via MariaDB Galera Cluster or Percona XtraDB Cluster). Unlike standard MySQL master-slave replication, which can silently fall behind, Galera is virtually synchronous: every node certifies a transaction before the commit returns, so once a write is acknowledged it survives the loss of the node that took it.
Here is a production-ready my.cnf snippet for a 3-node Galera cluster running on CentOS 7. Note the tuning for InnoDB buffer pools, which is critical when running on the high-performance SSD storage provided by CoolVDS.
```ini
[mysqld]
# Galera Provider Configuration
wsrep_on=ON
wsrep_provider=/usr/lib64/galera/libgalera_smm.so

# Settings Galera requires: row-based binlog, InnoDB only, interleaved autoinc locks
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2

# Cluster Connection
# We list IPs across our private CoolVDS LAN and the VPN tunnel to the failover site
wsrep_cluster_address="gcomm://192.168.10.1,192.168.10.2,10.8.0.5"
wsrep_cluster_name="coolvds_hybrid_cluster"
wsrep_node_address="192.168.10.1"

# InnoDB Tuning for SSD/SAS
# Don't leave this at default! Set to 70-80% of RAM.
innodb_buffer_pool_size=4G
# Relaxed flush is acceptable here: durability comes from the cluster, not the local redo log
innodb_flush_log_at_trx_commit=2
innodb_io_capacity=2000
innodb_read_io_threads=8
innodb_write_io_threads=8

# Larger gcache lets a briefly-offline node rejoin via incremental transfer (IST)
# instead of a full SST; the SSL options encrypt replication traffic between nodes
wsrep_provider_options="gcache.size=512M; socket.ssl_key=/etc/pki/galera/galera-key.pem; socket.ssl_cert=/etc/pki/galera/galera-cert.pem"
```
Pro Tip: Never run a Galera cluster over the public internet without encryption. The replication traffic is unencrypted by default. Use a Tinc or OpenVPN tunnel between your CoolVDS datacenter and your secondary location. Latency tolerance for Galera is decent, but packet loss will kill your cluster performance. This is why our premium peering at NIX (Norwegian Internet Exchange) is vital.
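Once the tunnel is up, sanity-check the cluster from any node. These are standard Galera status variables; your exact output will differ:

```
mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';"
# Expect 3 once all nodes have joined
mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';"
# Expect "Synced" on a healthy node
```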
The Traffic Layer: HAProxy with Keepalived
To route traffic intelligently, we don't rely on DNS round-robin alone (browser caching makes it unreliable for fast failover). We use HAProxy 1.5. It’s lightweight, robust, and now supports SSL termination natively, removing the need for an Nginx frontend just for HTTPS.
We deploy HAProxy in a pair using Keepalived for VIP (Virtual IP) failover. If the primary load balancer takes a hit, the VIP floats to the backup in less than a second.
```
# /etc/keepalived/keepalived.conf
vrrp_script chk_haproxy {
    script "killall -0 haproxy"   # exit 0 if any haproxy process is alive
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    interface eth0
    state MASTER
    virtual_router_id 51
    priority 101
    virtual_ipaddress {
        195.xxx.xxx.xxx # Your CoolVDS Reserved IP
    }
    track_script {
        chk_haproxy
    }
}
```
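Keepalived only floats the IP; HAProxy does the actual routing. A minimal haproxy.cfg sketch for this topology (IPs, names, and the `/healthz` endpoint are placeholders; the `backup` keyword keeps the cloud node idle until both local backends fail their health checks):

```
# /etc/haproxy/haproxy.cfg (sketch)
frontend https_in
    bind *:443 ssl crt /etc/haproxy/certs/site.pem   # native SSL termination in 1.5
    default_backend app_servers

backend app_servers
    balance roundrobin
    option httpchk GET /healthz
    server app1 192.168.10.11:8080 check
    server app2 192.168.10.12:8080 check
    server cloud1 10.8.0.20:8080 check backup   # failover site, reached via the VPN tunnel
```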
With this setup, your application is resilient. But hardware is only half the battle. Configuration management is what separates the professionals from the amateurs.
Automating Deployment with Ansible
Gone are the days of manual SSH loops. If you aren't using Puppet, Chef, or (our favorite) Ansible, you are wasting billable hours. Ansible 1.8 is freshly released, and its agentless architecture is perfect for managing a hybrid fleet of CoolVDS instances and remote failover nodes.
Here is a simple playbook task to ensure your Nginx web servers are configured strictly for performance, stripping unnecessary modules:
```yaml
---
- hosts: webservers
  vars:
    worker_processes: "{{ ansible_processor_vcpus }}"
  tasks:
    - name: Configure Nginx Main
      template:
        src: templates/nginx.conf.j2
        dest: /etc/nginx/nginx.conf
        owner: root
        group: root
        mode: 0644
      notify: restart nginx

    - name: Ensure Gzip is enabled for static assets
      lineinfile:
        dest: /etc/nginx/nginx.conf
        regexp: "gzip on;"
        line: "gzip on;"
        state: present

  handlers:
    - name: restart nginx
      service: name=nginx state=restarted
```
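With an inventory that splits the fleet into groups, one command converges every box. A sketch inventory (hostnames are placeholders):

```ini
# inventory.ini
[webservers]
web1.example.com
web2.example.com

[galera]
db1.example.com
db2.example.com
db3.example.com
```

Then run `ansible-playbook -i inventory.ini site.yml` (the playbook filename is illustrative) and every host in `webservers` converges in one pass.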
Why KVM Beats Containers for Databases
There is a lot of noise in the industry right now about Docker (version 1.4 looks promising) and containerization. While containers are excellent for stateless application code, we firmly believe that persistent data belongs on fully isolated virtualization.
At CoolVDS, we use KVM (Kernel-based Virtual Machine) exclusively. Unlike OpenVZ, which shares the host kernel, KVM provides true hardware abstraction. This allows us to pass through specific CPU flags and guarantee memory allocation. When you run a database on a shared kernel container, a "noisy neighbor" can starve your I/O operations, causing query latencies to spike unpredictably.
For high-performance scenarios, we recommend the deadline or noop I/O scheduler inside your VM when running on our enterprise SSD arrays, as the hypervisor handles the physical sorting.
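Checking and switching the scheduler is a one-liner; `vda` is the typical virtio disk name under KVM, but yours may be `sda`:

```
# Show the active scheduler (the bracketed entry)
cat /sys/block/vda/queue/scheduler
# Switch to deadline at runtime (does not survive a reboot)
echo deadline > /sys/block/vda/queue/scheduler
# To persist, add "elevator=deadline" to the kernel line in your GRUB config
```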
| Feature | Container (LXC/OpenVZ) | KVM (CoolVDS Standard) |
|---|---|---|
| Kernel Isolation | Shared | Dedicated |
| Custom Kernel Modules | No | Yes |
| IOPS Consistency | Variable | Guaranteed |
| Security | Process Separation | Hardware Virtualization |
The Economic Reality
Public cloud billing is complex. You pay for compute, storage, provisioned IOPS, and—the silent killer—egress bandwidth. For a media-heavy site serving Norwegian traffic, egress fees can double your monthly bill.
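A quick example with illustrative numbers: 5 TB of monthly egress at $0.09/GB, a typical public-cloud rate bracket:

```shell
# Egress alone, before compute and storage, in whole dollars
TB=5
RATE_CENTS_PER_GB=9
echo "Monthly egress: \$$((TB * 1024 * RATE_CENTS_PER_GB / 100))"
```

On a flat-rate VPS, that bandwidth is already included in the monthly price.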
CoolVDS offers a predictable flat-rate model. You get your dedicated RAM and CPU cores, and a generous bandwidth allocation. By keeping your steady-state workload on our infrastructure and only using the public cloud for bursting, you can reduce your Total Cost of Ownership (TCO) by up to 40%.
Don't let latency or legal ambiguity dictate your infrastructure strategy. Build a fortress in Norway, and use the cloud as your moat.
Ready to lower your latency to Oslo? Deploy a high-performance KVM instance on CoolVDS today and see the difference dedicated resources make.