Hardware Fails. It's Physics. Deal With It.
If you believe your current hosting provider's "99.99% uptime guarantee" will save your job when a SAN controller melts at 3:00 AM, you haven't been in this industry long enough. I've seen datacenters flood. I've seen fiber cables cut by confused excavators. And just last October, we saw the legal equivalent of a natural disaster: the CJEU invalidated the Safe Harbor agreement.
It is February 2016. If you are hosting sensitive Norwegian user data on US-controlled infrastructure without a bulletproof Plan B, you aren't just risking downtime; you are risking a visit from Datatilsynet (The Norwegian Data Protection Authority).
This isn't a theoretical essay. This is a technical blueprint for survival. We are going to build a Disaster Recovery (DR) strategy that respects physics, acknowledges latency, and keeps your data strictly within Norwegian borders.
The Architecture of Paranoia
True DR isn't about backups. Backups are for when you accidentally delete a file. DR is for when the server vanishes. To achieve this, we need redundancy at the application, database, and infrastructure levels. We avoid OpenVZ containers for this; their shared kernel architecture is a single point of failure. We use KVM (Kernel-based Virtual Machine) because it provides true hardware abstraction.
At CoolVDS, we see too many developers relying on snapshots alone. Snapshots are great, but they are local. If the host node dies, your snapshot dies with it. You need geographic separation.
Phase 1: The Database (Master-Slave Replication)
Forget multi-master for now; it brings complexity that breaks under panic. We want a solid Master-Slave setup using MySQL 5.6 or MariaDB 10. The goal is asynchronous replication to a hot standby node in a physically separate rack or datacenter.
First, configure the Master server. We need to enable binary logging and set a unique server ID in /etc/my.cnf:
[mysqld]
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
binlog_do_db = critical_app_db
innodb_flush_log_at_trx_commit = 1
sync_binlog = 1
Pro Tip: Setting sync_binlog = 1 and innodb_flush_log_at_trx_commit = 1 ensures ACID compliance. You will take a slight I/O write penalty, but on CoolVDS Enterprise SSD arrays, this latency is negligible (sub-1ms). Do not trade data integrity for speed.
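The slave will authenticate with a dedicated account, so create it on the master first. A minimal grant, assuming the standby connects from the 10.20.30.0/24 network; the credentials here match the CHANGE MASTER example further down, and in production you should restrict the host mask to the standby's exact IP:

```sql
-- On the master: an account that can do nothing except replicate
CREATE USER 'replication_user'@'10.20.30.%' IDENTIFIED BY 'SuperSecurePassword2016!';
GRANT REPLICATION SLAVE ON *.* TO 'replication_user'@'10.20.30.%';
FLUSH PRIVILEGES;
```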
Next, lock your tables to get a consistent dump position:
FLUSH TABLES WITH READ LOCK;
Check the master status to get the binlog coordinates:
SHOW MASTER STATUS;
Keep that terminal open. In a new window, dump the data. I prefer mysqldump for smaller DBs (under 10GB) because it's portable.
mysqldump -u root -p --opt critical_app_db > /tmp/critical_db_dump.sql
Unlock the tables immediately after the dump finishes: UNLOCK TABLES;. Now, transfer this dump to your CoolVDS Slave instance using scp.
On the Slave server (standby node), configure /etc/my.cnf:
[mysqld]
server-id = 2
relay-log = /var/log/mysql/mysql-relay-bin.log
read_only = 1
Import the dump, then set up the replication link. This is the command that saves your skin:
CHANGE MASTER TO
MASTER_HOST='10.20.30.40',
MASTER_USER='replication_user',
MASTER_PASSWORD='SuperSecurePassword2016!',
MASTER_LOG_FILE='mysql-bin.000001',
MASTER_LOG_POS= 107;
Start the slave:
START SLAVE;
Verify it works with SHOW SLAVE STATUS\G. Look for Slave_IO_Running: Yes and Slave_SQL_Running: Yes.
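Checking those two flags by hand gets old fast. Here is a sketch of a cron-able health check; the 60-second lag threshold and the function name are my own inventions, and the commented-out wiring assumes a `monitor` MySQL user with the REPLICATION CLIENT privilege:

```shell
#!/bin/sh
# check_slave_status: inspect `SHOW SLAVE STATUS\G` output passed as $1.
# Prints a one-line verdict; exit code 0 = healthy, 1 = lagging, 2 = broken.
check_slave_status() {
    status="$1"
    io=$(printf '%s\n' "$status"  | awk -F': ' '$1 ~ /Slave_IO_Running$/  {print $2}')
    sql=$(printf '%s\n' "$status" | awk -F': ' '$1 ~ /Slave_SQL_Running$/ {print $2}')
    lag=$(printf '%s\n' "$status" | awk -F': ' '$1 ~ /Seconds_Behind_Master$/ {print $2}')
    if [ "$io" != "Yes" ] || [ "$sql" != "Yes" ]; then
        echo "CRITICAL: replication thread down"; return 2
    fi
    if [ "$lag" = "NULL" ] || [ "$lag" -gt 60 ]; then
        echo "WARNING: slave is ${lag}s behind"; return 1
    fi
    echo "OK: ${lag}s behind master"; return 0
}

# Typical wiring from cron (the 'monitor' user and alert address are assumptions):
# check_slave_status "$(mysql -u monitor -pSecret -e 'SHOW SLAVE STATUS\G')" \
#     || echo "replication problem on $(hostname)" | mail -s "DR ALERT" ops@example.no
```

Run it every minute; a slave that silently stopped replicating hours ago is the classic DR failure nobody notices until failover day.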
Phase 2: The Filesystem (Syncing Assets)
Databases are easy. User-generated content (images, PDFs) is the hard part. Distributed filesystems like GlusterFS are powerful but can be overkill and difficult to debug during an outage. For a robust 2016-era setup, lsyncd (Live Syncing Daemon) combined with rsync is efficient and reliable.
Install lsyncd on your Master web server (on CentOS it lives in the EPEL repository):
yum install lsyncd
Here is a production-ready configuration that watches your upload directory and pushes changes to the standby server instantly:
settings {
logfile = "/var/log/lsyncd/lsyncd.log",
statusFile = "/var/log/lsyncd/lsyncd-status.log",
statusInterval = 20
}
sync {
default.rsync,
source = "/var/www/html/uploads/",
target = "root@10.0.0.2:/var/www/html/uploads/",
rsync = {
compress = true,
archive = true,
verbose = true,
rsh = "/usr/bin/ssh -p 22 -o StrictHostKeyChecking=no"
}
}
Don't forget to generate SSH keys (ssh-keygen -t rsa) and copy them to the slave (ssh-copy-id) so this runs passwordless.
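lsyncd works off inotify events, and events can be missed during daemon restarts or write bursts. A nightly full rsync pass is a cheap safety net. A sketch of the cron entry, reusing the paths and target from the config above (the 03:30 schedule is arbitrary):

```
# /etc/cron.d/dr-uploads-sync  -- nightly consistency pass (schedule is an assumption)
30 3 * * * root rsync -az --delete /var/www/html/uploads/ root@10.0.0.2:/var/www/html/uploads/
```

Note that --delete mirrors deletions to the standby as well; drop it if you would rather the standby keep everything.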
Phase 3: The Failover (Floating IPs)
When the master goes down, you don't want to be editing DNS records. DNS propagation takes too long. You need a Floating IP (VIP). We use keepalived for this. It uses the VRRP protocol to float an IP address between your two CoolVDS instances.
Install it: yum install keepalived.
Configuration for the Master (/etc/keepalived/keepalived.conf):
vrrp_instance VI_1 {
state MASTER
interface eth0
virtual_router_id 51
priority 101
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.1.50
}
}
On the Backup node, change state MASTER to state BACKUP and priority 101 to priority 100. If the Master stops broadcasting VRRP packets (because it crashed), the Backup node instantly claims the IP 192.168.1.50.
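One caveat: stock VRRP only fails over when the whole host stops talking. If nginx dies but the kernel keeps answering, the VIP stays put. keepalived's vrrp_script mechanism covers that gap. A sketch, assuming nginx is the service you care about; the track_script stanza goes inside the existing vrrp_instance VI_1 block:

```
vrrp_script chk_web {
    script "/sbin/pidof nginx"   # exits non-zero when nginx is gone (assumed service)
    interval 2                   # check every 2 seconds
    weight -10                   # on failure, drop priority 101 below the backup's 100
}

vrrp_instance VI_1 {
    # ... existing settings from above ...
    track_script {
        chk_web
    }
}
```

With weight -10, a failed check drops the Master's effective priority to 91, and the Backup node (priority 100) claims the VIP even though the Master host is still up.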
The Sovereignty Factor
Technical recovery is useless if you fail legal compliance. With the Safe Harbor ruling, relying on US-based giants like AWS is becoming a legal minefield for Norwegian businesses. Are you sure your data isn't being replicated to a bucket in Virginia?
This is where local infrastructure matters. CoolVDS infrastructure resides in Oslo. We operate under Norwegian law. Your data doesn't cross the Atlantic unless you tell it to. We offer low latency to the NIX (Norwegian Internet Exchange), meaning your failover isn't just legally safe, it's fast.
Summary
- Isolation: Use KVM, not OpenVZ.
- Data: Async Master-Slave MySQL replication.
- Assets: lsyncd for near real-time file mirroring.
- Network: VRRP/Keepalived for IP failover.
- Legal: Keep data in Norway to avoid the post-Safe Harbor chaos.
Disaster recovery isn't a product you buy; it's a mindset you adopt. But running that mindset on superior hardware certainly helps. Don't wait for the kernel panic to test your strategy.
Ready to build a redundant architecture? Deploy two KVM instances on CoolVDS today and get sub-2ms latency between nodes.