The 2 AM Panic: Architecting Failover Strategies for Norwegian Enterprises
It is not a matter of if your hardware will fail. It is a matter of when. In the hosting industry, we have a saying: RAID is not a backup. RAID is a redundancy measure for uptime, not a disaster recovery (DR) plan for data survival. If you are running a critical application targeting the Norwegian market, relying solely on a single datacenter—even one as stable as ours here in Oslo—is professional negligence.
Post-Snowden, the conversation in Europe has shifted aggressively toward data sovereignty. With the Datatilsynet (Norwegian Data Protection Authority) cracking down on where personal data sits, you cannot just dump your encrypted tarballs into an Amazon S3 bucket in Virginia and call it a day. You need a strategy that keeps data within the EEA, preferably on Norwegian soil, while mitigating the risk of a catastrophic localized failure.
This guide ignores the marketing fluff and focuses on the raw configuration required to keep a LAMP stack survivable in October 2014.
The Architecture of Resilience
A robust DR plan in 2014 relies on decoupling your storage from your compute. At CoolVDS, we utilize KVM (Kernel-based Virtual Machine) virtualization. Unlike OpenVZ, where you share a kernel with noisy neighbors, KVM allows you to run your own kernel. This is critical for DR because it allows for block-level replication and snapshotting without host-node interference.
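As an illustration of what that control buys you, the sketch below takes a point-in-time LVM snapshot from inside the guest before running a backup. It assumes your data lives on a logical volume named www in a volume group named vg0 with free extents available; adjust names and sizes to your own layout.
# Create a 5 GB copy-on-write snapshot of the data volume
lvcreate --size 5G --snapshot --name www-snap /dev/vg0/www
# Mount it read-only and back up from the frozen view
mkdir -p /mnt/snap
mount -o ro /dev/vg0/www-snap /mnt/snap
# ... run your backup against /mnt/snap, then clean up
umount /mnt/snap
lvremove -f /dev/vg0/www-snap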
Pro Tip: Always keep your Disaster Recovery site at least 30 kilometers away from your production site. This protects against regional power grid failures or physical disasters, while keeping latency low enough (typically <10ms within Norway) for near-real-time replication.
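Verifying that figure takes thirty seconds. From the primary, ping the secondary's internal IP (the placeholder address below matches the sync script later in this guide) and check the average round-trip time:
ping -c 20 10.20.30.40
# Look at the "rtt min/avg/max" summary line; a consistent single-digit
# average in milliseconds is what you want for near-real-time replication.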
Phase 1: Database Replication (MySQL 5.6)
For a transactional application, file backups are too slow. You need Master-Slave replication. If the Master node in Oslo melts, the Slave in a secondary location (like our failsafe zones) can be promoted to Master in a matter of minutes.
First, configure the Master server. Edit /etc/my.cnf to enable binary logging, the journal of changes the Slave will read, and restart mysqld afterwards so the settings take effect.
[mysqld]
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
binlog_do_db = production_db
innodb_flush_log_at_trx_commit = 1
sync_binlog = 1
The sync_binlog = 1 setting is vital. It forces MySQL to synchronize the binary log to disk after every transaction. It costs you I/O performance (which is why we push high-performance SSDs on CoolVDS instances), but it guarantees that if power is cut, you lose at most one transaction from the binary log.
On the Slave server, set a unique ID:
[mysqld]
server-id = 2
relay_log = /var/log/mysql/mysql-relay-bin.log
read_only = 1
Setting read_only = 1 prevents accidental writes to your backup database, which would break the replication chain. Note that it does not restrict accounts with the SUPER privilege, so keep your application users unprivileged.
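With both configuration files in place, you still have to seed the Slave and point it at the Master. The sketch below uses a placeholder replication account, password, and Master IP; the MASTER_LOG_FILE and MASTER_LOG_POS values must be taken from the CHANGE MASTER line that mysqldump records in the dump when --master-data is used.
# On the Master: create a dedicated replication account for the Slave's IP
mysql -u root -p -e "GRANT REPLICATION SLAVE ON *.* TO 'repl'@'10.20.30.40' IDENTIFIED BY 'StrongPasswordHere';"
# Take a consistent dump that records the binlog coordinates
mysqldump -u root -p --single-transaction --master-data=2 production_db > seed.sql

# On the Slave: load the dump, then attach to the Master (IP shown is a placeholder)
mysql -u root -p -e "CREATE DATABASE IF NOT EXISTS production_db;"
mysql -u root -p production_db < seed.sql
mysql -u root -p -e "CHANGE MASTER TO MASTER_HOST='10.20.30.39', MASTER_USER='repl', MASTER_PASSWORD='StrongPasswordHere', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=107; START SLAVE;"
mysql -u root -p -e "SHOW SLAVE STATUS\G"
If Slave_IO_Running and Slave_SQL_Running both report Yes, the chain is live.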
Phase 2: Filesystem Synchronization
Databases are only half the battle. Your /var/www/html assets—user uploads, configuration files, local git repos—must be mirrored. For this, rsync remains the undisputed king: it transfers only the parts of files that have changed since the last run.
Do not run rsync manually. Automate it. But do not just stick it in a cron job without locking mechanisms, or you will end up with overlapping processes choking your CPU.
Here is a battle-tested wrapper script for CentOS 6/7 environments:
#!/bin/bash
# /usr/local/bin/dr-sync.sh
LOCKFILE="/var/run/dr-sync.lock"
REMOTE_USER="dr_user"
REMOTE_HOST="10.20.30.40" # Your Secondary CoolVDS Instance IP
SOURCE_DIR="/var/www/html/"
DEST_DIR="/backup/www/html/"
# Check for lock file to prevent overlap
if [ -e "${LOCKFILE}" ] && kill -0 "$(cat "${LOCKFILE}")" 2>/dev/null; then
    echo "Backup already running"
    exit 1
fi
# Create lock
echo $$ > ${LOCKFILE}
# The Meat: Rsync with compression and archive mode
rsync -avz -e "ssh -p 22" --delete \
--exclude 'cache/*' \
--exclude 'logs/*' \
$SOURCE_DIR ${REMOTE_USER}@${REMOTE_HOST}:$DEST_DIR
# Release lock
rm -f ${LOCKFILE}
Add this to your crontab to run every 15 minutes. The --delete flag is dangerous but necessary; it ensures that if a file is deleted on production, it is eventually removed from DR, keeping storage costs predictable.
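A cron entry along these lines does the job (paths match the script above; the log file location is up to you):
# /etc/cron.d/dr-sync -- run the sync wrapper every 15 minutes as root
*/15 * * * * root /usr/local/bin/dr-sync.sh >> /var/log/dr-sync.log 2>&1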
The Hardware Reality: Why I/O Matters
Disaster recovery involves heavy read/write operations. When you are syncing 50GB of changed data, your disk I/O becomes the bottleneck. On traditional spinning rust (HDD), your server load will spike, causing "iowait" to strangle your CPU. Your website slows down just because you are backing it up.
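You can watch this happen on your own node: run iostat (from the sysstat package) while a sync is in progress and keep an eye on the %iowait and %util columns. Sustained %iowait in the double digits means your disks, not your CPU, are the real ceiling.
# Install sysstat if needed, then sample extended disk stats every 2 seconds, 10 reports
yum install -y sysstat
iostat -x 2 10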
This is where hardware selection becomes an architectural decision, not just a pricing one. CoolVDS infrastructure is built on storage arrays that prioritize high IOPS (Input/Output Operations Per Second). While the industry is buzzing about the emerging NVMe standard, even our current enterprise SSD tiers offer vastly superior throughput compared to standard VPS hosting.
Verifying Integrity
A backup you haven't restored is not a backup. It's a hallucination. You must test your recovery process.
Here is a quick snippet to verify your MySQL slave status. If Seconds_Behind_Master is greater than 0, your Slave is lagging; if it reports NULL, replication has stopped entirely and needs immediate attention.
mysql -u root -p -e "SHOW SLAVE STATUS\G" | grep "Seconds_Behind_Master"
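You can go one step further and have cron nag you when the Slave falls behind. The sketch below assumes credentials are stored in /root/.my.cnf and that a working mail command exists on the box; the threshold and address are placeholders.
#!/bin/bash
# /usr/local/bin/check-slave-lag.sh -- alert when replication lags or stops
THRESHOLD=300                # acceptable lag in seconds
ALERT_EMAIL="ops@example.no" # placeholder address

# Credentials are read from /root/.my.cnf, so no password on the command line
LAG=$(mysql -e "SHOW SLAVE STATUS\G" | awk '/Seconds_Behind_Master/ {print $2}')

if [ "$LAG" = "NULL" ] || [ -z "$LAG" ]; then
    echo "Replication is stopped on $(hostname)" | mail -s "DR ALERT: slave not replicating" "$ALERT_EMAIL"
elif [ "$LAG" -gt "$THRESHOLD" ]; then
    echo "Slave is ${LAG}s behind the Master on $(hostname)" | mail -s "DR ALERT: slave lagging" "$ALERT_EMAIL"
fi
Drop it into cron every five minutes and you will hear about a broken replication chain long before a customer does.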
Legal Nuances in Norway
Under the Personopplysningsloven (Personal Data Act), you are responsible for the security of your users' data. If you use a US-based provider for your failover, you are stepping into a legal minefield regarding the Safe Harbor framework. By keeping your primary and DR nodes within CoolVDS's Norwegian infrastructure, you satisfy the requirements of Datatilsynet while maintaining the physical separation needed for safety.
Conclusion
Complex clustering tools like Pacemaker and Corosync exist, but for 90% of use cases in 2014, a solid Master-Slave setup combined with robust rsync scripting is the most reliable path. It keeps the moving parts to a minimum, and complexity is the enemy of uptime.
Do not wait for a disk controller to die to test this. Spin up a secondary CoolVDS instance today—latency between our zones is negligible—and configure your replication pipeline. If you need help tuning your innodb_buffer_pool_size for the new instance, our support team speaks fluent Linux.