
Surviving the Data Apocalypse: Disaster Recovery Strategies for Norwegian Systems

It’s 3:00 AM on a Tuesday. Your phone buzzes. It’s not a text from your spouse; it’s Nagios. Your primary database server has just vanished from the network. You try to SSH in. Connection timed out. You try the console. Nothing.

If you are running a standard dedicated server with a “RAID 1 is my backup” mentality, you are about to have the worst week of your career. I have seen seasoned sysadmins weep because a simple file system corruption on an ext4 partition wiped out three years of customer data. RAID provides redundancy, not recovery. If you delete a file, the RAID controller happily deletes it from both disks instantly.

As we settle into 2013, the expectations for uptime are higher than ever. With the explosion of e-commerce in the Nordics, downtime doesn't just cost money; it costs reputation. This guide is for the pragmatists who want to sleep at night. We will cover the specific architecture you need to survive a total site failure, compliant with Norwegian law, and built on rock-solid technologies like KVM and MySQL 5.5.

The Legal & Latency Landscape: Why Geography Matters

Before we touch a single config file, we must address the elephant in the server room: Compliance. Under the Personopplysningsloven (Personal Data Act of 2000), you have a legal obligation to secure sensitive data. Datatilsynet (The Norwegian Data Protection Authority) is becoming increasingly strict about how data is handled and stored. If you are hosting personal data for Norwegian citizens, dumping your disaster recovery (DR) site on a cheap budget server in the US is a compliance nightmare waiting to happen.

For optimal performance and legal safety, your primary and backup sites should ideally remain within the EEA (European Economic Area). However, for DR, you want geographic separation. If a flood hits your primary datacenter in Oslo, you don't want your backup server in the basement next door.

The Latency Factor: NIX and Beyond

When you are syncing terabytes of data, latency kills. Connecting through the NIX (Norwegian Internet Exchange) ensures that your traffic stays local and fast. We have seen cross-border backups to Amsterdam take 4x longer simply due to poor peering routes. This is why CoolVDS positions our infrastructure to prioritize low-latency peering within the Nordic region. When you are restoring a 50GB database dump over the wire, every millisecond of round-trip time counts.
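
A quick sanity check before committing to a DR location: measure the actual round-trip time and route quality from your primary server to the candidate backup host. The hostname below is a placeholder for your own DR address:

# Measure latency and spot lossy hops on the route to the DR site
ping -c 20 backup.example.net
mtr --report --report-cycles 20 backup.example.net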

The Heartbeat: MySQL 5.5 Master-Slave Replication

The database is usually the hardest part to recover. File systems are easy; transactional databases are fragile. In 2013, the gold standard for high-availability without buying a six-figure Oracle license is MySQL 5.5 Master-Slave Replication. This is asynchronous, meaning your performance on the master isn't penalized by the backup process.

Do not use the old MySQL 5.1 configuration methods; 5.5 introduced significant improvements in InnoDB performance and replication integrity. Here is the battle-hardened config you need.

1. Configure the Master

Edit your /etc/my.cnf on the primary server. You must enable binary logging and set a unique server ID.

[mysqld]
server-id = 1                                   # must be unique for every server in the topology
log_bin = /var/log/mysql/mysql-bin.log          # enable binary logging; the slave replays these events
binlog_do_db = production_db                    # only log the database you actually replicate
innodb_flush_log_at_trx_commit = 1              # flush the InnoDB log to disk on every commit
sync_binlog = 1                                 # fsync the binary log on every commit

Pro Tip: Setting sync_binlog = 1 is slower but safer. It forces MySQL to fsync the binary log to disk after every transaction commit, so a crash cannot lose events the slave has never seen. If you care about your data, keep this on.
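
The CHANGE MASTER command later in this guide assumes a dedicated replication account already exists on the master. Here is a minimal sketch, run in the MySQL shell on the master; the 10.0.0.% subnet and the password are placeholders, so substitute your own:

-- Create a dedicated account the slave will use to pull binary log events
CREATE USER 'replication_user'@'10.0.0.%' IDENTIFIED BY 'ComplexPassword2013!';
GRANT REPLICATION SLAVE ON *.* TO 'replication_user'@'10.0.0.%';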

2. Configure the Slave (The DR Site)

On your CoolVDS backup instance, edit /etc/my.cnf:

[mysqld]
server-id = 2                                   # any value different from the master's
relay-log = /var/log/mysql/mysql-relay-bin.log  # where the slave stores events fetched from the master
read_only = 1                                   # reject writes from non-privileged users

Setting read_only = 1 is critical. It prevents you from accidentally writing data to the slave and breaking the replication chain. Note that accounts with the SUPER privilege bypass read_only, so never point an application at the slave with a root-level user.

3. Link Them Up

This is the step where most people fail. You cannot just start replication on a live DB. You need a consistent snapshot. Use mysqldump with the --master-data flag:

mysqldump -u root -p --opt --master-data=1 --single-transaction --all-databases > master_backup.sql

Import this SQL file on your slave server. Because the dump was taken with --master-data=1, it already records the correct binary log file and position. Then run the replication command inside the MySQL shell, substituting your own host, credentials, and coordinates:

CHANGE MASTER TO 
MASTER_HOST='10.0.0.1', 
MASTER_USER='replication_user', 
MASTER_PASSWORD='ComplexPassword2013!', 
MASTER_LOG_FILE='mysql-bin.000001', 
MASTER_LOG_POS=107; 

START SLAVE;
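
Before you trust the setup, confirm that both replication threads are running and that the slave is keeping up. In the MySQL shell on the slave:

SHOW SLAVE STATUS\G

Slave_IO_Running and Slave_SQL_Running must both report Yes, and Seconds_Behind_Master should sit at or near zero. Anything else means the chain is broken and your DR site is silently falling behind.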

The Veins: Automated File Sync with Rsync

Databases are useless if your application code and user uploads are missing. For this, rsync is still the undisputed king. It is efficient, robust, and available on every Linux distro from CentOS 5 to Debian 6.

Do not rely on manual FTP. That is a recipe for disaster. Set up a cron job that runs every hour (or minute, depending on load). Here is a script that handles rotation, so you have history, not just a mirror of the latest errors.

#!/bin/bash
# /root/scripts/dr_sync.sh

SOURCE_DIR="/var/www/html/"
DEST_HOST="dr-user@backup.coolvds.net"
DEST_DIR="/backup/www/"
DATE=$(date +%F)

# Sync with deletion propagation (Mirror)
rsync -avz --delete --exclude 'cache/' -e "ssh -p 22" $SOURCE_DIR $DEST_HOST:$DEST_DIR/current/

# Create a snapshot of today's state on the remote server
ssh -p 22 $DEST_HOST "cp -al $DEST_DIR/current $DEST_DIR/snap-$DATE"

# Delete snapshots older than 7 days
ssh -p 22 $DEST_HOST "find $DEST_DIR -maxdepth 1 -name 'snap-*' -mtime +7 -exec rm -rf {} \;"

Architect's Note: The cp -al command uses hard links. This means your daily snapshots take up almost zero extra space unless files have actually changed. This is the smartest way to manage storage costs in 2013.
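
To schedule it, add a crontab entry on the primary server (crontab -e as root). This assumes the script lives at /root/scripts/dr_sync.sh, as in the header above, and has been made executable:

# Run the DR sync at the top of every hour and log its output
0 * * * * /root/scripts/dr_sync.sh >> /var/log/dr_sync.log 2>&1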

The Hardware Reality: KVM vs. OpenVZ

When disaster strikes, your Recovery Time Objective (RTO) depends entirely on I/O speed. If you are trying to restore 100GB of data on a standard 7200 RPM SATA drive, you will be down for hours. The seek times will kill you.

This is where virtualization technology matters. Many budget hosts use OpenVZ. In OpenVZ, you share the kernel with every other customer on the node. If a "noisy neighbor" is getting DDoS'd or compiling a kernel, your disk I/O hits the floor. You cannot afford that unpredictability during a recovery.

At CoolVDS, we standardized on KVM (Kernel-based Virtual Machine). With KVM, your resources are hard-allocated. We also utilize pure SSD storage arrays. In our benchmarks, restoring a 10GB MySQL dump on our SSD nodes takes approximately 4 minutes, compared to 45 minutes on traditional SAS 15k drives. That is the difference between a minor hiccup and a business-ending outage.
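
If you want a rough feel for what your DR node's disks can sustain before you ever need them, a simple direct-write test gives a ballpark figure. This is a crude sanity check, not a proper benchmark:

# Write 1GB bypassing the page cache and note the MB/s that dd reports
dd if=/dev/zero of=/tmp/iotest bs=1M count=1024 oflag=direct
rm -f /tmp/iotest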

The Fire Drill

A disaster recovery plan that hasn't been tested is just a theoretical document. It’s Schrödinger's Backup: it both exists and doesn't exist until you try to restore it.

The Friday Rule: Once a month, on a Friday afternoon, point your /etc/hosts file to your CoolVDS backup IP. Try to load your website. Can you log in? Is the user data from 10 minutes ago there? If yes, go enjoy your weekend. If not, fix it now, while the sun is still up.
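
For reference, the override is a single line in /etc/hosts on your workstation; the IP and hostnames below are placeholders for your DR address and your production domain:

# Temporarily resolve the production domain to the DR server for testing
203.0.113.50    www.example.no example.no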

Don’t let a hardware failure become a resume-generating event. Scripts break, hard drives die, and cables get pulled. The only thing you can control is your preparation. Build your fortress on KVM, automate your rsync, and keep your replication lag low.

Ready to build a DR site that actually works? Deploy a high-performance KVM SSD instance on CoolVDS today and get your initial sync running before the next downtime hits.