RAID Is Not A Backup: The 2012 Guide to Automating Disaster Recovery

There are two types of system administrators: those who have lost data, and those who are about to. I learned this the hard way in 2009, staring at a degraded RAID 5 array that decided to fail a second drive during a rebuild. The client lost three days of orders. I lost a lot of sleep. Since then, my philosophy has been brutal and simple: RAID provides uptime; backups provide survival.

In the Nordic hosting market, we often get complacent because our infrastructure is solid. We have stable power grids and excellent connectivity via NIX (Norwegian Internet Exchange). But a lightning-fast connection to Oslo won't save you when a junior developer runs rm -rf / var/www/html (note the accidental space) on your production VPS. If you are relying on manual tarballs or your provider’s “weekly snapshot,” you are playing Russian Roulette with your business.

The Anatomy of a "Set and Forget" Strategy

Automation is not a luxury; it is the baseline for professional operations. In 2012, we don't need expensive enterprise software licenses to secure a Linux server. We need Bash, cron, and a healthy paranoia. The goal is to create a rotation that keeps daily dumps for a week, and weekly dumps for a month.

Here is the reality of the toolchain we are working with on a standard CentOS 6 or Debian Squeeze install:

  • Bash: The glue.
  • mysqldump: For database consistency.
  • tar & gzip: For file compression.
  • rsync: For offsite transport.
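The daily-for-a-week, weekly-for-a-month rotation described above can be sketched with nothing more than find's -mtime and a day-of-week check. This is an illustrative sketch: the paths under ./backup-demo and the seeded archive names are for demonstration only (in production you would point these at /backup), and the touch commands exist purely so the prune step has something to act on:

```shell
#!/bin/bash
# Illustrative rotation sketch: demo paths, not production paths.
DAILY=./backup-demo/daily
WEEKLY=./backup-demo/weekly
mkdir -p "$DAILY" "$WEEKLY"

# Seed a stale and a fresh daily archive so the prune step is visible
touch -d "10 days ago" "$DAILY/web_backup_old.tar.gz"
touch "$DAILY/web_backup_today.tar.gz"

# On Sundays, promote the newest daily archive into the weekly set
if [ "$(date +%u)" -eq 7 ]; then
    latest=$(ls -t "$DAILY"/*.tar.gz | head -n 1)
    cp "$latest" "$WEEKLY/"
fi

# Retention policy: dailies for 7 days, weeklies for 31 days
find "$DAILY" -name "*.tar.gz" -mtime +7 -exec rm {} \;
find "$WEEKLY" -name "*.tar.gz" -mtime +31 -exec rm {} \;
```

Run daily from cron, this keeps seven rolling dailies plus four to five weeklies without any manual housekeeping.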

Step 1: The Database Dump

Most scripts fail because they ignore database locking. If you copy the raw /var/lib/mysql directory while the server is running, you will get corrupt tables. For InnoDB tables (which you should be using over MyISAM for data integrity), you must use the --single-transaction flag. This ensures a consistent snapshot without locking the entire web application.

mysqldump -u root -p'YourComplexPassword' --all-databases --single-transaction --quick --lock-tables=false | gzip > /backup/db/full_dump_$(date +%F).sql.gz
Pro Tip: Never put your root password directly in the command line if you can avoid it, as it shows up in ps aux process lists. Create a .my.cnf file in your home directory with chmod 600 permissions containing your credentials.
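A minimal sketch of that credentials file (the password shown is a placeholder; substitute your own):

```shell
# Create ~/.my.cnf so mysqldump reads credentials itself
# instead of exposing them on the command line
cat > "$HOME/.my.cnf" <<'EOF'
[client]
user=root
password=YourComplexPassword
EOF

# Lock it down so only the owner can read it
chmod 600 "$HOME/.my.cnf"
```

With this in place, the dump command shrinks to mysqldump --all-databases --single-transaction --quick, with no password visible in ps aux.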

Step 2: The Filesystem Archive

Web files are static. Databases are dynamic. Treat them differently. For your web root, we use tar. We need to preserve permissions and ownership, especially if running suEXEC or similar security wrappers.

#!/bin/bash
# Standard Backup Script v1.2
BACKUP_DIR="/backup"
DATE=$(date +%Y-%m-%d)

# Ensure directory exists
mkdir -p "$BACKUP_DIR"

# Archive web root
echo "Starting file backup..."
tar -czf "$BACKUP_DIR/web_backup_$DATE.tar.gz" /var/www/vhosts

# Remove backups older than 7 days to save space
find "$BACKUP_DIR" -name "*.tar.gz" -mtime +7 -exec rm {} \;
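To make this truly "set and forget," hand the script to cron. The path /usr/local/bin/backup.sh below is an assumption for illustration; adjust it to wherever you save the script. Add via crontab -e:

```shell
# Run the nightly backup at 02:30, logging output for later review
30 2 * * * /usr/local/bin/backup.sh >> /var/log/backup.log 2>&1
```

The 02:30 slot is deliberate: it avoids midnight log rotation and the nightly peaks of most traffic patterns, reducing I/O contention during the tar run.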

The Crucial Step: Offsite Transport

If your backup sits on the same physical disk as your production data, you do not have a backup. You have a copy. In the event of a total filesystem corruption or hardware failure, both are gone. You need to move this data off-server immediately.

We use rsync over SSH. It is bandwidth-efficient, secure, and ubiquitous. For CoolVDS customers, we recommend spinning up a small, secondary storage VPS in a different datacenter zone. This keeps latency low (essential for large transfers) but separates the failure domains.

First, generate an SSH key pair without a passphrase on your production server:

ssh-keygen -t rsa -b 2048

Copy the public key to your backup server's authorized_keys file. Then, append this to your backup script:

# Sync to remote CoolVDS storage instance
rsync -avz -e "ssh -p 22" $BACKUP_DIR/ user@backup.coolvds.net:/home/user/backups/

# check exit status
if [ $? -eq 0 ]; then
    echo "Offsite backup successful"
else
    echo "Offsite backup FAILED" | mail -s "Backup Alert" admin@yourdomain.no
fi

Legal Compliance: The Norwegian Context

We operate under the Personal Data Act (Personopplysningsloven). Data jurisdiction matters. If you are storing customer data involving Norwegian citizens, dumping that data onto a cheap FTP server hosted in a non-compliant jurisdiction can land you in hot water with Datatilsynet.

By keeping your primary server and your backup node within trusted European infrastructure, you minimize legal exposure. CoolVDS infrastructure is designed with this in mind, keeping data strictly within agreed jurisdictions. Don't export your liability to a budget host across the Atlantic just to save 50 NOK a month.

Testing: The Step Everyone Skips

A backup is Schrödinger's file: it exists and does not exist simultaneously until you try to restore it. I dedicate one Friday afternoon every month to a "fire drill." I spin up a fresh CoolVDS instance and attempt to restore the application from the previous night's backup.
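Even before the full restore, a quick integrity check catches the most common failure: a truncated or corrupt archive. This sketch builds a tiny sample archive under the illustrative ./drill-demo path just to demonstrate the check; in a real fire drill you would point tar -tzf at last night's actual backup:

```shell
#!/bin/bash
# Fire-drill sketch: demo paths, not production paths.
# Build a tiny sample archive so the check below has input.
mkdir -p ./drill-demo/site
echo "hello" > ./drill-demo/site/index.html
tar -czf ./drill-demo/web_backup_test.tar.gz -C ./drill-demo site

# A truncated tarball or corrupt gzip stream fails this listing
if tar -tzf ./drill-demo/web_backup_test.tar.gz > /dev/null; then
    echo "archive OK"
else
    echo "archive CORRUPT"
fi
```

This only proves the archive is readable, not that the application works; it complements the monthly full-restore drill, it does not replace it.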

Here is the restoration step many get wrong with MySQL. mysqldump's default options already include DROP TABLE statements, so a full import overwrites existing tables cleanly; but if your dump was made without them, you must drop the old schema first to avoid conflicts. Assuming a gzipped dump:

# Restoration Logic
gunzip < full_dump_2012-10-22.sql.gz | mysql -u root -p

Conclusion

Hardware fails. Software has bugs. Humans make typos. This trifecta of chaos is unavoidable. Your defense is a robust, automated backup strategy that runs without your intervention.

If you are tired of worrying about I/O contention during backups or need a reliable destination for your rsync jobs, it is time to look at your infrastructure. At CoolVDS, we provide the high-performance SSD-backed instances and private networking capabilities that make disaster recovery painless, not an afterthought.

Don't wait for the crash. Secure your data today.