
Sleep Soundly: The Sysadmin’s Guide to Automated Backups & Disaster Recovery


The Art of the Automated Safety Net: Scripting Bulletproof Backups

There are two types of system administrators in this world: those who have lost data, and those who are about to. If you are relying on manual tarballs or, heaven forbid, FTPing files to your local workstation whenever you remember to, you are playing Russian Roulette with your infrastructure. In the high-stakes world of hosting—whether you're running a Magento shop or a critical email server—uptime is vanity, but data integrity is sanity.

Today, we aren't just talking about dumping databases. We are talking about building a fully automated, set-and-forget backup architecture that respects the 3-2-1 rule and keeps the Norwegian Data Inspectorate (Datatilsynet) happy.

The 3-2-1 Rule: Non-Negotiable

Before we touch a single line of Bash, let's establish the philosophy. The 3-2-1 rule is the gold standard for a reason:

  • 3 Copies of your data: One live, two backups.
  • 2 Different media types: e.g., the VPS disk and an external storage array.
  • 1 Offsite copy: If the datacenter in Oslo floods, your data must exist in Bergen or Frankfurt.

Step 1: Database Consistency is Key

Simply copying /var/lib/mysql while the server is running is a great way to corrupt your InnoDB tablespaces. You need a consistent dump.

For MySQL 5.0/5.1, use the --single-transaction flag so InnoDB tables are dumped as a consistent snapshot without being locked for the duration of the dump. This is crucial for high-traffic sites, where a locking dump leaves queries stuck in the dreaded "Waiting for table lock" state.

#!/bin/bash
# /root/scripts/db_backup.sh
TIMESTAMP=$(date +"%F")
BACKUP_DIR="/backup/mysql"
MYSQL_USER="root"
MYSQL_PASS="YourSecurePassword"

mkdir -p "$BACKUP_DIR"

# Dump all databases in one consistent snapshot
mysqldump -u"$MYSQL_USER" -p"$MYSQL_PASS" --all-databases --single-transaction --quick --lock-tables=false > "$BACKUP_DIR/full_dump_$TIMESTAMP.sql"

# Compress it to save space
gzip "$BACKUP_DIR/full_dump_$TIMESTAMP.sql"
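
Left alone, that directory will grow until the disk fills. A short retention pass keeps it in check; a minimal sketch, assuming the naming scheme from the script above (the 14-day window is arbitrary, tune it to your own policy):

# Prune compressed dumps older than 14 days
find /backup/mysql -name "full_dump_*.sql.gz" -mtime +14 -delete
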
Pro Tip: Never store passwords in plain text if you can avoid it. Use a .my.cnf file in your home directory with chmod 600 permissions so mysqldump can run without exposing credentials in the process list.
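
A minimal sketch of that approach, assuming the dump runs as root and the same credentials as the script above:

# One-time setup: credentials file readable only by root
cat > /root/.my.cnf <<'EOF'
[client]
user=root
password=YourSecurePassword
EOF
chmod 600 /root/.my.cnf

# mysqldump now reads /root/.my.cnf automatically; drop -u/-p from the script
mysqldump --all-databases --single-transaction --quick > /backup/mysql/full_dump_$(date +%F).sql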

Step 2: The Power of Rsync

Once your database is dumped and your web files are ready, you need to move them. rsync is the Swiss Army knife here. Unlike FTP, it only transfers the deltas (changes), which is vital if you are pushing gigabytes of data over the wire.

Here is a robust command to sync your local backup directory to a remote storage server (perhaps a secondary CoolVDS storage instance):

rsync -avz -e "ssh -p 22" /backup/mysql user@remote.backup.server:/home/user/backups/

Breakdown:

  • -a: Archive mode (preserves permissions, owners, groups).
  • -v: Verbose.
  • -z: Compression (saves bandwidth).
  • -e ssh: Uses the secure SSH protocol for transfer.
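
For the nightly run in the next step, this transfer has to work without a password prompt. Key-based SSH authentication handles that; a minimal sketch, reusing the user@remote.backup.server placeholder from above:

# Generate a key pair for root (empty passphrase, since cron cannot type one)
ssh-keygen -t rsa -b 2048 -f /root/.ssh/id_rsa -N ""

# Copy the public key to the backup host (one-time, interactive)
ssh-copy-id -i /root/.ssh/id_rsa.pub user@remote.backup.server

# Verify: this should now log in without asking for a password
ssh -p 22 user@remote.backup.server "echo OK"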

Step 3: Automating with Cron

A script that doesn't run automatically is useless. Add this to your root crontab (crontab -e) to run every night at 3:00 AM, when traffic is lowest.

0 3 * * * /bin/bash /root/scripts/db_backup.sh && /usr/bin/rsync -avz ...
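
The trailing rsync is easier to maintain if both steps live in one wrapper script that also logs what happened. A sketch of that idea, with nightly_backup.sh as a hypothetical name of our own choosing (adjust paths and host to your setup):

#!/bin/bash
# /root/scripts/nightly_backup.sh - hypothetical wrapper around the steps above
LOG="/var/log/nightly_backup.log"
{
  echo "=== Backup started: $(date) ==="
  /bin/bash /root/scripts/db_backup.sh || exit 1
  /usr/bin/rsync -avz -e "ssh -p 22" /backup/mysql user@remote.backup.server:/home/user/backups/ || exit 1
  echo "=== Backup finished: $(date) ==="
} >> "$LOG" 2>&1

The crontab entry then shrinks to a single call:

0 3 * * * /bin/bash /root/scripts/nightly_backup.sh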

Compliance and the "Datatilsynet" Factor

Operating in Norway means adhering to strict privacy norms, specifically the Personal Data Act (Personopplysningsloven). If you are hosting customer data, you are responsible for its integrity. Having an automated backup log proves you are taking "reasonable measures" to secure data.

Furthermore, consider latency. When replicating data, distance matters. Using a backup server within the Nordic region (via NIX - the Norwegian Internet Exchange) ensures your nightly syncs finish in minutes, not hours. Pushing data across the Atlantic to US servers can often result in timeouts due to latency jitter.

The CoolVDS Advantage

We built CoolVDS on enterprise-grade SAS 15k RPM RAID-10 arrays. While our hardware redundancy is top-tier, RAID is not a backup. It protects against disk failure, not human error (like accidental deletion).

For serious professionals, we recommend provisioning a secondary, smaller CoolVDS instance purely for storage. Because our internal network is unmetered, you can rsync terabytes of data between your main web server and your backup node without incurring a single krone in bandwidth overage charges.

Final Thoughts

Don't wait for a disk controller to die or a rogue script to wipe your public_html folder. Set up your Cron jobs today. If you need a sandbox to test your recovery scripts, spin up a CoolVDS instance; our provisioning system gets you root access in under 60 seconds.
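
A recovery drill does not need to be elaborate. Assuming the dump naming from the script above, restoring the newest archive into a scratch MySQL instance can be as simple as:

# Load the most recent dump into a throwaway MySQL server
LATEST=$(ls -t /backup/mysql/full_dump_*.sql.gz | head -n 1)
gunzip -c "$LATEST" | mysql -u root -p

If that restore completes and your application comes up against the scratch database, your backups are worth something. If it fails, better to find out on a test instance than during a real outage.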
