Automating Survival: Why Your Manual Backups Will Fail
I learned the hard way in 2004. I was managing a dedicated server for a high-traffic forum. I had a RAID 1 array, so I thought I was invincible. Then, a controller failure corrupted the file system on both drives simultaneously. No recent off-site backup. I spent three days piecing together data from Google Cache and old hard drives. Never again.
If you are managing a VPS in 2010 and you don't have an automated, off-site backup strategy running via Cron, you are negligent. Hard drives die. File systems rot. And worst of all, human error is inevitable. One rm -rf in the wrong directory, and your RAID array will faithfully mirror that deletion instantly.
The Golden Rule: 3-2-1
The concept is simple, yet often ignored by developers rushing to meet a launch deadline. You need:
- 3 copies of your data.
- 2 different media types (e.g., your live server's disk and a separate backup server).
- 1 copy off-site (geographically separated).
For those of us hosting in Norway, this adds a layer of complexity regarding the Personopplysningsloven (Personal Data Act). You cannot just dump your customer database onto a server in the US without navigating a legal minefield regarding the Safe Harbor framework. Data sovereignty matters.
The Database Dilemma: Locking and Consistency
Files are easy to back up. Databases are the nightmare. If you are running a high-traffic site on MySQL 5.0 or 5.1, simply copying the /var/lib/mysql directory while the server is running all but guarantees an inconsistent, corrupt copy: the data files keep changing underneath you mid-copy.
Many of you are still using the MyISAM storage engine because it's the default. The problem? MyISAM is not transactional, so a dump taken while writes are in flight can capture tables in a half-updated state. To back it up safely, you have to lock the tables for the duration of the dump, which means downtime for your application.
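If you are stuck on MyISAM for now, at least take the dump under a global read lock so the snapshot is internally consistent. A minimal sketch; the credentials and target path are placeholders, not gospel:
# MyISAM: hold a global read lock while dumping (writes will block until it finishes)
mysqldump -u root -pYourSecurePassword --lock-all-tables --all-databases | gzip > /backup/myisam_dump.sql.gz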
If you have migrated to InnoDB (which you should, for row-level locking and transactions), you can take a consistent hot backup with mysqldump's --single-transaction option, with no table locks and no downtime.
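Not sure which engine your tables are actually on? information_schema will tell you; this quick check is just a sketch and assumes MySQL 5.0 or later:
# List tables still using MyISAM (candidates for ALTER TABLE ... ENGINE=InnoDB)
mysql -u root -p -e "SELECT table_schema, table_name FROM information_schema.tables WHERE engine = 'MyISAM' AND table_schema NOT IN ('mysql', 'information_schema');"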
The Script
Here is a battle-tested bash snippet I use on my CoolVDS instances. It stamps each run with the date, dumps all databases, and compresses the web root.
#!/bin/bash
# Simple Backup Script v1.2
TIMESTAMP=$(date +"%F")
BACKUP_DIR="/backup/$TIMESTAMP"
MYSQL_USER="root"
MYSQL_PASSWORD="YourSecurePassword"
mkdir -p "$BACKUP_DIR"
# 1. Database Dump
# Use --single-transaction for InnoDB to avoid locking tables
echo "Dumping MySQL..."
mysqldump -u"$MYSQL_USER" -p"$MYSQL_PASSWORD" --single-transaction --all-databases | gzip > "$BACKUP_DIR/db_dump.sql.gz"
# 2. File Backup
echo "Archiving Web Files..."
tar -czf "$BACKUP_DIR/www_files.tar.gz" /var/www/html
# 3. Clean up local backups older than 7 days
find /backup/ -mindepth 1 -maxdepth 1 -mtime +7 -exec rm -rf {} \;

Pro Tip: Never store the backup on the same partition as your OS. On CoolVDS, you can attach a secondary virtual disk or mount an external storage block. If your root partition fills up, your server crashes. Keep them separate.
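Save the script somewhere like /usr/local/bin/backup.sh (chmod 700, since it contains a password) and let Cron run it every night. The path, time and log file below are just my habit; adjust them to your own maintenance window:
# root's crontab (crontab -e): run the backup at 03:15 every night
15 3 * * * /usr/local/bin/backup.sh >> /var/log/backup.log 2>&1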
Getting It Off-Site: Rsync is King
A local backup saves you if you accidentally delete a file. It does not save you if the datacenter catches fire or if the physical host node suffers a catastrophic hardware failure. You need to push this data away.
FTP is insecure; do not use it. In 2010, the industry standard is rsync over SSH. It’s efficient because it only transfers the deltas (changes), saving bandwidth—a crucial factor if you are paying for international transit.
rsync -avz -e ssh /backup/ [email protected]:/remote/backup/path/

This is where latency becomes a factor. If your primary VPS is in Oslo, pushing gigabytes of daily backups to a server in California is slow and eats into your transfer limits. Using a backup target within the NIX (Norwegian Internet Exchange) or at least within Northern Europe ensures your backup window stays short.
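For Cron to push that rsync unattended, the VPS needs key-based SSH authentication to the backup host; a password prompt at 03:00 helps nobody. A rough sketch, reusing the backup.example.no host from above (the key path and options are my own assumptions):
# One-time setup on the VPS: generate a dedicated key with no passphrase
ssh-keygen -t rsa -b 2048 -f /root/.ssh/backup_key -N ""
# Install the public key on the backup host
ssh-copy-id -i /root/.ssh/backup_key.pub [email protected]
# Then point the nightly rsync at that key
rsync -avz -e "ssh -i /root/.ssh/backup_key" /backup/ [email protected]:/remote/backup/path/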
The "CoolVDS" Advantage
We built CoolVDS because we were tired of oversold OpenVZ containers where "guaranteed RAM" was a lie. When it comes to backups, the underlying virtualization technology matters.
Because CoolVDS uses Xen HVM and KVM, we provide true isolation. You can load your own kernel modules for agent-based solutions like R1Soft (Idera) CDP, or run a full suite like Bacula, without hitting the permission walls common in shared environments. Furthermore, our storage backends use enterprise-grade RAID-10 SAS arrays. No hardware is infallible, but that level of I/O throughput means your nightly tar jobs won't drive up I/O wait and steal time and drag your web server down with them.
Final Thoughts
Set up your Cron jobs today. Test your restoration process tomorrow. A backup that hasn't been tested is just a hope, and hope is not a strategy. If you need a sandbox to test your recovery scripts without risking your production environment, spin up a CoolVDS instance. It takes less than a minute, and it might just save your career.
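As a starting point for that restore drill, here is the rough outline I run on a throwaway instance; the paths mirror the script above, and the date trick assumes GNU date:
# Pull today's backup back from the off-site host onto the test instance
rsync -avz -e ssh "[email protected]:/remote/backup/path/$(date +%F)/" /restore/
# Reload every database from the dump
gunzip < /restore/db_dump.sql.gz | mysql -u root -p
# Unpack the web root and spot-check it against production
tar -xzf /restore/www_files.tar.gz -C /restore/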