Automated Backups: The Only Insurance Policy That Matters
There is a specific kind of silence that fills a server room—or a Slack channel—when a production database table vanishes. It's not the fan noise. It's the sound of a career hanging in the balance. I experienced this in 2008 handling a large e-commerce portal in Oslo. The RAID controller reported all drives healthy. The filesystem was intact. But a rogue SQL query from a junior developer had wiped the orders table clean.
RAID is redundancy. It is not a backup. If you delete a file, the RAID controller dutifully deletes it from all mirrored drives instantly. If you are relying on RAID-10 SAS arrays to save you from human error or corruption, your strategy is already broken.
In the Nordic hosting market, where reliability is the currency we trade in, a robust, automated backup strategy is non-negotiable. It separates the professionals from the amateurs who monitor uptime but never verify their archives. Here is how to build a bulletproof backup system using tools available today, adhering to the Personopplysningsloven (Personal Data Act).
The 3-2-1 Rule (It Still Applies)
Before touching a single line of Bash, you need architecture. The gold standard remains:
- 3 copies of your data.
- 2 different media types (e.g., live disk and archive storage).
- 1 copy offsite (different physical location).
For a VPS environment, "media types" usually translates to your production filesystem and a separate, dedicated storage server. The offsite copy is critical. If a fire hits the datacenter in Oslo, your backup on the rack next door melts too.
Step 1: Database Consistency is Key
You cannot simply copy /var/lib/mysql while the server is running. You will end up with corrupted MyISAM tables or inconsistent InnoDB tablespaces. You need a logical dump.
For MySQL 5.0/5.1, the command varies based on your storage engine. If you are using InnoDB (which you should be on CoolVDS for crash recovery), use the --single-transaction flag to avoid locking the tables during the dump.
mysqldump -u root -p'YourSuperSecurePassword' --all-databases --single-transaction --quick | gzip > /backup/db_dump_$(date +%F).sql.gz
Pro Tip: Never put the password directly on the command line if other users have shell access; they can see it in ps aux. Use a .my.cnf file in your home directory with restricted 600 permissions.
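A minimal sketch of that setup (the password shown is, of course, a placeholder):

```shell
# A .my.cnf keeps credentials out of the process list.
# The [client] section is read by both mysql and mysqldump.
cat > "$HOME/.my.cnf" <<'EOF'
[client]
user=root
password=YourSuperSecurePassword
EOF

# Lock it down so only the owner can read it.
chmod 600 "$HOME/.my.cnf"

# The dump command then needs no -p flag at all:
# mysqldump --all-databases --single-transaction --quick | gzip > dump.sql.gz
```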
Step 2: Efficient File Synchronization
Transferring massive tarballs every night eats bandwidth and increases the window for failure. Enter rsync. It uses a delta-transfer algorithm, sending only the differences between the source and destination files. This is crucial if you are pushing backups from a CoolVDS instance in Oslo to a secondary location in Stockholm or London.
Here is the command to sync your web root, preserving permissions, ownership, and symlinks:
rsync -avz --delete -e "ssh -p 22" /var/www/ user@backup.coolvds.internal:/mnt/backups/web/
Understanding the Flags
| Flag | Function |
|---|---|
| -a | Archive mode (preserves permissions, times, owners). |
| -v | Verbose output (good for logs). |
| -z | Compress file data during the transfer. |
| --delete | Delete files in the destination that no longer exist in the source (mirroring). |
Step 3: The Master Script
Let's combine this into a script that rotates backups. We don't want to just overwrite yesterday's data; we want a 7-day history. This script creates a dated archive and removes backups older than 7 days.
#!/bin/bash
# /usr/local/bin/daily_backup.sh
set -o pipefail  # without this, $? reflects gzip's status, not mysqldump's

# Configuration
BACKUP_ROOT="/backups"
DATE=$(date +%F)
MYSQL_USER="root"
MYSQL_PASS="correct-horse-battery-staple"  # better: move credentials to a 600-permission ~/.my.cnf

# Ensure backup directory exists
mkdir -p "$BACKUP_ROOT"

echo "[$(date)] Starting Database Dump..."

# Dump all databases
mysqldump -u "$MYSQL_USER" -p"$MYSQL_PASS" --all-databases --single-transaction --quick \
    | gzip > "$BACKUP_ROOT/mysql_full_$DATE.sql.gz"

if [ $? -eq 0 ]; then
    echo "[$(date)] Database dump successful."
else
    echo "[$(date)] Database dump FAILED!"
    exit 1
fi

# Compress configuration and web files
tar -czf "$BACKUP_ROOT/etc_$DATE.tar.gz" /etc
tar -czf "$BACKUP_ROOT/www_$DATE.tar.gz" /var/www

# Cleanup old files (older than 7 days)
find "$BACKUP_ROOT" -name "*.gz" -type f -mtime +7 -exec rm {} \;

# Offsite sync (assuming SSH keys are set up)
rsync -avz --delete "$BACKUP_ROOT/" remoteuser@10.0.0.5:/remote/backup/dir/

echo "[$(date)] Backup Routine Complete."
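A backup you have never tested is a hope, not a backup. A quick integrity check worth appending to the script, shown here against a throwaway archive in a temp directory (in the real script, point BACKUP_ROOT at /backups):

```shell
# Temp dir stands in for the real backup root.
BACKUP_ROOT=$(mktemp -d)
DATE=$(date +%F)

# Stand-in for the real dump:
echo "CREATE TABLE orders (id INT);" | gzip > "$BACKUP_ROOT/mysql_full_$DATE.sql.gz"

# gzip -t verifies the compressed stream without extracting it.
if gzip -t "$BACKUP_ROOT/mysql_full_$DATE.sql.gz"; then
    echo "Dump archive passed integrity check."
else
    echo "Dump archive is CORRUPT!" >&2
    exit 1
fi
```

This catches truncated transfers and disk corruption early, while last night's data is still recoverable elsewhere.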
Security and Compliance (Datatilsynet is Watching)
In Norway, the Datatilsynet requires strict control over personal data. If your backup contains customer emails or payment history, leaving it as a plain text SQL dump on a remote FTP server is negligence.
You must encrypt your backups before they leave the server. GnuPG (GNU Privacy Guard) is the standard tool here. Add this line to your script before the rsync step:
gpg --encrypt --recipient "admin@yourcompany.com" $BACKUP_ROOT/mysql_full_$DATE.sql.gz
This ensures that even if the remote backup server is compromised, the data remains unreadable without your private key. This is a vital layer of defense given the increasing sophistication of attacks we've seen this year.
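The --recipient form assumes you already have a GPG keypair set up. As a stopgap you can rehearse the workflow with symmetric (passphrase-based) encryption; the passphrase below is a placeholder and should come from a 600-permission file in production. Whichever mode you use, delete the plaintext once the encrypted copy exists:

```shell
# Stand-in for the real dump file, in a temp location.
DUMP=$(mktemp --suffix=.sql.gz)
echo "sensitive customer data" | gzip > "$DUMP"

# Symmetric encryption; --batch and --pinentry-mode loopback keep
# gpg non-interactive so it works from cron.
gpg --batch --yes --pinentry-mode loopback --passphrase "change-me" \
    --symmetric --output "$DUMP.gpg" "$DUMP"

# Never ship the plaintext: only the .gpg copy should leave the server.
rm -f "$DUMP"
```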
The Infrastructure Factor
Scripting is only half the battle. The underlying hardware dictates the speed and reliability of these operations. Running high-compression gzip tasks or massive rsync jobs generates significant I/O load. On budget hosts using shared spinning disks, this "noisy neighbor" effect can kill your production website's performance during the backup window.
This is where architecture choices matter. At CoolVDS, we utilize KVM virtualization. Unlike container-based solutions (like OpenVZ or Virtuozzo) where the kernel is shared, KVM provides true hardware isolation. Furthermore, our infrastructure uses a separate network interface (eth1) for private traffic.
By routing your backup traffic over the private VLAN to our dedicated backup storage instances, you achieve two things:
- Security: Traffic never touches the public internet.
- Performance: You don't saturate your public bandwidth, ensuring your users in Oslo or Trondheim see no slowdowns.
Automation via Cron
Finally, automate the script. Edit your crontab with crontab -e. Run this at 3:00 AM, typically the lowest traffic window for European audiences.
0 3 * * * /bin/bash /usr/local/bin/daily_backup.sh >> /var/log/daily_backup.log 2>&1
Don't wait for a hardware failure or a hacker to teach you the value of backups. Verify your restores today. If you need a sandbox to test your recovery scripts without risking production, spin up a KVM instance on CoolVDS. It takes less than a minute, and the peace of mind is worth every øre.