The 3:00 AM Panic: Why Manual Backups Are a Death Sentence
It’s a scenario I wouldn't wish on my worst enemy. It’s 3:00 AM on a Tuesday. Your pager buzzes (or, if you're lucky, your BlackBerry does). The primary RAID array on your database server has degraded, and the controller is flagging a double-drive failure during the rebuild. You check your last manual backup. It’s from three weeks ago.
If this thought makes your stomach turn, good. It should. In the world of systems administration, RAID is not a backup. RAID is redundancy. If you rm -rf / on a RAID 10 array, the mirrors faithfully replicate the deletion across every drive instantly.
Today, we are going to stop relying on luck and start relying on Cron. I will show you how to build a bulletproof, automated backup strategy for your Linux VPS, specifically tailored for the high-compliance environment here in Norway.
The "3-2-1" Rule: Not Just a Suggestion
Before we touch a single line of Bash, memorize this rule. If you violate it, you don't have data; you have temporary files.
- 3 copies of your data: The production data plus two backups.
- 2 different media types: e.g., the server disk plus a separate storage array or tape library.
- 1 offsite copy: If the datacenter in Oslo floods or catches fire, your data must exist elsewhere (like a secondary facility in Bergen or Trondheim).
Step 1: The Database Dump (MySQL)
Backing up files is easy. Backing up a live database without corrupting it is the tricky part. If you just copy the raw /var/lib/mysql directory while MySQL is running, you will likely end up with unusable tablespaces.
For most standard deployments (MySQL 5.0/5.1), mysqldump is your workhorse. If you are running InnoDB tables, use the --single-transaction flag to get a consistent snapshot without locking your tables and taking your site offline for the duration of the dump.
#!/bin/bash
# /root/scripts/backup_db.sh
# Fail the pipeline if mysqldump errors out, not just if gzip does
set -o pipefail

TIMESTAMP=$(date +"%F")
BACKUP_DIR="/backup/mysql"
MYSQL_USER="root"
MYSQL_PASSWORD="YourStrongPassword"

# Create dir if not exists
mkdir -p "$BACKUP_DIR"

# Dump all databases in one consistent snapshot; bail out before pruning if the dump failed
mysqldump -u"$MYSQL_USER" -p"$MYSQL_PASSWORD" --all-databases --single-transaction --quick | gzip > "$BACKUP_DIR/full_dump_$TIMESTAMP.sql.gz" || exit 1

# Remove local backups older than 7 days
find "$BACKUP_DIR" -type f -name "*.sql.gz" -mtime +7 -exec rm {} \;
Pro Tip: Don't put your password in the script if you can avoid it. Use a .my.cnf file in the root home directory with chmod 600 permissions to store credentials securely.
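For reference, a minimal /root/.my.cnf looks something like this (the password is obviously a placeholder):

# /root/.my.cnf -- lock it down with: chmod 600 /root/.my.cnf
[client]
user=root
password=YourStrongPassword

With that file in place, you can drop the -u and -p flags from the mysqldump line entirely; the client reads the credentials automatically.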
Step 2: Filesystem Sync with Rsync
Once the database is safe, we need the web files (/var/www or /home). tar is fine for archiving, but rsync is superior because it transfers only the differences between files. This saves massive amounts of bandwidth and time, which is critical when you're paying for transit.
The Local vs. Offsite Dilemma
Storing backups on the same VPS is useless if the filesystem corrupts. You need to push this data to a remote storage server.
#!/bin/bash
# /root/scripts/remote_sync.sh
SOURCE_DIR="/var/www/html"
REMOTE_USER="backupuser"
REMOTE_HOST="backup.coolvds.com" # Your secondary storage
REMOTE_DIR="/home/backupuser/backups/"

# -a archive mode, -v verbose, -z compress in transit.
# Careful with --delete: it mirrors local deletions to the remote copy,
# so keep server-side snapshots if you need point-in-time recovery.
rsync -avz -e ssh --delete "$SOURCE_DIR" "$REMOTE_USER@$REMOTE_HOST:$REMOTE_DIR"
You’ll need to set up SSH key-based authentication so this script can run without a password prompt. Generate a key pair with ssh-keygen -t rsa and copy the public key to the remote server's authorized_keys file.
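On most systems that boils down to something like this (run as root on the VPS; the hostname matches the sync script above, and the empty passphrase is what lets cron run unattended):

# Generate a key pair with no passphrase (assumes root has no existing key)
ssh-keygen -t rsa -b 2048 -f /root/.ssh/id_rsa -N ""
# Install the public key on the storage box (enter the password one last time)
ssh-copy-id -i /root/.ssh/id_rsa.pub backupuser@backup.coolvds.com
# Confirm it logs in without prompting
ssh backupuser@backup.coolvds.com "echo key auth OK"

For extra paranoia, consider prefixing the key in the remote authorized_keys with a command= restriction so it can only run rsync.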
Step 3: Automate with Cron
Scripts are useless if you forget to run them. Edit your crontab (crontab -e) to schedule these jobs during off-peak hours (typically 02:00-04:00 CET).
# Run DB backup at 2:00 AM
0 2 * * * /bin/bash /root/scripts/backup_db.sh >> /var/log/backup_db.log 2>&1
# Run Offsite Sync at 3:00 AM
0 3 * * * /bin/bash /root/scripts/remote_sync.sh >> /var/log/remote_sync.log 2>&1
Legal Compliance in Norway (Personopplysningsloven)
Here is where the "Pragmatic CTO" needs to step in. If you are hosting data for Norwegian customers, you are bound by the Personal Data Act (Personopplysningsloven) of 2000. Datatilsynet takes this seriously.
Sending backups to a cheap storage bucket in the US might run afoul of the Safe Harbor principles if not handled correctly. The safest approach for sensitive Norwegian data is to keep it within the EEA (European Economic Area).
This is why CoolVDS infrastructure is situated directly in Oslo. When you rsync your backups to our secondary storage arrays, your data never leaves Norwegian soil, keeping your compliance audit trails clean and simple.
Performance Considerations: I/O Wait
Running a heavy gzip compression job can spike your CPU usage and increase I/O wait, causing your website to feel sluggish. Use the nice and ionice commands to lower the priority of your backup scripts.
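In practice that just means prefixing the cron entries, for example:

# Lowest CPU priority (19) and lowest best-effort I/O priority (class 2, level 7)
0 2 * * * nice -n 19 ionice -c2 -n7 /bin/bash /root/scripts/backup_db.sh >> /var/log/backup_db.log 2>&1

Note that ionice only has an effect under the CFQ I/O scheduler, which is the default on most current distributions.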
However, the hardware underneath matters most. On legacy SATA setups, a backup job can bring a server to its knees. This is why at CoolVDS, we utilize high-performance SAS RAID-10 arrays (and experimental SSD caching on select nodes) to ensure that high I/O operations like backups don't impact your web server's latency.
Final Thoughts
Automation is the difference between a minor hiccup and a business-ending catastrophe. Don't wait for a drive failure to test your restore process. A backup is only as good as its last successful restore.
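Here is a bare-bones restore drill, run against a scratch VPS rather than production (the database and table names are placeholders for your own):

# Load the newest dump into a scratch MySQL instance
LATEST=$(ls -t /backup/mysql/full_dump_*.sql.gz | head -n 1)
gunzip -c "$LATEST" | mysql -uroot -p

# Spot-check that the data actually came back
mysql -uroot -p -e "SELECT COUNT(*) FROM yourdb.orders;"

If that count looks wrong, better to find out now than at 3:00 AM.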
If you need a robust environment to test your disaster recovery scripts, or simply need a backup destination with low latency within Norway, spin up a CoolVDS instance. We provide the stability you need so you can actually sleep through the night.