Disaster Recovery for Norwegian VPS: Beyond the "3-2-1" Cliché
Let’s be honest: nobody cares about backups until rm -rf /var/lib/mysql happens on a Friday afternoon. Or worse, until a datacenter fire—like the one in Strasbourg last year—reminds us that "The Cloud" is just someone else's computer that can burn down. If your disaster recovery (DR) strategy relies solely on your provider's snapshots or a RAID 10 array, you don't have a plan. You have a prayer.
I've spent the last decade cleaning up after "robust" architectures that folded under pressure. In 2022, the game isn't just about data survival; it's about data sovereignty. With the Schrems II ruling and the Norwegian Data Protection Authority (Datatilsynet) tightening the screws on data transfers to US-owned clouds, simply dumping your encrypted tarballs into an AWS S3 bucket is a compliance minefield.
This guide is for the sysadmins running critical infrastructure in Norway who need a DR strategy that is legally compliant, technically bulletproof, and recovers faster than you can brew a cup of coffee.
The Legal Blast Radius: Schrems II and NIX
Before we touch the config files, we have to touch the law. If you are hosting personal data of Norwegian citizens, sending your backups to a US-based cloud provider—even one with a data center in Frankfurt—is risky business. The US CLOUD Act allows American agencies to subpoena data stored by US companies, regardless of physical location. This conflicts directly with GDPR.
The only safe harbor is strictly European infrastructure. Using a local provider connected to NIX (Norwegian Internet Exchange) ensures your latency is low (often sub-2ms within Oslo) and your data jurisdiction is clear. This is why we built CoolVDS on purely European soil; it removes the legal headache so you can focus on the technical one.
RTO vs. RPO: The NVMe Factor
Your boss wants zero downtime. You know that's impossible. The conversation needs to shift to two metrics:
- RPO (Recovery Point Objective): How much data are you willing to lose? (e.g., "We can lose the last 5 minutes of transactions.")
- RTO (Recovery Time Objective): How long does it take to get back online?
Here is the brutal truth: RTO is bound by I/O.
I recently audited a setup where the team had 10 TB of backups on cheap spinning rust (HDD) storage. When the primary DB corrupted, the restore was capped at roughly 120 MB/s; do the arithmetic and that is about 23 hours of pure transfer time, and the actual restore took nearly 24. On CoolVDS NVMe instances, sustained sequential reads are an order of magnitude higher. If your recovery plan doesn't account for disk throughput, you will fail your RTO.
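Don't take anyone's word for the numbers, including ours: measure the volume you will actually restore from and onto before you sign off on an RTO. A minimal sketch with fio (it's in the standard repos, apt install fio; the test file path and 4G size are placeholders). Run it with --rw=read on the backup store and again with --rw=write on the restore target:
# Lay out a 4 GiB test file and read it back sequentially in 1 MiB blocks, bypassing the page cache
fio --name=restore-estimate --filename=/mnt/backups/fio-test.bin \
    --rw=read --bs=1M --size=4G --direct=1 --ioengine=libaio
rm /mnt/backups/fio-test.bin
Divide your total backup size by the bandwidth fio reports and you have the floor of your RTO, before a single byte of application-level work even starts.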
The Architecture: Immutable Backups with Borg
We are going to set up a robust, encrypted, deduplicated backup system using BorgBackup. Why Borg? Because it deduplicates at the chunk level using content-defined chunking (saving massive amounts of space across repeated backups) and supports authenticated encryption.
Step 1: Preparing the Database (PostgreSQL Example)
Dumping the whole database every night is amateur hour for large datasets. It hammers I/O for hours and leaves you with a 24-hour RPO at best. Instead, we use Point-in-Time Recovery (PITR): a periodic base backup plus continuous WAL archiving.
First create the archive directory (e.g. install -d -o postgres -g postgres /var/lib/postgresql/wal_archive), then edit your postgresql.conf (usually in /etc/postgresql/14/main/, adjust for your version) to enable archiving:
wal_level = replica
archive_mode = on
archive_command = 'test ! -f /var/lib/postgresql/wal_archive/%f && cp %p /var/lib/postgresql/wal_archive/%f'
archive_timeout = 60
Note that wal_level and archive_mode only take effect after a restart (archive_command itself is reloadable). From then on, PostgreSQL copies every completed WAL segment into the archive directory, and archive_timeout forces a segment switch at least once a minute, so your RPO is on the order of 60 seconds. If the server crashes, you replay these segments on top of a base backup.
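That base backup is the half of PITR the config above doesn't cover: archived WAL is useless without one. A minimal sketch using pg_basebackup, run as root on the database host; the /var/lib/postgresql/base_backup location is my own convention, not something PostgreSQL dictates. Schedule it nightly (its own timer or cron entry) so the hourly Borg run in Step 3 always picks up a reasonably fresh copy:
# Replace the previous base backup with a fresh, compressed tar-format one (WAL is streamed alongside)
rm -rf /var/lib/postgresql/base_backup
sudo -u postgres pg_basebackup \
    --pgdata=/var/lib/postgresql/base_backup \
    --format=tar --gzip --checkpoint=fast --progress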
Step 2: Borg Initialization
Install Borg (it's in the standard Ubuntu 20.04/22.04 repos):
sudo apt update && sudo apt install borgbackup
Initialize the repository that will live on your remote backup server (which should be in a geographically separate datacenter, ideally another CoolVDS instance in a different zone). Run this from the database host; the backup server just needs borgbackup installed and SSH access, since the client talks to a remote `borg serve` process:
borg init --encryption=repokey user@backup-server.coolvds.net:/mnt/backups/my-app
Pro Tip: Save the passphrase and the key file securely. If you lose these, your data is mathematically unrecoverable. No support ticket can save you.
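With repokey, the key material lives inside the repository config, which is no help if the repository itself is what you lose. Export a copy and store it offline; both commands are standard borg, only the output path is my own choice:
# Export the repository key to a file you can store somewhere safe and offline
borg key export user@backup-server.coolvds.net:/mnt/backups/my-app /root/my-app-borg.key
# Or generate a printable version for the fireproof safe
borg key export --paper user@backup-server.coolvds.net:/mnt/backups/my-app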
Step 3: The Backup Script
Don't use cron. Cron is dumb. It doesn't handle overlapping jobs or logging well. Create a script /usr/local/bin/backup.sh:
#!/bin/bash
# Abort on the first failing command
set -euo pipefail

REPOSITORY="user@backup-server.coolvds.net:/mnt/backups/my-app"

# Borg needs the passphrase non-interactively; keep it in a root-only file (chmod 600) or your own secret store
BORG_PASSPHRASE="$(cat /root/.borg-passphrase)"
export BORG_PASSPHRASE

# 1. Dump the global cluster data (roles, tablespaces) as the postgres user via peer auth
sudo -u postgres pg_dumpall --globals-only > /var/lib/postgresql/globals.sql

# 2. Ship the WAL archive, the latest base backup (see Step 1), the globals dump and /etc
borg create --stats --compression lz4 \
    "$REPOSITORY"::'{hostname}-{now:%Y-%m-%d-%H%M}' \
    /var/lib/postgresql/wal_archive \
    /var/lib/postgresql/base_backup \
    /var/lib/postgresql/globals.sql \
    /etc

# 3. Prune old backups (keep 7 dailies, 4 weeklies, 6 monthlies), matching only this host's archives
borg prune -v --list --prefix '{hostname}-' \
    --keep-daily=7 --keep-weekly=4 --keep-monthly=6 \
    "$REPOSITORY"
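Make the script executable and run it once by hand before wiring up the timer, then confirm the archive actually landed (borg will prompt for the repository passphrase for the list/info calls):
chmod +x /usr/local/bin/backup.sh
/usr/local/bin/backup.sh
# List archives and show repository statistics
borg list user@backup-server.coolvds.net:/mnt/backups/my-app
borg info user@backup-server.coolvds.net:/mnt/backups/my-app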
Step 4: Automation with Systemd
Use a systemd timer to run this. It allows you to set `OnFailure=` triggers to alert your monitoring system (Zabbix, Nagios, or Prometheus); a minimal example follows the service unit below.
/etc/systemd/system/borg-backup.service:
[Unit]
Description=Borg Backup Service
After=network.target
[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup.sh
User=root
# Security hardening
PrivateTmp=true
ProtectSystem=strict
# Paths the script actually writes to: the globals dump, plus Borg's cache and security/config dirs
ReadWritePaths=/var/lib/postgresql /root/.cache/borg /root/.config/borg
[Install]
WantedBy=multi-user.target
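To make that `OnFailure=` hook do something useful, point it at a small alert unit. This is a sketch only: the backup-alert.service name and the webhook URL are placeholders for whatever your monitoring stack exposes. Add OnFailure=backup-alert.service to the [Unit] section of borg-backup.service, then create:
/etc/systemd/system/backup-alert.service:
[Unit]
Description=Alert on failed backup runs

[Service]
Type=oneshot
# %H expands to the host name; swap the curl call for your Zabbix/Nagios/Alertmanager endpoint
ExecStart=/usr/bin/curl -fsS -X POST -d "borg-backup failed on %H" https://alerts.example.com/hook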
/etc/systemd/system/borg-backup.timer:
[Unit]
Description=Run Borg Backup every hour
[Timer]
OnCalendar=*-*-* *:00:00
Persistent=true
[Install]
WantedBy=timers.target
Enable it:
systemctl enable --now borg-backup.timer
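Then confirm the timer is scheduled and do one supervised run before trusting it:
# Check the next scheduled run
systemctl list-timers borg-backup.timer
# Trigger a run manually and watch the log
systemctl start borg-backup.service
journalctl -u borg-backup.service -f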
The "Fire Drill" Protocol
A backup is not a backup until you have successfully restored it. I recommend a quarterly "Fire Drill." Spin up a fresh CoolVDS instance (takes about 55 seconds), install the base OS, and run the restore:
borg list user@backup-server:/path/to/repo
borg extract user@backup-server:/path/to/repo::hostname-2022-10-27-0900
Measure the time it takes. If extracting 50GB takes 2 hours, your disk I/O is the bottleneck. This is where the underlying hardware matters. We use enterprise-grade NVMe in RAID 10 for our host nodes specifically to handle these high-IOPS scenarios without the "noisy neighbor" effect common in budget VPS providers.
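For the drill itself, here is a sketch of a full PostgreSQL point-in-time restore on the scratch instance, assuming the layout from the earlier steps, PostgreSQL 14 already installed, and an illustrative archive name (web01-2022-10-27-0900). Borg extracts into the current working directory, recreating the absolute paths beneath it:
# Pull the archive into a staging directory
mkdir -p /root/restore && cd /root/restore
borg extract user@backup-server.coolvds.net:/mnt/backups/my-app::web01-2022-10-27-0900

# Lay the base backup down in a clean data directory and bring the WAL archive along
systemctl stop postgresql
rm -rf /var/lib/postgresql/14/main/*
tar -xf var/lib/postgresql/base_backup/base.tar.gz -C /var/lib/postgresql/14/main
tar -xf var/lib/postgresql/base_backup/pg_wal.tar.gz -C /var/lib/postgresql/14/main/pg_wal
cp -a var/lib/postgresql/wal_archive /var/lib/postgresql/

# Tell PostgreSQL to replay the archived WAL on startup
echo "restore_command = 'cp /var/lib/postgresql/wal_archive/%f %p'" >> /etc/postgresql/14/main/postgresql.conf
touch /var/lib/postgresql/14/main/recovery.signal
chown -R postgres:postgres /var/lib/postgresql

systemctl start postgresql   # replays every archived segment, then opens read-write
Wrap the whole sequence in `time` and write the number down; that figure, not a datasheet, is your real RTO.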
Conclusion
Disaster recovery in 2022 requires navigating a minefield of legal compliance and technical debt. By keeping your data within Norwegian jurisdiction and using efficient, encrypted tools like Borg, you satisfy both the lawyers and the engineers.
Don't wait for the inevitable hardware failure or human error. Audit your backup strategy today. If you need a sandbox to test your restore scripts with high-performance storage, deploy a CoolVDS instance and see the difference raw I/O power makes.