Disaster Recovery in the Post-Schrems II Era: A CTO's Survival Guide

When "The Cloud" Evaporates: Architecting True Resilience in 2021

March 2021 served as a brutal wake-up call for the European hosting industry. When a major datacenter in Strasbourg caught fire, millions of websites didn't just go offline—they ceased to exist. For many CTOs, that morning wasn't about downtime; it was about explaining to the board why the "cloud backup" was physically located in the same burning building as the production server.

If you are hosting critical infrastructure in Norway or the broader EEA, the landscape has shifted. Between the physical fragility exposed earlier this year and the legal bombshell of the Schrems II ruling effectively killing the Privacy Shield, your Disaster Recovery (DR) plan is no longer just a technical document. It is a legal shield.

Let's stop pretending that a nightly tarball sent to an AWS S3 bucket is a strategy. It's a liability. Here is how we architect for survival, keeping performance high and the Datatilsynet (Norwegian Data Protection Authority) happy.

The New "3-2-1" Rule: Sovereignty Matters

The traditional 3-2-1 backup rule (3 copies, 2 media types, 1 offsite) is insufficient if your "offsite" falls under US jurisdiction. Post-Schrems II, transferring personal data of Norwegian citizens to US-controlled providers (even if the servers are in Frankfurt) creates a compliance headache regarding the US CLOUD Act.

Pro Tip: Data Sovereignty is the new latency. Hosting on CoolVDS ensures your data remains under Norwegian/EEA jurisdiction, simplifying GDPR compliance significantly compared to navigating the murky waters of hyperscalers.

Technical Execution: The Immutable Backup Pipeline

Ransomware is smarter in 2021. It hunts for your backups before it encrypts your disk. We need immutable backups. For this, I rely heavily on BorgBackup combined with append-only restrictions on the receiving end. It deduplicates, it encrypts, and it's fast.

1. Setting up the Secure Vault

On your backup destination (e.g., a high-storage CoolVDS instance in a geographically separate datacenter), strictly limit the SSH keys. The production server should be able to push data but never delete it.

Add this to ~borg/.ssh/authorized_keys on the backup server (the borg user the production host connects as) to pin the key to an append-only borg serve and nothing else:

command="borg serve --restrict-to-path /var/backups/repo",restrict ssh-rsa AAAAB3Nza... production-server-key

2. The Client-Side Script

Don't use complex bash loops. Use a robust wrapper. Here is a production-grade snippet for your cron jobs (or Systemd timers, if you are civilized).

#!/bin/bash
# /usr/local/bin/backup-routine.sh

# NB: placeholder passphrase - keep the real one in a root-only file or secrets manager
export BORG_PASSPHRASE='Correct-Horse-Battery-Staple-2021'
REPOSITORY="borg@backup.coolvds.no:repo"

# Log the start time
logger "Starting Backup Routine to CoolVDS Vault"

# Create the backup archive
borg create                         \
    --verbose                       \
    --filter AME                    \
    --list                          \
    --stats                         \
    --show-rc                       \
    --compression lz4               \
    --exclude-caches                \
    $REPOSITORY::'{hostname}-{now:%Y-%m-%d_%H:%M}' \
    /etc                            \
    /var/www                        \
    /opt/docker-compose

# Prune old backups - Keep 7 daily, 4 weekly, 6 monthly
borg prune                          \
    --list                          \
    --prefix '{hostname}-'          \
    --show-rc                       \
    --keep-daily    7               \
    --keep-weekly   4               \
    --keep-monthly  6               \
    $REPOSITORY

Note the use of lz4 compression. On CoolVDS NVMe instances, CPU is rarely the bottleneck—network throughput usually is. LZ4 gives us the best balance of speed vs. compression ratio, ensuring we don't lock up I/O during business hours.
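
Whichever scheduler you pick, wiring the script in is trivial. A minimal cron entry might look like this (timing and log path are assumptions; choose a window outside your traffic peak):

# /etc/cron.d/backup-routine
30 02 * * * root /usr/local/bin/backup-routine.sh >> /var/log/backup-routine.log 2>&1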

Database Consistency: Don't Just Copy Files

I still see developers running cp -r /var/lib/mysql while the service is live. That all but guarantees corrupted tables. Whether you run MySQL 8.0 or MariaDB 10.5, you need consistent snapshots.

If you are running a high-transaction application (like Magento or a custom Laravel app), mysqldump is single-threaded and, depending on flags, locks tables, which kills performance on large datasets. Use Percona XtraBackup (or mariabackup, its MariaDB counterpart). It performs hot backups without locking your InnoDB tables.

xtrabackup --backup --target-dir=/data/backups/mysql/ --datadir=/var/lib/mysql

For recovery, remember to prepare the data before moving it back:

xtrabackup --prepare --target-dir=/data/backups/mysql/
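
Restoring is essentially stop, copy back, fix ownership, start. A rough sketch (service names and paths are assumptions, and you should rehearse this on a scratch instance first):

systemctl stop mariadb                      # or mysql, depending on distro
mv /var/lib/mysql /var/lib/mysql.broken     # --copy-back refuses a non-empty datadir
mkdir /var/lib/mysql
xtrabackup --copy-back --target-dir=/data/backups/mysql/
chown -R mysql:mysql /var/lib/mysql
systemctl start mariadb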

If you are utilizing CoolVDS, you can leverage KVM-level snapshots for a full system state capture. However, for granular recovery (restoring just one table), XtraBackup is mandatory.

Network Resilience: The "NIX" Factor

Disaster isn't always data loss; sometimes it's connectivity loss. Hosting in Norway means dealing with specific routing paths. CoolVDS peers directly at NIX (Norwegian Internet Exchange) in Oslo. This reduces hops for local traffic.
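
You can sanity-check that local traffic really takes the short path. A quick report from a Norwegian vantage point does the job (the target hostname is a placeholder):

mtr --report --report-cycles 10 app.example.no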

For your DR site, choose a location that doesn't share the same power grid or fiber entry point. If your primary node is in Oslo, your secondary ideally sits in Bergen or Trondheim, or at the very least in a distinct availability zone. When configuring your failover IP logic, keep the TTL (Time To Live) on your DNS records low (300 seconds or less) so traffic can be switched quickly.
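
It is worth verifying that the low TTL is what resolvers actually see, not just what sits in your zone file. A quick check (hostname is a placeholder):

dig +noall +answer app.example.no A
# the second column of the answer line is the TTL in seconds; it should read 300 or less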

Metric            | Standard VPS                        | CoolVDS Architecture
Storage Backend   | SATA / hybrid SSD                   | Pure NVMe (high IOPS for rapid restore)
Snapshots         | Daily (scheduled)                   | Instant copy-on-write (COW)
Jurisdiction      | Often unclear (US CLOUD Act risk)   | Strictly Norway/EEA

Automating the "Fire Drill"

A backup you haven't restored is just a hopeful file. Automate the verification. I use a simple systemd timer that runs every Sunday, spins up a fresh Docker container, imports the latest database dump, and runs a simple query: SELECT count(*) FROM users;. If it returns 0 or fails, PagerDuty wakes me up.
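
A minimal sketch of such a check script, assuming a gzipped logical dump at /var/backups/mysql/latest.sql.gz, a database called app with a users table, and the official mariadb:10.5 image (all of these are assumptions; adapt them to your stack):

#!/bin/bash
# /usr/local/bin/dr-test.sh -- weekly restore verification (paths and names are placeholders)
set -euo pipefail

DUMP="/var/backups/mysql/latest.sql.gz"   # newest logical dump, assumed location
DB="app"                                  # assumed database name
CONTAINER="dr-test-$(date +%s)"

# Throwaway MariaDB instance; removed again no matter how the script exits
docker run -d --name "$CONTAINER" -e MYSQL_ROOT_PASSWORD=drtest -e MYSQL_DATABASE="$DB" mariadb:10.5
trap 'docker rm -f "$CONTAINER" >/dev/null' EXIT

# Wait for the server to accept connections (up to ~60 seconds)
for i in $(seq 1 30); do
    if docker exec "$CONTAINER" mysqladmin ping -uroot -pdrtest --silent; then
        break
    fi
    sleep 2
done

# Import the dump and run the sanity query
gunzip -c "$DUMP" | docker exec -i "$CONTAINER" mysql -uroot -pdrtest "$DB"
USERS=$(docker exec "$CONTAINER" mysql -uroot -pdrtest -N -e 'SELECT COUNT(*) FROM users;' "$DB")

if [ "${USERS:-0}" -eq 0 ]; then
    logger -p user.err "DR restore test FAILED: users table is empty"
    exit 1
fi
logger "DR restore test OK: $USERS rows restored"

Any failure makes the script exit non-zero, so the systemd unit fails and whatever alerting you hang off it (PagerDuty, email) fires.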

Here is the systemd timer unit, /etc/systemd/system/dr-test.timer (it triggers a dr-test.service of the same name, sketched below):

[Unit]
Description=Run Weekly Disaster Recovery Test

[Timer]
OnCalendar=Sun *-*-* 03:00:00
Persistent=true

[Install]
WantedBy=timers.target
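
The timer on its own does nothing; it fires a service of the same name. Assuming the check script above lives at /usr/local/bin/dr-test.sh, a matching /etc/systemd/system/dr-test.service can stay minimal:

[Unit]
Description=Weekly Disaster Recovery Restore Test

[Service]
Type=oneshot
ExecStart=/usr/local/bin/dr-test.sh

Enable the pair with systemctl enable --now dr-test.timer; a failed run shows up in systemctl status dr-test.service and in the journal.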

The Verdict

Hardware fails. Fiber gets cut. Datacenters—rarely, but possibly—burn. As architects, we cannot prevent these events, but we can design systems that treat them as minor inconveniences rather than company-ending catastrophes.

By leveraging tools like BorgBackup for immutable history and hosting on infrastructure like CoolVDS that respects data sovereignty and provides the NVMe throughput required for rapid RTO (Recovery Time Objective), you aren't just buying servers. You are buying sleep.

Don't wait for the next outage to test your strategy. Spin up a secondary recovery node on CoolVDS today and see if you can restore your production environment in under 15 minutes.