Disaster Recovery in the GDPR Era: Why Your Norway VPS Needs a "Plan B"
It has been six months since GDPR became enforceable across Europe, and the dust still hasn't settled. If you are running infrastructure in Norway or handling EU citizen data, the stakes have shifted. It is no longer just about losing revenue during downtime; it is about Article 32, which requires the "ability to restore the availability and access to personal data in a timely manner."
I recently audited a setup for a mid-sized e-commerce retailer based in Oslo. They were confident. They had RAID 10. They had a "backup script." But when I asked to see their Recovery Time Objective (RTO) metrics, the room went silent. We simulated a failure. It took them 14 hours to rebuild the array and restore the database dump. In Q4 2018, 14 hours is an eternity. It is a business-ending event.
Disaster Recovery (DR) isn't about buying more hard drives. It is about architecture. In this guide, we are going to look at a pragmatic, cost-effective DR strategy using standard Linux tools available in Ubuntu 18.04 LTS and CentOS 7, specifically tailored for the high-performance environment of a VPS Norway provider like CoolVDS.
The 2018 Landscape: Latency vs. Sovereignty
For Norwegian businesses, the challenge is physical. You want low latency to your users in Oslo, Bergen, and Trondheim, which means hosting locally. However, relying on a single datacenter is a single point of failure. The pragmatic CTO approach involves a primary node in Norway and a warm standby, perhaps in a geographically separated zone or a nearby European hub like Frankfurt or Amsterdam, depending on your latency tolerance and data sovereignty requirements.
Pro Tip: Be careful with US-owned cloud providers. While Privacy Shield is currently in place, the legal ground is shaking. Hosting with a European provider like CoolVDS ensures your data stays within the EEA, simplifying your compliance audit trails significantly.
Step 1: Database Replication (The Heartbeat)
Static files are easy. Databases are hard. For a robust DR plan, you shouldn't rely solely on nightly mysqldump. You need near real-time replication. If your primary server melts at 14:00, a backup from 03:00 is insufficient.
We will use MySQL 5.7 with GTID (Global Transaction ID) replication. It is far more robust than the old binary log position method and allows for easier failover.
Configuration for the Master (Primary)
On your primary NVMe instance, edit /etc/mysql/mysql.conf.d/mysqld.cnf:
[mysqld]
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
binlog_format = ROW
gtid_mode = ON
enforce_gtid_consistency = ON
log_slave_updates = ON
expire_logs_days = 7
# Full durability; affordable thanks to CoolVDS NVMe storage
innodb_flush_log_at_trx_commit = 1
sync_binlog = 1
Note the sync_binlog = 1. On spinning rust (HDD), this kills performance. On NVMe storage, which is standard with CoolVDS, the latency penalty is negligible, guaranteeing that transactions are written to disk before they are acknowledged. This is the difference between "mostly safe" and "ACID compliant."
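After restarting MySQL, confirm the settings actually took effect before you build anything on top of them. A quick sanity check (the service is named mysqld on CentOS 7):
sudo systemctl restart mysql          # mysqld on CentOS 7
mysql -e "SHOW VARIABLES LIKE 'gtid_mode';"
mysql -e "SHOW MASTER STATUS\G"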
Configuration for the Slave (Standby)
On your secondary server, the config is similar, but with a unique server-id.
[mysqld]
server-id = 2
relay_log = /var/log/mysql/mysql-relay-bin.log
# Keep binary logs so this node can be promoted to master during a failover
log_bin = /var/log/mysql/mysql-bin.log
log_slave_updates = ON
gtid_mode = ON
enforce_gtid_consistency = ON
read_only = 1
Setting read_only = 1 is crucial. It prevents accidental writes to your DR node, which would break replication consistency.
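To wire the two nodes together, create a replication user on the primary and point the standby at it with GTID auto-positioning. This is a minimal sketch; the hostname and password are placeholders, and you should seed the standby from a consistent dump of the primary (mysqldump with --set-gtid-purged) before starting the slave:
# On the primary: create a dedicated replication account (placeholder credentials)
mysql -u root -p <<'SQL'
CREATE USER 'repl'@'%' IDENTIFIED BY 'replace-with-a-strong-password';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';
SQL

# On the standby: attach to the primary using GTID auto-positioning
mysql -u root -p <<'SQL'
CHANGE MASTER TO
  MASTER_HOST = 'primary.example.no',
  MASTER_USER = 'repl',
  MASTER_PASSWORD = 'replace-with-a-strong-password',
  MASTER_AUTO_POSITION = 1;
START SLAVE;
SQL
Check SHOW SLAVE STATUS\G afterwards; Slave_IO_Running and Slave_SQL_Running should both report Yes.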
Step 2: File Synchronization with Restic
While rsync is the old reliable standard, 2018 has seen the rise of Restic. It's fast, secure, and creates deduplicated snapshots. This is vital when you are paying for storage. If you change 10MB of a 100GB file, Restic only sends the changes.
Here is a robust wrapper script to push backups to your secondary storage. This assumes you have initialized a repo on your backup server (SFTP).
#!/bin/bash
# /usr/local/bin/backup-job.sh
set -euo pipefail

export RESTIC_REPOSITORY="sftp:user@backup-node:/srv/restic-repo"
export RESTIC_PASSWORD_FILE="/etc/restic-pass"
# Snapshot /var/www and /etc
restic backup /var/www /etc --tag scheduled
# Prune old snapshots to save space
restic forget --keep-last 10 --keep-daily 7 --keep-weekly 4 --prune
# Check repository integrity (metadata only; run with --read-data occasionally for a full scrub)
restic check
Running this hourly via cron is lightweight because Restic is efficient. On CoolVDS instances, the high I/O throughput allows Restic to scan millions of files in seconds without causing the "iowait" spike that usually freezes web servers.
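For completeness, the one-time repository initialization and the cron schedule might look like this, assuming the same repository path and script location as above:
# One-time: create the encrypted repository on the backup node
restic -r sftp:user@backup-node:/srv/restic-repo init

# /etc/cron.d/restic-backup -- run the wrapper hourly and keep a log
0 * * * * root /usr/local/bin/backup-job.sh >> /var/log/restic-backup.log 2>&1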
Step 3: The Failover Mechanism (Keepalived)
If you have two servers in the same datacenter (different racks) for High Availability, keepalived is the standard for IP failover. It uses the VRRP protocol to float a Virtual IP (VIP) between nodes.
Install it via apt install keepalived and configure /etc/keepalived/keepalived.conf:
vrrp_script chk_nginx {
    script "pidof nginx"
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    interface eth0
    state MASTER
    virtual_router_id 51
    priority 101
    # The virtual IP shared between nodes
    virtual_ipaddress {
        10.0.0.100
    }
    track_script {
        chk_nginx
    }
}
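The standby node runs an almost identical config; only the state and priority change, so the primary wins whenever its health check passes. A minimal sketch, assuming the same chk_nginx script block is defined on that node too:
vrrp_instance VI_1 {
    interface eth0
    state BACKUP
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        10.0.0.100
    }
    track_script {
        chk_nginx
    }
}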
If Nginx dies on the Master, its effective priority drops below the Backup's, and the Backup node takes over the IP 10.0.0.100 within a few seconds. Note that for cross-datacenter failover (e.g., Oslo to Amsterdam), you would rely on DNS updates with a low TTL or a load balancer service, as Layer 2 VRRP doesn't stretch across the internet.
The Infrastructure Reality Check
You can script replication all day, but your DR plan is only as good as the underlying virtualization. This is where the "Noisy Neighbor" effect destroys RTO.
In many cheap VPS environments, you are sharing kernel resources in a container (OpenVZ/LXC). If a neighbor gets DDoS'd, your recovery script stalls because the CPU steal time spikes. This is why for production DR, we mandate KVM virtualization.
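You can see whether you are already a victim: steal time shows up in standard tools, and anything consistently above a few percent during your backup window is a red flag.
# The "st" column is CPU time stolen by the hypervisor for other tenants
vmstat 1 5
# Or grab the %st figure from top in batch mode
top -bn1 | grep "Cpu(s)"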
CoolVDS uses KVM to ensure strict resource isolation. When you need to restore 50GB of data, you get the full I/O throughput of the NVMe interface, not just what's "leftover" by other tenants. This consistency is critical when the CEO is standing at your desk asking when the site will be back up.
Conclusion: Test or Fail
A Disaster Recovery plan that hasn't been tested is just a hope. The beauty of modern virtualization is the ability to spin up a clone, test your restore procedure, and tear it down, costing you pennies.
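A restore drill can be as simple as pulling the latest Restic snapshot onto a scratch instance and confirming the standby database is in sync. The paths below are illustrative and assume the same RESTIC_* environment variables as the backup script:
# Pull the latest snapshot into a scratch directory, never over the live site
restic restore latest --target /srv/restore-test
# Verify the standby is caught up before you trust it in a real failover
mysql -e "SHOW SLAVE STATUS\G" | grep -E "Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master"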
Your Action Plan for this week:
- Verify your MySQL binlog configuration.
- Set up an off-site Restic repo.
- Deploy a small KVM instance on CoolVDS to act as your replication slave.
Don't wait for the hardware failure or the fat-fingered rm -rf command. Build your fortress now.