Disaster Recovery in 2018: Surviving Data Loss and GDPR in Norway

When "rm -rf" Meets Production: Disaster Recovery Strategies for Norwegian Infrastructure

It has been two months since the GDPR (General Data Protection Regulation) officially came into force, and the panic has barely subsided. If you are a Systems Architect operating in Oslo or working with European data, the stakes have shifted. We are no longer just talking about uptime; we are talking about legal survivability. I have seen too many companies treat Disaster Recovery (DR) as a checkbox exercise to satisfy an auditor, only to watch their infrastructure crumble when a junior developer accidentally drops the wrong table or a storage array decides to corrupt a sector. In the Norwegian market, where we pride ourselves on stability and reliance on robust infrastructure like the NIX (Norwegian Internet Exchange), having a shaky DR plan is professional negligence. This guide ignores the fluff and focuses on the raw technical implementation of a recovery strategy that actually works when the alarm bells start ringing at 3 AM.

The RTO/RPO Reality Check

Before we touch a single config file, we must define the metrics that matter. Recovery Time Objective (RTO) is how long you can afford to be down. Recovery Point Objective (RPO) is how much data you can afford to lose. Most VPS providers will sell you "backups," but they rarely define the restore speed. If you are running a high-traffic Magento store or a SaaS application, restoring 200GB of data from a remote FTP server on standard spinning rust (HDD) can take hours. This is where hardware selection becomes your first line of defense. We architect CoolVDS around NVMe storage not just for millisecond latency during normal operations, but because the sequential write speeds during a restoration process are significantly faster than SATA SSDs. If your RTO is under one hour, standard storage solutions are physically incapable of meeting your needs for large datasets.
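Before signing off on an RTO, it pays to run the numbers on restore time rather than trusting a brochure. The figures below are illustrative assumptions, not benchmarks; substitute your own dataset size and measured throughput:

#!/bin/bash
# Back-of-the-envelope restore-time estimate. Both numbers are assumptions -
# replace them with values you have actually measured on your own infrastructure.
DATASET_GB=200
THROUGHPUT_MBS=40   # plausible sustained rate for a remote FTP pull onto spinning disks

SECONDS_NEEDED=$(( DATASET_GB * 1024 / THROUGHPUT_MBS ))
echo "Estimated raw transfer time: $(( SECONDS_NEEDED / 60 )) minutes"
# Roughly 85 minutes before you even start importing dumps or replaying logs - and that is the optimistic case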

Pro Tip: Data sovereignty is critical under the new Datatilsynet guidelines. Ensure your DR site is geographically separated from your primary site but remains within the EEA (European Economic Area) to avoid complex transfer impact assessments. A primary node in Oslo with a hot-spare in a distinct Nordic datacenter is the gold standard for compliance.

Strategy 1: Asynchronous Database Replication

Backups are for historical archives; replication is for business continuity. For a MySQL or MariaDB setup (common stacks on Ubuntu 18.04 LTS), relying solely on `mysqldump` is a recipe for high data loss (high RPO). By the time you restore the dump, you have lost hours of transactions. Instead, we implement Master-Slave replication. This allows you to fail over to a "Warm Spare" almost instantly. However, standard replication can propagate errors (like that accidental `DROP TABLE`) instantly. To counter this, we use a delayed slave approach or binary log point-in-time recovery.

Here is a battle-tested configuration for `my.cnf` (MariaDB 10.2) to set up a robust master node. Note the `sync_binlog` and `innodb_flush_log_at_trx_commit` settings—these ensure ACID compliance even if the server crashes, which is non-negotiable for financial data.

[mysqld]
server-id              = 1
log_bin                = /var/log/mysql/mysql-bin.log
expire_logs_days       = 14
max_binlog_size        = 100M
binlog_format          = ROW

# SAFETY FIRST
sync_binlog            = 1
innodb_flush_log_at_trx_commit = 1
innodb_buffer_pool_size = 4G # Adjust based on your CoolVDS RAM allocation

# REPLICATION SECURITY
bind-address           = 0.0.0.0
# In production, use iptables/UFW to restrict access to the Slave IP only
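
The master also needs a dedicated replication account for the slave to authenticate with. A minimal sketch, assuming the slave sits at 10.0.0.5 and using a placeholder password you should obviously replace:

# Run on the master. The IP and password are placeholder assumptions.
mysql -u root -p -e "CREATE USER 'repl'@'10.0.0.5' IDENTIFIED BY 'ChangeMe-Strong-Password';"
mysql -u root -p -e "GRANT REPLICATION SLAVE ON *.* TO 'repl'@'10.0.0.5';"
# Note the File and Position values - the slave needs them for CHANGE MASTER TO
mysql -u root -p -e "SHOW MASTER STATUS;"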

On the slave side, you want to ensure the relay log is resilient. If the slave crashes, it must pick up exactly where it left off.

[mysqld]
server-id              = 2
relay-log              = /var/log/mysql/mysql-relay-bin.log
log_bin                = /var/log/mysql/mysql-bin.log
read_only              = 1
relay_log_recovery     = 1
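
With both config files in place, the slave still needs to be told where its master lives. The statement below is a sketch with placeholder values: the master IP, credentials, binlog file and position must come from your own SHOW MASTER STATUS output, and MASTER_DELAY (MariaDB 10.2.3+ / MySQL 5.6+) is the optional delayed-slave safety net mentioned earlier:

# Run on the slave. Host, user, password, log file and position are placeholders.
# MASTER_DELAY=3600 keeps the slave one hour behind, giving you a window to catch a bad DROP TABLE.
mysql -u root -p -e "CHANGE MASTER TO MASTER_HOST='10.0.0.4', MASTER_USER='repl', MASTER_PASSWORD='ChangeMe-Strong-Password', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=4, MASTER_DELAY=3600; START SLAVE;"

# Both replication threads should report Yes once the link is healthy
mysql -u root -p -e "SHOW SLAVE STATUS\G" | grep -E 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master'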

Strategy 2: The Filesystem Sync

Databases are half the battle; user-uploaded content (images, PDFs, logs) is the other. Tools like NFS are notoriously chatty and fragile over WAN links. In 2018, `rsync` remains the king of reliable file synchronization, but it must be wrapped in a robust script to handle timeouts and logging. We don't just run rsync; we monitor it. Below is a script designed to run via cron every 5 minutes. It uses an exclusive lock file to prevent overlapping execution if a sync takes longer than expected—a common issue when network latency spikes.

#!/bin/bash

LOCKFILE="/var/run/rsync_dr.lock"
LOGFILE="/var/log/rsync_dr.log"
SOURCE_DIR="/var/www/html/"
REMOTE_HOST="dr-user@10.0.0.5"
REMOTE_DIR="/var/www/html/"

# Check for a lock file; only skip this run if the PID it records is still alive
if [ -e "${LOCKFILE}" ] && kill -0 "$(cat "${LOCKFILE}")" 2>/dev/null; then
    echo "[$(date)] Rsync job already running" >> "${LOGFILE}"
    exit 1
fi

# Create lock file
echo $$ > ${LOCKFILE}

# Execute Sync
# -a: archive mode
# -v: verbose
# -z: compress file data during the transfer
# --delete: delete extraneous files from dest dirs
rsync -avz --delete -e "ssh -i /root/.ssh/id_rsa_dr" "$SOURCE_DIR" "$REMOTE_HOST:$REMOTE_DIR" >> "$LOGFILE" 2>&1
RSYNC_STATUS=$?

# Check status (captured immediately so later commands cannot overwrite $?)
if [ $RSYNC_STATUS -eq 0 ]; then
    echo "[$(date)] Sync Successful" >> "$LOGFILE"
else
    echo "[$(date)] Sync FAILED with exit code $RSYNC_STATUS" >> "$LOGFILE"
    # In a real setup, pipe this to mail or a monitoring agent like Nagios
fi

rm -f ${LOCKFILE}
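
To actually schedule it every five minutes, drop a cron entry along these lines; the script path is an assumption, so point it at wherever you saved the script:

# /etc/cron.d/rsync-dr  (file name and script location are assumptions)
*/5 * * * * root /usr/local/sbin/rsync_dr.sh >/dev/null 2>&1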

The "CoolVDS" Factor: Virtualization Matters

Software configuration is useless if the underlying virtualization technology lies to you about resources. Many budget providers in Norway still use container-based virtualization (like OpenVZ) where the kernel is shared. In a disaster scenario involving kernel panics or exploits, this lack of isolation is fatal. CoolVDS utilizes KVM (Kernel-based Virtual Machine), providing true hardware virtualization. This means your DR instance behaves exactly like a bare-metal server. You can load custom kernel modules required for specific backup agents or encryption tools (like LUKS) without asking for permission.
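
Not sure what your current provider actually gives you? On any systemd-based distro (including Ubuntu 18.04), a one-liner reveals the virtualization type; a KVM guest should report kvm, while an OpenVZ container reports openvz:

# Prints the detected virtualization technology, or "none" on bare metal
systemd-detect-virt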

Testing the Unthinkable

A DR plan that hasn't been tested is a hallucination. You need to simulate a failure. On CoolVDS, you can snapshot your instance before a drill. Here is a quick checklist for a "Fire Drill":

  • Stop the Web Service: systemctl stop nginx on the primary.
  • Promote the DB Slave: STOP SLAVE; RESET SLAVE ALL; RESET MASTER;
  • Switch DNS: Lower your TTL (Time To Live) to 300 seconds beforehand. Update the A record to the DR IP, then verify propagation with the dig check after this list.
  • Verify Data Integrity: Check the last 5 orders or comments.
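
Once the A record points at the DR node, a quick query against a public resolver confirms whether the change is live yet; example.com here is a stand-in for your own domain:

# The second column of the answer is the remaining TTL; the last column should be the DR IP
dig +noall +answer @1.1.1.1 example.com A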

To automate the database promotion, you might use a simple script. While tools like MHA (Master High Availability) exist, manual promotion is often safer for smaller teams to avoid "split-brain" scenarios where both servers think they are the master.

#!/bin/bash
# promote_slave.sh

echo "Promoting Slave to Master..."

mysql -u root -p -e "STOP SLAVE;"
mysql -u root -p -e "RESET MASTER;"

# Allow writes
sed -i 's/read_only.*/read_only = 0/' /etc/mysql/my.cnf
systemctl restart mysql

echo "Database is now writable. Update your app config."

Security & Encryption (GDPR Art. 32)

GDPR Article 32 explicitly lists encryption of personal data among the "appropriate technical and organisational measures", and in practice you will have to justify its absence to an auditor. If you are syncing data off-site, it must travel through an encrypted tunnel. While SSH (used in the rsync example above) is secure, for persistent site-to-site links an OpenVPN tunnel is the standard industry approach in 2018. Avoid unencrypted FTP at all costs. Furthermore, ensure your backups stored on CoolVDS are encrypted at rest. We support full disk encryption during installation, ensuring that even if physical drives are decommissioned, your data remains unintelligible.
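
For the at-rest side, one pragmatic pattern (a sketch, assuming GnuPG is installed, a public key for backup@example.com exists, and MySQL credentials are supplied via ~/.my.cnf) is to encrypt database dumps before they ever leave the server:

# Dump, compress and encrypt in one pipeline. Recipient key, paths and credential handling are assumptions.
mysqldump --single-transaction --all-databases \
  | gzip \
  | gpg --encrypt --recipient backup@example.com \
  > /var/backups/db-$(date +%F).sql.gz.gpg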

Disaster recovery is not about pessimism; it is about professionalism. The Norwegian digital landscape is robust, but hardware fails, code breaks, and humans make mistakes. By leveraging the low latency and high IOPS of CoolVDS NVMe instances, combined with rigorous replication strategies, you transform a potential catastrophe into a minor service hiccup. Don't wait for the crash to test your backups.

Ready to build a resilient infrastructure? Deploy a KVM-based CoolVDS instance in Oslo today and secure your data against the unexpected.