When the Grid Blinks: A Realist's Guide to Disaster Recovery in Norway (2016 Edition)

It is 3:14 AM. Your phone illuminates the nightstand. It’s not a text from a friend; it’s a PagerDuty alert. Your primary database node in Oslo just went dark. Packet loss: 100%. SSH: Timeout.

If your stomach just dropped, you don't have a Disaster Recovery (DR) plan. You have a hope and a prayer.

In the wake of the recent Safe Harbor invalidation, moving data back to Europe—specifically Norway—is the only legal move for many of us dealing with sensitive user data. But keeping data local brings its own challenge: you can't just rely on a massive US cloud to magically abstract away hardware failure. You need to own your uptime. As a sysadmin who has watched RAID controllers catch fire (literally) in legacy data centers, I’m going to show you how to architect a failover setup that actually works using tools available today, in 2016.

The "Safe Harbor" Hangover & Local Latency

Let's address the elephant in the server room. The European Court of Justice struck down the Safe Harbor agreement last October. If you are a CTO or Lead Dev in Norway, you are likely under pressure to ensure data stays within EEA borders to satisfy Datatilsynet. Using a US-based host is now a compliance minefield.

This is where CoolVDS enters the architecture diagrams I draw. We aren't just talking about compliance; we are talking about physics. Light speed is finite. If your users are in Oslo or Bergen, hosting in Frankfurt adds 20-30ms of round-trip latency. Hosting in Virginia adds 90ms+. Hosting in Oslo on CoolVDS? Sub-5ms. That snappy feel isn't magic; it's proximity.
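Don't take my word for the numbers; measure them from where your users actually sit. ping and mtr are all you need (the hostname below is a placeholder for your own server):

# Average round-trip time over 10 probes
ping -c 10 your-server.example.com

# Per-hop latency and packet loss, summarized
mtr --report --report-cycles 10 your-server.example.com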

The Architecture: "Active-Passive" with Hot Standby

We don't need over-engineered complexity. We need reliability. For a standard LAMP stack (Linux, Apache/Nginx, MySQL, PHP), the most robust DR plan currently available involves an Active-Passive setup with asynchronous replication.

The Goal:
1. Primary (Active): Handles all Writes and Reads.
2. Secondary (Hot Standby): Replicates data in near real-time. Located in a physically separate host node (Anti-Affinity).

Step 1: The Database Layer (MySQL 5.6/MariaDB 10)

Forget clustering for a moment. Galera is great, but it introduces write-latency overhead. For pure DR, standard Master-Slave replication is the battle-tested choice. It allows you to promote the Slave to Master if the primary smokes.

On the Master (Primary VPS):

Edit your /etc/mysql/my.cnf to enable binary logging. This is the bread and butter of replication.

[mysqld]
server-id = 1                          # must be unique within the replication pair
log_bin = /var/log/mysql/mysql-bin.log
binlog_do_db = production_db           # only log the database you replicate
innodb_flush_log_at_trx_commit = 1     # flush InnoDB's log at every commit
sync_binlog = 1                        # fsync the binlog at every commit (see below)

Pro Tip: Setting sync_binlog = 1 is crucial for durability. It forces a sync to disk on every commit. Yes, it hits I/O performance hard. This is why we use CoolVDS NVMe instances—the high IOPS capability absorbs this penalty, keeping your app fast while ensuring you don't lose transactions during a power cut.
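The Slave also needs an account on the Master to pull the binlog with. A minimal sketch; the user name, password, and the Slave's private IP (10.10.0.5, matching the lsyncd example later) are placeholders:

-- On the Master: a dedicated account the Slave connects with
CREATE USER 'repl'@'10.10.0.5' IDENTIFIED BY 'ChangeMe';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'10.10.0.5';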

On the Slave (DR VPS):

[mysqld]
server-id = 2                          # must differ from the Master's
relay_log = /var/log/mysql/mysql-relay-bin.log
read_only = 1                          # reject writes from non-SUPER users

Setting read_only = 1 prevents accidental writes to your backup, which would break replication consistency. (Note that accounts with the SUPER privilege bypass this flag, so keep your application users unprivileged.)
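Wiring the two together is a one-time operation. A hedged sketch: the Master's private IP (10.10.0.4) is a placeholder, and the log file and position must come from your own SHOW MASTER STATUS output, not mine:

-- On the Master: note the current binlog coordinates
SHOW MASTER STATUS;

-- On the Slave: point it at the Master and start replicating
CHANGE MASTER TO
  MASTER_HOST='10.10.0.4',
  MASTER_USER='repl',
  MASTER_PASSWORD='ChangeMe',
  MASTER_LOG_FILE='mysql-bin.000001',  -- from SHOW MASTER STATUS
  MASTER_LOG_POS=120;                  -- ditto
START SLAVE;

-- Both Slave_IO_Running and Slave_SQL_Running should report Yes
SHOW SLAVE STATUS\G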

Step 2: File Synchronization

Database is half the battle. What about user uploads? A cron-driven rsync is reliable, but the gap between runs is too wide for near-real-time needs. In 2016, the smart money is on Lsyncd (Live Syncing Daemon): it watches the filesystem for changes via inotify and triggers rsync only for the changed files.

Install it on your Master node:

sudo apt-get install lsyncd

Configure /etc/lsyncd/lsyncd.conf.lua:

settings {
    logfile    = "/var/log/lsyncd/lsyncd.log",    -- the log directory must exist
    statusFile = "/var/log/lsyncd/lsyncd.status"
}

sync {
    default.rsync,
    source = "/var/www/html/uploads",
    target = "root@10.10.0.5:/var/www/html/uploads",
    rsync = {
        compress = true,
        archive  = true,
        verbose  = true,
        -- StrictHostKeyChecking=no skips host-key verification; acceptable on a
        -- trusted private network, risky elsewhere
        rsh = "/usr/bin/ssh -p 22 -o StrictHostKeyChecking=no"
    }
}

Now, every time a user uploads a profile picture, it is mirrored to your CoolVDS DR instance within seconds.
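One prerequisite the config above glosses over: lsyncd runs rsync over SSH as root, so the Master needs passwordless key access to the DR node. A quick sketch using the default paths:

# On the Master: generate a key pair (accept the defaults) and install it on the DR node
ssh-keygen -t rsa -b 4096
ssh-copy-id root@10.10.0.5

# Sanity check: should print "ok" with no password prompt
ssh root@10.10.0.5 'echo ok'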

Step 3: The Failover Switch (Keepalived)

DNS propagation takes too long. If you change an A-record, some ISPs will cache the old IP for hours. We need IP failover (VRRP). We use keepalived to share a "Floating IP" between the two servers. On the Master, /etc/keepalived/keepalived.conf looks like this:

vrrp_instance VI_1 {
    state MASTER
    interface eth0              # NIC that will carry the floating IP
    virtual_router_id 51        # must match on both nodes
    priority 101                # higher value wins the MASTER election
    advert_int 1                # heartbeat interval, in seconds
    authentication {
        auth_type PASS
        auth_pass SecretPassword
    }
    virtual_ipaddress {
        192.168.1.100           # the floating IP itself
    }
}
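The Slave runs a mirror image of this file. A sketch assuming the same interface name and password; state BACKUP plus the lower priority is what keeps it quiet until the Master falls silent:

vrrp_instance VI_1 {
    state BACKUP                # starts as the standby
    interface eth0
    virtual_router_id 51        # same group as the Master
    priority 100                # lower than the Master's 101
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass SecretPassword
    }
    virtual_ipaddress {
        192.168.1.100
    }
}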

If the Master node stops broadcasting its heartbeat (because the kernel panicked or the switch died), the Slave node detects the silence and instantly claims the 192.168.1.100 IP. Your downtime is measured in seconds, not hours.

Why Infrastructure Choice Matters

You can write the best config in the world, but if the underlying metal is garbage, you will fail. This is why I migrated my critical workloads to CoolVDS.

| Feature        | Budget VPS         | CoolVDS          | Why it matters for DR                                                            |
|----------------|--------------------|------------------|----------------------------------------------------------------------------------|
| Storage        | SATA / HDD         | NVMe SSD         | Restoring a 50GB database dump on HDD takes 45+ minutes. On NVMe, it takes 4.     |
| Virtualization | OpenVZ (Container) | KVM (Full Virt)  | KVM prevents "noisy neighbors" from stealing your CPU during a recovery crunch.   |
| Network        | 100Mbps Shared     | 1Gbps+ Uplink    | Faster sync between Master and Slave reduces data loss windows.                   |

The Recovery Drill

A plan you haven't tested is just a suggestion. Next Tuesday, do this:

  1. Log into your CoolVDS control panel.
  2. Reboot your Master instance intentionally.
  3. Watch the keepalived logs on the Slave (commands below).
  4. Verify your application is still serving traffic via the Slave.
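
A minimal way to watch the takeover, assuming keepalived logs to syslog as it does on a stock Debian/Ubuntu install (the floating IP matches the keepalived config above):

# On the Slave: watch for the VRRP transition to MASTER state
tail -f /var/log/syslog | grep -i keepalived

# From a third machine: confirm the floating IP keeps answering
watch -n 1 'curl -s -o /dev/null -w "%{http_code}\n" http://192.168.1.100/'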

If it works, you have built a fortress. If it breaks, fix it now, while the sun is up and the coffee is fresh. Don't wait for the 3 AM page.

Disaster recovery isn't about pessimism. It's about professionalism. Secure your data in Norway, leverage the speed of NVMe, and sleep through the night knowing your architecture can take a punch.

Ready to harden your stack? Deploy a KVM instance on CoolVDS today and get the low-latency performance your Norwegian users demand.