Zero-Downtime Database Migration: The Norwegian Engineer’s Playbook (2022 Edition)

I once watched a junior sysadmin try to migrate a 500GB production database using mysqldump during business hours. The I/O lock didn't just stall the checkout process; it melted the CPU steal credits on their shared cloud instance. The site was down for four hours. The company lost thousands.

If you are reading this in January 2022, you likely have a different gun to your head: Schrems II. The legal landscape for hosting data outside the EEA has turned into a minefield. Norwegian CTOs are scrambling to repatriate data from US-owned clouds back to sovereign soil. But moving a live database isn't like moving static assets. You cannot just rsync and pray.

This is a battle-tested guide to migrating active workloads to Norwegian infrastructure with near-zero downtime. We aren't talking about maintenance windows. We are talking about replication.

The Infrastructure Reality Check

Before touching a single config file, look at your target hardware. A database migration is one of the most I/O-intensive operations you will ever run. If your target VPS is throttled on IOPS, the replica will never catch up to the source: you will sit watching Seconds_Behind_Source stay stubbornly above zero.

At CoolVDS, we specifically architected our Oslo nodes with local NVMe storage passed through KVM. We don't do network-attached block storage for databases because the latency penalty kills write-heavy workloads. When you are syncing a 1TB dataset, the difference between 400 MB/s and 2500 MB/s is the difference between a successful migration and a failed project.

Pro Tip: Before starting, run fio on your target instance. If random write IOPS are under 10k, stop. You need better hardware.
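As a concrete starting point, here is a hypothetical fio job file for that random-write check (file name, size, and runtime are illustrative; run it with fio randwrite.fio and read the write IOPS figure from the output):

```ini
; randwrite.fio — hypothetical job file for a quick random-write IOPS check
[global]
ioengine=libaio
direct=1            ; bypass the page cache so we measure the disk, not RAM
runtime=60
time_based=1
group_reporting=1

[randwrite-4k]
rw=randwrite
bs=4k
size=2G
iodepth=32
numjobs=4
```

Run it on an idle target before you commit to the migration; a busy instance will understate what the disk can do.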

Phase 1: The Master Preparation (MySQL 8.0 Example)

We will focus on MySQL 8.0, as it is the current standard for most LEMP stacks in 2022. The strategy is simple: establish the target (CoolVDS) as a replica of your current source.

First, ensure your source database has GTID (Global Transaction Identifier) enabled. Old-school binlog file/position replication is brittle and prone to breakage during failovers. Open your my.cnf (or mysqld.cnf) on the source:

[mysqld]
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
binlog_format = ROW
gtid_mode = ON
enforce_gtid_consistency = ON
innodb_flush_log_at_trx_commit = 1
sync_binlog = 1

If you change these, you need a restart. If you cannot restart, you are doing this the hard way with mysqldump --master-data=2, but for the sake of sanity, let's assume you can schedule a 60-second bounce.
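While you are on the source, create the account the replica will connect with. A minimal sketch (user name, password, and host are placeholders; restrict the host to your CoolVDS IP rather than using %):

```sql
-- On the source. 'repl_user' and the password are placeholders.
CREATE USER 'repl_user'@'192.0.2.10' IDENTIFIED BY 'secure_password';
GRANT REPLICATION SLAVE ON *.* TO 'repl_user'@'192.0.2.10';
```

Note that MySQL 8.0's default caching_sha2_password plugin requires either TLS on the replication channel or GET_SOURCE_PUBLIC_KEY = 1 on the replica.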

Phase 2: The Non-Blocking Snapshot

Do not use mysqldump for datasets larger than 10GB; the dump is slow and the restore is slower. Use Percona XtraBackup instead. It copies InnoDB pages physically, which is vastly faster, and it needs only a brief backup lock near the end of the run instead of blocking writes for the duration.

Install Percona XtraBackup 8.0 on your source server. Then, stream the backup directly to your new CoolVDS instance using socat or netcat. This avoids writing a massive file to disk on the source server, saving I/O.

On the Target (CoolVDS - Oslo):

nc -l -p 9999 | xbstream -x -C /var/lib/mysql/data_temp

On the Source (Current Provider):

xtrabackup --backup --stream=xbstream --parallel=4 --target-dir=./ | nc 192.0.2.10 9999

This pipes the raw data over the network. Since our Oslo datacenter peers directly at NIX (Norwegian Internet Exchange), if you are migrating from another Nordic provider, you might saturate your uplink before your disk. That is a good problem to have.

Phase 3: Prepare and Configure Replica

Once the stream finishes, you need to "prepare" the backup (apply the transaction logs) on the target machine.

xtrabackup --prepare --target-dir=/var/lib/mysql/data_temp

Now, change ownership and move it into place. Ensure your my.cnf on the CoolVDS instance is optimized for the new hardware. Set innodb_buffer_pool_size to 70-80% of your RAM.
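The tuning mentioned above, expressed as a my.cnf fragment for the target (values assume a 16 GB CoolVDS instance; scale the buffer pool to your RAM):

```ini
[mysqld]
server-id = 2                  # must differ from the source's server-id
innodb_buffer_pool_size = 12G  # ~75% of 16 GB RAM
gtid_mode = ON
enforce_gtid_consistency = ON
read_only = ON                 # keep the replica read-only until cutover
```

Edit this before starting mysqld so the buffer pool is sized correctly from the first boot.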

chown -R mysql:mysql /var/lib/mysql/data_temp
mv /var/lib/mysql/data_temp/* /var/lib/mysql/
systemctl start mysql

Phase 4: Catching Up

Your target database now has data from the moment the backup finished. But your live site has been accepting writes since then. We need to bridge that gap.

Inspect the xtrabackup_binlog_info file in your data directory. It contains the GTID coordinates.

cat /var/lib/mysql/xtrabackup_binlog_info
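The file is tab-separated: binlog file, position, then the GTID set. A small shell sketch to pull out the GTID set, demonstrated here against a fabricated sample line (the real file is written by XtraBackup into your data directory):

```shell
# Fabricated sample of xtrabackup_binlog_info for demonstration.
printf 'mysql-bin.000042\t197\t3e11fa47-71ca-11e1-9e33-c80aa9429562:1-77\n' \
    > /tmp/xtrabackup_binlog_info

# The third tab-separated field is the GTID set you may need for gtid_purged.
gtid_set=$(awk -F'\t' '{print $3}' /tmp/xtrabackup_binlog_info)
echo "$gtid_set"
# → 3e11fa47-71ca-11e1-9e33-c80aa9429562:1-77
```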

Log into your CoolVDS MySQL instance and start replication (SOURCE_SSL=1 keeps credentials and binlog traffic encrypted while they cross the public internet between providers):

CHANGE REPLICATION SOURCE TO
SOURCE_HOST='old_provider_ip',
SOURCE_USER='repl_user',
SOURCE_PASSWORD='secure_password',
SOURCE_SSL=1,
SOURCE_AUTO_POSITION=1;

START REPLICA;
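If START REPLICA errors out complaining that the source has purged required GTIDs, the restored instance's own GTID state may be empty or mismatched. A common fix, sketched here with the GTID set you read from xtrabackup_binlog_info (the set below is a placeholder):

```sql
STOP REPLICA;
RESET MASTER;  -- clears the replica's own gtid_executed (MySQL 8.0 syntax)
SET GLOBAL gtid_purged = '3e11fa47-71ca-11e1-9e33-c80aa9429562:1-77';
START REPLICA;
```

Only do this on the freshly restored replica, never on the live source.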

Now, watch the magic:

SHOW REPLICA STATUS\G

Look for Seconds_Behind_Source (the MySQL 8.0 name for the old Seconds_Behind_Master). It should start high and fall quickly. On CoolVDS NVMe instances, we typically see catch-up rates exceeding 5,000 transactions per second, provided the network isn't the bottleneck.

Phase 5: The Switchover

Once Seconds_Behind_Source holds at 0, the replica has applied everything the source has committed. Now you need to switch traffic.
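Lag counters can briefly touch 0 and bounce back, so a stricter check is to compare GTID sets directly. A sketch using MySQL 8.0's built-in wait function (paste the source's actual set in place of the placeholder):

```sql
-- On the source: capture the executed GTID set.
SELECT @@GLOBAL.gtid_executed;

-- On the replica: block until that set is fully applied.
-- Returns 0 on success, 1 if the 60-second timeout expires.
SELECT WAIT_FOR_EXECUTED_GTID_SET('<gtid set from the source>', 60);
```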

Comparison: The Old Way vs. The Smart Way

Method                   | Downtime                    | Risk
DNS Change               | 5-60 mins (TTL propagation) | High (split-brain possible)
Load Balancer / ProxySQL | < 1 second                  | Low
Application Config Swap  | ~10 seconds                 | Medium

For a clean cutover:

  1. Set Source to Read-Only: prevent new writes on the old server. Setting super_read_only also implies read_only and blocks privileged sessions too.
    SET GLOBAL super_read_only = ON;
  2. Wait for Sync: Confirm the replica has processed the final transaction.
  3. Promote Target: Stop the replica on CoolVDS.
    STOP REPLICA; RESET REPLICA ALL;
  4. Re-point App: Update your connection strings or HAProxy backend IP to the new CoolVDS IP.
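If you take the HAProxy route from the comparison table, the re-point is a one-line backend change plus a reload. A minimal sketch (listener, server names, and IPs are placeholders; 192.0.2.10 stands in for the new CoolVDS instance):

```
# /etc/haproxy/haproxy.cfg (fragment; names and IPs are placeholders)
listen mysql
    bind 127.0.0.1:3306
    mode tcp
    option tcp-check
    # Swap the active server to the CoolVDS instance, then: systemctl reload haproxy
    server db-oslo 192.0.2.10:3306 check
    # server db-old 198.51.100.20:3306 check backup
```

Because the app keeps pointing at the local proxy, the cutover is invisible to connection strings.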

Why Location Matters (Latency & Law)

Beyond the technical steps, remember why you are doing this. If your customers are in Norway, hosting in Frankfurt adds 20-30ms of round-trip latency; hosting in US-East adds 100ms or more. For a PHP application firing 10 sequential database queries per page load, that latency adds up linearly unless you batch queries or process them asynchronously.
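To make that concrete, a trivial back-of-envelope calculation (25 ms round trip and 10 sequential queries are illustrative numbers, not measurements):

```shell
# Sequential queries pay the full round trip each time.
rtt_ms=25
queries=10
echo "$(( rtt_ms * queries )) ms of pure network wait per page load"
# → 250 ms of pure network wait per page load
```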

Furthermore, Datatilsynet (The Norwegian Data Protection Authority) is becoming increasingly strict about GDPR interpretation. Hosting data on servers physically located in Oslo, owned by a company subject to European jurisdiction, is the safest hedge against legal ambiguity.

Final Thoughts

Database migrations expose the weak points in your stack. If your network is jittery, replication breaks. If your disk I/O is shared with 50 other noisy neighbors, catch-up takes forever.

We built CoolVDS to solve the "noisy neighbor" problem. By using strict KVM isolation and NVMe storage, we ensure that when you need to write 500MB/s to catch up a replica, the throughput is actually there. Don't let infrastructure bottlenecks turn a routine migration into a resume-generating event.

Ready to test your replication speed? Deploy a high-performance NVMe instance in Oslo in under 55 seconds.