
Zero-Downtime Database Migration: A Survival Guide for Norwegian DevOps

Don't Let a Migration Kill Your Weekend (or Your Uptime)

It was 02:00 on a Tuesday. We were migrating a 400GB Magento database from a legacy bare-metal server in Oslo to a modern cloud setup. The strategy? A simple mysqldump and restore. Rookie mistake. The estimated restore time was 14 hours; the maintenance window was four. I learned the hard way that night: disk I/O is the bottleneck of the universe.

If you are relying on logical backups for anything larger than 10GB, you are playing Russian Roulette with your SLA. In the Nordic market, where latency to the NIX (Norwegian Internet Exchange) is measured in single-digit milliseconds, clients don't tolerate downtime. Furthermore, since the Schrems II ruling last year, moving data across borders has become a legal minefield. Keeping your data sovereign in Norway isn't just about speed; it's about not getting fined by Datatilsynet.

This guide isn't for hobbyists. This is for systems engineers who need to move gigabytes of data without dropping connections. We will cover replication-based migration strategies using MySQL 8.0 and PostgreSQL 13, running on high-performance NVMe infrastructure.

The Infrastructure Reality: NVMe or Bust

Before touching a config file, look at your target hardware. Database imports are write-heavy operations that murder spinning disks (HDDs) and even cheap SATA SSDs. High iowait means your CPU is doing nothing but waiting for the disk to catch up.

When we provision instances on CoolVDS, we exclusively use NVMe storage because the random write IOPS are an order of magnitude higher than on SATA. A restore that takes 3 hours on a standard SSD can finish in 20 minutes on NVMe. If your hosting provider throttles IOPS, your migration strategy is dead on arrival.
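
Don't take the datasheet's word for it. Before committing to a target host, run a quick random-write benchmark with fio; this is a rough sketch (the job name, size and runtime are arbitrary, and the 16k block size is chosen to approximate InnoDB's default page size):

# Measure sustained random write IOPS on the target disk
fio --name=restore-sim --ioengine=libaio --rw=randwrite --bs=16k \
    --size=2G --numjobs=4 --iodepth=32 --direct=1 \
    --runtime=60 --time_based --group_reporting

If the reported IOPS collapse after the first few seconds, you are looking at a throttled or burst-credit disk, and your import will crawl.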

Strategy: The Replication Cutover

Forget offline dumps. The only professional way to migrate a live database is to set up the new server as a read-replica (slave) of the old one (master), let them sync, and then promote the slave.

Scenario A: MySQL 8.0 Migration

We need to establish a replication link. First, ensure your master server has binary logging enabled and a unique server ID.

# On the OLD server (Master) /etc/my.cnf
[mysqld]
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
binlog_format = ROW
# Essential for data integrity
innodb_flush_log_at_trx_commit = 1 
sync_binlog = 1
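
Restart MySQL and confirm that binary logging is active before going any further. The commands below assume a systemd-based distro; the service may be named mysql rather than mysqld depending on your packaging.

# On the OLD server (Master)
systemctl restart mysqld
mysql -u root -p -e "SHOW MASTER STATUS\G"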

Create a replication user. Restrict this strictly to the IP of your new CoolVDS instance for security.

CREATE USER 'repl_user'@'185.x.x.x' IDENTIFIED BY 'ComplexPassword_2021!';
GRANT REPLICATION SLAVE ON *.* TO 'repl_user'@'185.x.x.x';

Now, instead of locking tables to get a dump, use Percona XtraBackup. It allows for hot backups without locking InnoDB tables. This is critical for high-traffic sites.

# On the Master
xtrabackup --backup --target-dir=/data/backups/ --user=root --password=YOURPASS

Transfer this data to your CoolVDS instance using rsync. Since CoolVDS offers unmetered internal traffic within the region, this transfer is fast and free.

rsync -avzP /data/backups/ user@new-server-ip:/data/backups/

On the new server, restore the backup and configure it as a replica. The xtrabackup_binlog_info file will contain the exact log file and position coordinates you need.
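
Restoring an XtraBackup copy is a two-step operation: prepare (apply the redo log) and copy-back into an empty datadir. A sketch, assuming the paths used above and a stopped MySQL service on the target:

# On the NEW server (Slave)
xtrabackup --prepare --target-dir=/data/backups/
systemctl stop mysqld
rm -rf /var/lib/mysql/*          # datadir must be empty for --copy-back
xtrabackup --copy-back --target-dir=/data/backups/
chown -R mysql:mysql /var/lib/mysql
systemctl start mysqld
cat /data/backups/xtrabackup_binlog_info   # binlog file and position for CHANGE MASTER TO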

# On the NEW server (Slave)
CHANGE MASTER TO
  MASTER_HOST='old-server-ip',
  MASTER_USER='repl_user',
  MASTER_PASSWORD='ComplexPassword_2021!',
  MASTER_LOG_FILE='mysql-bin.000452',
  MASTER_LOG_POS=8943210;

START SLAVE;

Run SHOW SLAVE STATUS\G. Look for Seconds_Behind_Master. It should drop to 0. Once it does, you have an exact real-time copy running on superior hardware.
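
The cutover itself is the step most teams never rehearse. One hedged sequence, assuming your application can tolerate a brief read-only window:

-- On the OLD server: block writes so the replica can fully catch up
SET GLOBAL super_read_only = ON;

-- On the NEW server: wait until Seconds_Behind_Master = 0, then detach
STOP SLAVE;
RESET SLAVE ALL;
SET GLOBAL read_only = OFF;

-- Repoint the application (or DNS / load balancer) at the new server

Total write downtime is the few seconds it takes the replica to drain the last binlog events, not the hours a dump-and-restore would cost you.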

Scenario B: PostgreSQL 13 Streaming Replication

Postgres handles this elegantly with replication slots, which ensure the master doesn't delete WAL segments that the replica hasn't received yet.

Step 1: Configure the Master (pg_hba.conf)

# Allow replication connections from the new server
host    replication     repl_user       185.x.x.x/32            md5
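
The master also needs a dedicated replication role and enough WAL senders to serve the stream. PostgreSQL 13 defaults (wal_level = replica, max_wal_senders = 10) are normally sufficient, but verify them; the role name and password below are placeholders:

-- On the OLD server (Master), as a superuser
CREATE ROLE repl_user WITH REPLICATION LOGIN PASSWORD 'ComplexPassword_2021!';

-- Sanity-check the streaming prerequisites
SHOW wal_level;              -- must be 'replica' or 'logical'
SHOW max_wal_senders;        -- default 10
SHOW max_replication_slots;  -- default 10

Reload the configuration afterwards (SELECT pg_reload_conf();) so the pg_hba.conf change takes effect.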

Step 2: The Base Backup

Run this command from the new CoolVDS instance. It pulls the data directly down the wire.

pg_basebackup -h old-server-ip -D /var/lib/postgresql/13/main -U repl_user -P -v -X stream -C -S coolvds_slot_1 -R

The -C flag automatically creates the replication slot, -X stream ensures all WAL segments generated during the backup are included, and -R writes standby.signal plus the primary_conninfo setting so the new instance starts up as a streaming standby. This is far more robust than the old method of manual rsync plus archive_command.
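
Once PostgreSQL is started on the new node, verify that it is streaming, and promote it at cutover time. A minimal sketch using the built-in replication views:

-- On the OLD server (Master): is the standby connected and streaming?
SELECT client_addr, state, sync_state FROM pg_stat_replication;

-- On the NEW server: confirm it is running as a standby
SELECT pg_is_in_recovery();   -- should return true

-- At cutover: stop writes on the old master, wait for lag to hit zero, then promote
SELECT pg_promote();

After pg_promote() the new server accepts writes, and the old master should be retired (or demoted) to avoid a split-brain.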

The Optimisation Phase (Tuning for Import)

If you must perform a logical dump (e.g., migrating across different major versions), you need to tune the database specifically for the write load, then revert the settings later. Default configs are too safe for bulk loading.

Pro Tip: When importing a massive SQL dump into MySQL, temporarily relax durability so InnoDB stops flushing to disk on every commit; bulk imports routinely run several times faster. Just remember to revert these settings before going live in production!

# TEMPORARY settings for import only
[mysqld]
innodb_flush_log_at_trx_commit = 0
innodb_doublewrite = 0
innodb_write_io_threads = 16
max_allowed_packet = 1G

Don't forget innodb_buffer_pool_size. Set this to 70-80% of your total RAM. On a 32GB RAM CoolVDS instance, allocate 24GB here. If you leave it at the default (often 128MB), your performance will be abysmal.
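
You can also skip consistency checks for the duration of the import session itself. These are standard MySQL session variables; the dump path below is a placeholder, and sql_log_bin should only be disabled if you genuinely don't need the load replicated:

-- In the mysql client session performing the import
SET unique_checks = 0;
SET foreign_key_checks = 0;
SET sql_log_bin = 0;
SOURCE /data/dump/full_dump.sql;
SET unique_checks = 1;
SET foreign_key_checks = 1;
SET sql_log_bin = 1;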

The