Zero-Downtime Database Migration: A Survival Guide for Norwegian Systems
Migrating a live database is akin to performing open-heart surgery while the patient is running a marathon. One wrong command, one miscalculated buffer setting, and you are looking at data corruption or, perhaps worse, extended downtime that erodes user trust instantly.
I have seen it happen too many times. A development team in Oslo decides to move from a legacy dedicated server to a cloud VPS. They opt for the standard pg_dump and restore method. The transfer takes six hours. During those six hours, the shop is closed. Customers leave. The CTO is furious. This is not how you handle infrastructure in 2023.
If you are operating under strict SLAs or handling sensitive data subject to Datatilsynet audits, you cannot afford the "dump and pray" approach. You need a replication-based strategy that keeps the old and new systems in sync until the final cutover.
The Hidden Bottleneck: Disk I/O
Before we touch a single config file, we must address the hardware. Database replication is I/O intensive. The receiving server must write incoming logs while simultaneously building indices. On a standard budget VPS with shared spinning disks (or throttled SSDs), the disk queue length will skyrocket. Your replication lag will increase rather than decrease, and you will never catch up.
Pro Tip: Never attempt a live migration to a host that doesn't offer NVMe storage with guaranteed IOPS. I typically use CoolVDS for these workloads because their KVM isolation ensures my fsync calls aren't fighting with a neighbor's Bitcoin miner. If `iowait` hits 20%, your migration is already dead.
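A quick way to sanity-check the target disk before committing to it (this assumes the sysstat package provides iostat; the 20% figure is the rule of thumb above, not a hard limit):

# Watch extended device statistics every 5 seconds while running a trial restore
iostat -x 5
# Watch %iowait in the CPU summary and aqu-sz (avgqu-sz on older sysstat) per device.
# Sustained iowait near 20% or a steadily growing queue length means the disk
# cannot absorb the replication write load.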
Strategy: Master-Slave Replication
The only professional way to migrate a large dataset (50GB+) is via replication. You set up the new CoolVDS instance as a replica (slave) of your current production server (master). Once they are perfectly synced, you promote the slave to master and switch your application connection strings. Downtime is reduced from hours to seconds.
Scenario A: MySQL / MariaDB Migration
For MySQL 8.0, we rely on GTID (Global Transaction ID) for a robust topology. First, ensure your current master has binary logging enabled in my.cnf:
[mysqld]
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
binlog_format = ROW
gtid_mode = ON
enforce_gtid_consistency = ON
innodb_flush_log_at_trx_commit = 1
sync_binlog = 1
Setting sync_binlog = 1 is crucial for data safety, though it costs write performance. This is why underlying NVMe storage is non-negotiable. Once the master is configured, take a consistent backup using Percona XtraBackup, transfer it to your CoolVDS instance, and configure the replication channel.
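A sketch of that seeding step, assuming Percona XtraBackup 8.0 on both machines and SSH access to the new instance (hostnames, user names, credentials and paths below are placeholders):

# On the current master: stream a consistent backup directly to the new host
xtrabackup --backup --stream=xbstream --target-dir=./ \
  --user=backup_user --password='***' \
  | ssh root@new-coolvds-host 'xbstream -x -C /var/lib/mysql'

# On the new host (mysqld stopped, /var/lib/mysql empty before extraction):
xtrabackup --prepare --target-dir=/var/lib/mysql
chown -R mysql:mysql /var/lib/mysql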
On the new CoolVDS instance (the replica), your configuration should optimize for write speed during the catch-up phase:
[mysqld]
server-id = 2
# Temporarily relax durability for faster catch-up ONLY if you can afford a restart on failure
innodb_flush_log_at_trx_commit = 2
innodb_buffer_pool_size = 6G # Assuming an 8GB RAM instance
innodb_log_buffer_size = 64M
innodb_io_capacity = 2000 # Crank this up on NVMe
Execute the replication setup statement on the replica (MySQL 8.0.23 and later use CHANGE REPLICATION SOURCE TO; the older equivalent was CHANGE MASTER TO):
CHANGE REPLICATION SOURCE TO
SOURCE_HOST='192.168.1.50',
SOURCE_USER='repl_user',
SOURCE_PASSWORD='SecurePassword!23',
SOURCE_AUTO_POSITION=1;
START REPLICA;
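Then watch the replica catch up. A quick check on the new instance (field names as of MySQL 8.0.22+):

SHOW REPLICA STATUS\G
-- Replica_IO_Running and Replica_SQL_Running must both be Yes, and
-- Seconds_Behind_Source should trend towards 0 before you plan the cutover.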
Scenario B: PostgreSQL Migration
PostgreSQL makes this slightly more elegant with streaming replication. On your primary server, modify pg_hba.conf to allow the connection from your new CoolVDS IP:
host replication replicator 185.x.x.x/32 md5
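The replicator role also has to exist on the primary, and the WAL settings must allow streaming. A minimal sketch (the role name matches the pg_hba.conf entry above, the password is a placeholder, and the values shown are the defaults on recent PostgreSQL releases):

-- On the primary, as a superuser:
CREATE ROLE replicator WITH REPLICATION LOGIN PASSWORD 'ChangeMe!42';

-- postgresql.conf on the primary (changing these requires a restart):
-- wal_level = replica
-- max_wal_senders = 10
-- max_replication_slots = 10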
Then, use pg_basebackup on the new server to pull the initial data. This command does the heavy lifting without locking the database:
pg_basebackup -h production.server.com -D /var/lib/postgresql/15/main -U replicator -P -v -R -X stream -C -S coolvds_slot
The -R flag writes the standby.signal file and appends the primary's connection settings to postgresql.auto.conf. Once the command finishes, start the PostgreSQL service; it will immediately begin streaming and replaying WAL from the primary.
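To confirm the standby is keeping up, query the primary. A small check, using the column names available since PostgreSQL 10:

SELECT client_addr, state, sent_lsn, replay_lsn,
       pg_wal_lsn_diff(sent_lsn, replay_lsn) AS replay_lag_bytes
FROM pg_stat_replication;
-- state should read 'streaming' and replay_lag_bytes should hover near zero.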
The Local Context: Latency and Compliance
Why does the physical location of your new server matter? If your users are in Oslo or Bergen, hosting your database in Frankfurt adds roughly 15-20ms of round-trip latency. That sounds negligible until your application fires 50 sequential queries to render a single dashboard; at 20ms per round trip, that is a full second of waiting.
Furthermore, under Schrems II and strict interpretations of GDPR by the Norwegian Datatilsynet, keeping data sovereignty within the EEA (or ideally, within Norway) simplifies your legal exposure significantly. Migrating to a US-owned hyperscaler often involves complex Transfer Impact Assessments (TIAs).
CoolVDS infrastructure is situated to optimize connectivity within the Nordic region, often peering directly at NIX (Norwegian Internet Exchange). This keeps latency low and data jurisdiction clear.
The Cutover: The Final 60 Seconds
Once Seconds_Behind_Source in SHOW REPLICA STATUS reads 0 (MySQL; older releases call it Seconds_Behind_Master), or pg_stat_replication on the Postgres primary shows the standby streaming with its replay position caught up, you are ready.
- Lower TTL: Reduce DNS TTL to 60 seconds beforehand.
- Maintenance Mode: Put the app in read-only mode (or show a maintenance page).
- Flush: Ensure all pending writes have replicated to the new server.
- Promote: Run pg_ctl promote (Postgres), or stop replication on the new server and clear its replica configuration so it acts as a standalone primary (MySQL); both are sketched after this list.
- Switch: Update the app config to point to the new CoolVDS IP.
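The promote step itself, sketched for both engines (the data directory path and the way you invoke the clients are assumptions; adapt them to your layout):

# PostgreSQL: promote the standby to a standalone primary
sudo -u postgres pg_ctl promote -D /var/lib/postgresql/15/main

# MySQL: stop replication and discard the replica configuration
mysql -e "STOP REPLICA; RESET REPLICA ALL;"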
If you have provisioned the right infrastructure, this swap takes less than a minute. Your users barely notice a hiccup.
Don't let slow I/O or poor network peering sabotage your migration. Database consistency is binary; it works, or it fails. Ensure your foundation is solid.
Ready to test your replication throughput? Deploy a high-performance NVMe instance on CoolVDS in 55 seconds and see the difference raw I/O power makes.