Zero-Downtime Database Migrations: A Survival Guide for Norwegian Infrastructure

The 3 AM Panic and The I/O Bottleneck

If you have been in operations long enough, you know the feeling. It is 03:00 CET. You are staring at a terminal window, watching a progress bar on rsync that hasn't moved in ten minutes. The maintenance window closes in an hour. Your current host's "guaranteed" IOPS have flatlined because a noisy neighbor on their over-provisioned OpenVZ node decided to run a backup script.

I have been there. We have all been there. Database migration is the single most stressful task in systems administration because it is where data integrity meets hard infrastructure limits.

In the Norwegian market, we face a specific set of challenges. We need low latency to Oslo for our local user base, but we also need strict adherence to data sovereignty laws, especially with the recent invalidation of Safe Harbor and the uncertainty surrounding the upcoming EU data protection reforms (what Brussels is calling the GDPR). Hosting your data outside Norwegian borders is becoming a legal minefield. But moving that data back home without downtime? That is an engineering challenge.

This guide is not about "best practices" written by a junior marketer. This is about how to move a 500GB MySQL production dataset from a legacy provider to a high-performance KVM environment like CoolVDS without losing a single transaction.

The Fallacy of "Dump and Restore"

For anything larger than 1GB, mysqldump is dead to me. It locks tables, it destroys the buffer pool, and the restore time is unpredictable. If you are still using this for production migrations, stop.

The only professional way to migrate a live database in 2016 is Asynchronous Replication. You set up the new server (the CoolVDS instance) as a slave to your current production master. You let them sync. You switch the application pointers. You sleep well at night.
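Before the new server can attach as a slave, the old master needs an account with replication privileges. A minimal sketch, using the same `repl_user` credentials that appear later in the CHANGE MASTER statement (tighten the `'%'` host pattern to the replica's IP in production):

```shell
# Run on the legacy master. Requires a live MySQL server and root access;
# credentials here are the illustrative ones used later in this guide.
mysql -u root -p <<'SQL'
CREATE USER 'repl_user'@'%' IDENTIFIED BY 'SecretPassword';
GRANT REPLICATION SLAVE ON *.* TO 'repl_user'@'%';
FLUSH PRIVILEGES;
SQL
```

Also confirm the master has `log-bin` enabled and a unique `server-id`, or there will be no binary log for the slave to read.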

Step 1: The Foundation (Hardware Matters)

Before we touch a config file, let's talk about the disk. Database replication is I/O intensive. The Master writes to the binary log; the Slave reads it and executes the SQL locally. If your Slave (the new server) has slower disks than your Master, the Slave will fall behind. This is called Replication Lag.

We use CoolVDS for these targets specifically because of the storage backend. In 2016, most generic VPS providers are still pushing SATA SSDs (or worse, spinning SAS in RAID 10). CoolVDS instances run on NVMe (Non-Volatile Memory Express) storage. The deep command queues of NVMe let us ingest replication events orders of magnitude faster than standard SSDs.

Pro Tip: Always provision your target server with 20% more RAM than your source. You need that extra headroom for the buffer pool to warm up quickly after the switchover.
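On MySQL 5.6 and later, you can also take the guesswork out of warming the buffer pool: have the server dump the list of hot pages at shutdown and reload them at startup. A config-fragment sketch:

```ini
[mysqld]
# Persist the buffer pool's page list across restarts (MySQL 5.6+),
# so the new server warms up quickly after the switchover.
innodb_buffer_pool_dump_at_shutdown = 1
innodb_buffer_pool_load_at_startup  = 1
```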

Step 2: Non-Blocking Snapshots with Percona XtraBackup

To start replication, we need a baseline snapshot. We cannot afford to lock the production database to get it. Enter Percona XtraBackup. It copies InnoDB data files while the database is running, without locking tables.

Here is how we grab the data from the legacy host without stopping traffic:

xtrabackup --backup --target-dir=/data/backups/ --datadir=/var/lib/mysql/ --user=root --password=YOURPASSWORD --parallel=4

Once the backup is complete, we prepare it (applying the transaction logs to ensure consistency):

xtrabackup --prepare --target-dir=/data/backups/

This creates a consistent snapshot at a specific binary log position. This position is critical. It tells our new CoolVDS server exactly where to start reading updates from the old server.
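The coordinates live in the `xtrabackup_binlog_info` file inside the backup target directory, as a tab-separated pair of binlog file name and position. A sketch of pulling them out for later use (the `/tmp` path here is a stand-in so the example runs anywhere; on a real migration you would read the file from `/data/backups/`):

```shell
# Simulate the file xtrabackup writes into the backup target dir.
printf 'mysql-bin.000452\t982341\n' > /tmp/xtrabackup_binlog_info

# Fields are whitespace-separated: binlog file name, then position.
BINLOG_FILE=$(awk '{print $1}' /tmp/xtrabackup_binlog_info)
BINLOG_POS=$(awk '{print $2}' /tmp/xtrabackup_binlog_info)
echo "File: $BINLOG_FILE  Position: $BINLOG_POS"
```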

Step 3: Secure Transmission over the NIX (Norwegian Internet Exchange)

Moving unencrypted SQL data across the public internet is negligence. We need a secure tunnel. Since we are dealing with Norwegian compliance standards (Datatilsynet is watching), we wrap everything in SSH.

We use rsync over SSH to push the prepared data to the new CoolVDS instance. Note the compression flag -z, which helps if your bandwidth is limited, though peering at NIX usually gives us excellent throughput within Norway.

rsync -avzP /data/backups/ root@185.x.x.x:/var/lib/mysql_new/

After the transfer, ensure permissions are correct on the CoolVDS side:

chown -R mysql:mysql /var/lib/mysql_new/
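Before starting MySQL on the new host, it is worth proving the transfer was bit-perfect. One sketch: build a sorted checksum manifest on each side and diff them (the `/tmp/backup_demo` directory below is a stand-in for `/data/backups/` so the example is self-contained):

```shell
# Stand-in directory so the sketch runs anywhere; in practice,
# run the first manifest on the source and the second on the target.
mkdir -p /tmp/backup_demo
printf 'data' > /tmp/backup_demo/ibdata1

# Source side: sorted checksum manifest before rsync.
(cd /tmp/backup_demo && find . -type f -exec md5sum {} + | sort) > /tmp/manifest.src
# Target side: regenerate after rsync and compare.
(cd /tmp/backup_demo && find . -type f -exec md5sum {} + | sort) > /tmp/manifest.dst
diff /tmp/manifest.src /tmp/manifest.dst && echo "transfer verified"
```

Alternatively, a second rsync pass with `--checksum` achieves the same end, at the cost of re-reading every file on both sides.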

Step 4: Configuring the Replica

On the new server, we need to configure my.cnf to be a replica. This is also where we tune for the NVMe storage. Standard configs are built for spinning disks.

/etc/mysql/my.cnf optimization for 16GB RAM KVM Instance:

[mysqld]
server-id=2
innodb_buffer_pool_size=12G
innodb_log_file_size=2G

# NVMe Specific Optimization
innodb_io_capacity=2000
innodb_io_capacity_max=4000
innodb_flush_neighbors=0  # Disable this for SSD/NVMe!

The innodb_flush_neighbors=0 setting is crucial. On spinning disks, the database tries to write adjacent pages to minimize head movement. On NVMe, this seeking optimization is unnecessary overhead and actually slows you down.

Step 5: The Handshake

Start MySQL on the new server. Now, we tell it to look at the old server. Grab the coordinates from the xtrabackup_binlog_info file created earlier.

CHANGE MASTER TO
  MASTER_HOST='192.168.1.50',  # IP of old server (or SSH tunnel endpoint)
  MASTER_USER='repl_user',
  MASTER_PASSWORD='SecretPassword',
  MASTER_LOG_FILE='mysql-bin.000452',
  MASTER_LOG_POS=982341;

START SLAVE;
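If the old master is only reachable over the public internet, the "SSH tunnel endpoint" mentioned above can be sketched like this. The address 203.0.113.10 and the `repl_tunnel` user are placeholders for the legacy host, not values from this migration:

```shell
# Forward local port 3307 to the legacy master's MySQL port over SSH.
# -N: no remote command; -f: background the tunnel once established.
ssh -f -N -L 3307:127.0.0.1:3306 repl_tunnel@203.0.113.10
```

The replica then connects with `MASTER_HOST='127.0.0.1', MASTER_PORT=3307`, and the replication stream never touches the wire unencrypted.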

Run SHOW SLAVE STATUS\G. You want to see Seconds_Behind_Master: 0.
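Rather than eyeballing that output, you can script the lag check. The sketch below parses a captured sample so it is self-contained; on a live replica you would replace `$SAMPLE` with the real `mysql -e 'SHOW SLAVE STATUS\G'` output:

```shell
# Captured sample standing in for live SHOW SLAVE STATUS\G output.
SAMPLE='Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Seconds_Behind_Master: 0'

# Extract the lag value; a NULL here would mean replication is broken.
LAG=$(printf '%s\n' "$SAMPLE" | awk -F': ' '/Seconds_Behind_Master/ {print $2}')
if [ "$LAG" = "0" ]; then
    echo "replica in sync"
else
    echo "lagging by ${LAG}s"
fi
```

Also confirm Slave_IO_Running and Slave_SQL_Running both say Yes; Seconds_Behind_Master alone can hide a stopped SQL thread.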

The Switchover Strategy

Once the new CoolVDS instance is caught up, your migration is essentially done. The data is there, live, and syncing in real-time.

  1. Lower TTLs: Set your DNS TTL to 300 seconds at least 24 hours before the switchover.
  2. Maintenance Mode: Put the app in maintenance mode (stops writes).
  3. Verify Sync: Ensure the replica has processed all events.
  4. Promote: Stop the slave on CoolVDS and make it writable (SET GLOBAL read_only = OFF;).
  5. Redirect: Update your application config to point to the new CoolVDS IP.

Total downtime? Usually less than 60 seconds.
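The promote step (4) from the list above can be sketched as follows. `RESET SLAVE ALL` assumes MySQL 5.6 or later; run this on the new server only after verifying the replica has processed every event and app writes are stopped:

```shell
# Run on the new (CoolVDS) server, after the replica is fully caught up.
mysql -u root -p <<'SQL'
STOP SLAVE;
RESET SLAVE ALL;                -- forget the old master entirely (MySQL 5.6+)
SET GLOBAL read_only = OFF;     -- accept writes: this host is now the master
SQL
```

Keep the old master in read-only mode for a few days as a rollback target before decommissioning it.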

Why Infrastructure Choice Dictates Success

You can execute these commands perfectly, but if your target VPS has