Zero-Downtime Database Migrations: A Survival Guide for Norwegian DevOps
It is 03:00 on a Tuesday. The coffee is cold, the terminal is blinking, and you are staring at a progress bar that hasn't moved in ten minutes. You are migrating a 500GB MySQL dataset from a legacy colocation center in Bergen to a new cloud instance. If this copy fails, you miss the maintenance window. If the data corrupts, you get to explain to the CTO why the shop is offline.
I have been there. We all have. Database migration is the single most stressful operation in systems administration because it forces you to balance three things that hate each other: data consistency, network latency, and uptime requirements.
In 2018, with GDPR in full force (thanks, Datatilsynet) and customer patience at an all-time low, the old "dump and restore" method is dead for anything larger than a WordPress blog. If your strategy involves mysqldump > dump.sql and a four-hour scp transfer, you are doing it wrong.
This guide details how to execute near-zero downtime migrations using replication and physical backups, specifically tailored for the Norwegian infrastructure landscape where latency to NIX (Norwegian Internet Exchange) matters.
The Bottleneck is Almost Always I/O
Pro Tip: Before you even look at rsync flags, look at your target storage. A database import is an I/O punisher: it hammers the disk with random writes.

I recently audited a migration for a logistics firm in Oslo. They were moving from bare metal to a budget VPS provider. The migration failed three times. Why? Their target VPS was running on shared spinning rust (HDDs) with noisy neighbors. The import speed was capping at 20MB/s.
We switched them to a CoolVDS instance backed by NVMe storage. The import speed jumped to 450MB/s. The maintenance window shrank from 6 hours to 45 minutes. When we talk about "performance optimization" in 2018, moving to NVMe is the highest-ROI action you can take. If your hosting provider is still selling you "SSD cached" storage for databases, walk away.
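Do not take the provider's word for it: measure the target disk before you commit to a window. A minimal fio sketch for database-style random writes (the directory name and job sizes are arbitrary; point it at an empty directory on the volume that will hold the datadir):

mkdir -p /mnt/dbtest    # any empty directory on the target data volume
# 4k random writes with direct I/O, bypassing the page cache, for 60 seconds
fio --name=dbsim --directory=/mnt/dbtest --rw=randwrite --bs=4k \
    --size=1G --numjobs=4 --iodepth=32 --ioengine=libaio --direct=1 \
    --runtime=60 --time_based --group_reporting

If the numbers look like spinning-rust numbers, fix the storage before you schedule the migration.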
Strategy A: The "Hot" Physical Move (MySQL/MariaDB)
Logical backups (SQL text files) are slow to replay. Physical backups (copying the actual data files) are fast but require the server to be stopped—unless you use Percona XtraBackup. This tool allows us to stream a consistent backup of a running database directly to the new server.
The Scenario: Migrating a live MySQL 5.7 server to a CoolVDS instance.
Step 1: The Pipeline
Instead of saving to a file and then copying, we stream the backup over SSH. This saves disk space and time. Run this from the source server:
innobackupex --stream=xbstream --parallel=4 /tmp/ |
ssh user@target-coolvds-ip "xbstream -x -C /var/lib/mysql/"

This command uses xbstream to package the data, pipes it over SSH (encryption is mandatory for GDPR compliance), and extracts it immediately on the target.
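One thing the pipeline above takes for granted: on the target, mysqld must not be running and the datadir must be empty before the stream lands. A quick preparation sketch, assuming systemd and the default Debian/Ubuntu paths:

# On the TARGET, before starting the stream on the source
systemctl stop mysql
mv /var/lib/mysql /var/lib/mysql.bak   # keep the stock datadir until the migration is verified
mkdir -p /var/lib/mysql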
Step 2: Prepare the Data
Once the stream finishes, the data on the target is inconsistent (it contains uncommitted transactions from the duration of the transfer). We need to apply the transaction logs.
On the target server:
innobackupex --apply-log /var/lib/mysql/
chown -R mysql:mysql /var/lib/mysql/

This process is significantly faster on CoolVDS KVM instances because the --apply-log step is heavy on random I/O reads and writes.
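Once --apply-log finishes and ownership is fixed, the datadir is ready to serve. A short verification sketch (systemd assumed; the error log path varies by distro):

systemctl start mysql
tail -n 50 /var/log/mysql/error.log    # look for a clean InnoDB startup, not repeated crash recovery
mysql -e "SELECT VERSION(); SHOW DATABASES;"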
Strategy B: Replication for the Final Cutover
For true zero-downtime (or seconds of downtime), we don't just copy data; we clone the server and keep it in sync until we are ready to switch.
1. Configure the Source (Master)
You need binary logging enabled. Edit your /etc/mysql/my.cnf (or my.ini):
[mysqld]
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
binlog_format = ROW
# Essential for data integrity
innodb_flush_log_at_trx_commit = 1
sync_binlog = 1

Restart MySQL. Create a replication user explicitly restricted to your private network or VPN IP to satisfy security audits:
CREATE USER 'repl_user'@'10.8.0.x' IDENTIFIED BY 'StrongPassword2018!';
GRANT REPLICATION SLAVE ON *.* TO 'repl_user'@'10.8.0.x';

2. The Dump & Position
We use mysqldump with the --master-data=2 flag. This locks the tables for a split second to record the binary log position, then dumps the data.
mysqldump --all-databases --master-data=2 --single-transaction > dbdump.sql

Check the head of the file to see the coordinates:

head -n 25 dbdump.sql | grep "CHANGE MASTER"

You will see something like: MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=98765;
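The dump contains personal data, so it travels over SSH like everything else. A minimal transfer-and-import sketch, reusing the same placeholder hostname as before:

scp dbdump.sql user@target-coolvds-ip:/tmp/
ssh user@target-coolvds-ip "mysql < /tmp/dbdump.sql && rm /tmp/dbdump.sql"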
3. Configure the Target (Slave)
Once the dump is loaded on your CoolVDS instance, configure the replication link:
CHANGE MASTER TO
MASTER_HOST='source_server_ip',
MASTER_USER='repl_user',
MASTER_PASSWORD='StrongPassword2018!',
MASTER_LOG_FILE='mysql-bin.000001',
MASTER_LOG_POS=98765;

Start the slave:
START SLAVE;

Now, your new server is a real-time mirror of the old one. You can test your application against the new server using a hosts file modification on your local machine without affecting live users.
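Before trusting the mirror, confirm both replication threads are running and the lag is trending towards zero. A quick check on the target:

mysql -e "SHOW SLAVE STATUS\G" | grep -E "Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master"
# Expect: Slave_IO_Running: Yes, Slave_SQL_Running: Yes, Seconds_Behind_Master approaching 0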
The Switchover: Handling the DNS TTL
The technical migration is done, but the traffic migration is where people fail. DNS propagation can take hours. In a Norwegian context, ISPs like Telenor or Altibox usually respect TTL (Time To Live), but you can't rely on it for a database connection string.
The Solution: Use a Floating IP or Load Balancer.
If you don't have a Load Balancer, use an SSH tunnel or a simple TCP proxy like HAProxy as a temporary bridge during the cutover.
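As a rough illustration, here is what a minimal HAProxy TCP bridge on the old host could look like, once the old mysqld has been stopped and port 3306 is free. The target IP is a placeholder; treat this as a sketch, not a production config, and merge it into your existing haproxy.cfg if you already run HAProxy:

cat > /etc/haproxy/haproxy.cfg <<'EOF'
defaults
    mode tcp
    timeout connect 5s
    timeout client  1m
    timeout server  1m

listen mysql_bridge
    bind *:3306
    server newdb 203.0.113.10:3306 check
EOF
systemctl restart haproxy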
- Lower DNS TTL to 60 seconds 24 hours prior.
- Stop the application writing to the Old DB.
- Wait for the New DB (Slave) to catch up (SHOW SLAVE STATUS should show Seconds_Behind_Master: 0).
- Promote the New DB to Master (STOP SLAVE; RESET MASTER;), as sketched below.
- Point the App to the New DB IP.
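A rough sketch of the catch-up wait and the promotion, run on the new server, assuming the application has already stopped writing and replication is healthy (if Seconds_Behind_Master reports NULL, the loop will never exit):

# Poll until the slave has applied everything from the old master
until [ "$(mysql -e "SHOW SLAVE STATUS\G" | awk '/Seconds_Behind_Master/ {print $2}')" = "0" ]; do
    sleep 2
done
mysql -e "STOP SLAVE; RESET MASTER;"
# Optionally clear the stored replication settings on the promoted server as well:
# mysql -e "RESET SLAVE ALL;"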
PostgreSQL Nuances
If you are running PostgreSQL (9.6 or 10), the logic is similar but the tools differ. We use pg_basebackup for the initial sync. It is robust and handles the streaming automatically.
pg_basebackup -h source_ip -D /var/lib/postgresql/10/main -U repuser -P -v --wal-method=stream

For Postgres, ensuring your wal_level is set to replica (or hot_standby in older versions) is mandatory. Also, ensure your pg_hba.conf allows the connection. I've seen too many migrations halt because the firewall allowed port 5432 but Postgres itself rejected the SSL connection.
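Two checks on the source that catch most of those failures; the paths assume a Debian/Ubuntu PostgreSQL 10 layout, and the subnet in the example entry is a placeholder:

sudo -u postgres psql -c "SHOW wal_level;"               # should report 'replica' on 9.6/10
grep -n "replication" /etc/postgresql/10/main/pg_hba.conf
# Example pg_hba.conf entry for the standby (adjust the subnet to your VPN/private network):
# hostssl  replication  repuser  10.8.0.0/24  md5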
Compliance and Geography
We cannot ignore the legal reality of 2018. If you are migrating personal data of Norwegian citizens, GDPR Article 44 restricts transfers outside the EEA. While US Safe Harbor is dead and Privacy Shield is under constant scrutiny, keeping data on Norwegian soil is the safest legal strategy.
Using a provider like CoolVDS with data centers in Norway (or strict EEA adherence) simplifies your Record of Processing Activities (ROPA). You don't need complex cross-border transfer assessments or model-clause paperwork if the data never leaves the jurisdiction.
Summary
Database migration is 90% preparation and 10% execution. By using replication instead of cold copying, you remove the time pressure of the maintenance window. By using NVMe-based infrastructure like CoolVDS, you eliminate the I/O bottleneck that causes restores to hang.
Don't let slow hardware dictate your uptime. Spin up a CoolVDS instance today, benchmark the disk I/O with fio, and see why our KVM slices are the preferred landing zone for high-performance databases in the Nordics.