Zero-Downtime Database Migration: A Survival Guide for Norwegian Systems
I still wake up in a cold sweat thinking about a migration I botched in 2014. It was a standard Magento deployment moving from a dedicated box in Frankfurt to a virtualized setup in Oslo. I underestimated the I/O wait times on the target node. The result? A 14-hour synchronization window that pushed the client into their peak trading hours. We survived, but just barely.
In 2017, there is absolutely no excuse for hard downtime during database migrations. Whether you are moving to comply with the upcoming GDPR requirements or simply chasing lower latency at NIX (Norwegian Internet Exchange), the strategy remains the same: Replicate, Verify, Switch.
This guide covers the technical reality of moving a live MySQL 5.7 database to a modern VPS environment, focusing on the infrastructure challenges unique to the Nordic region. The same replicate-verify-switch approach applies to PostgreSQL 9.6 with streaming replication, but the commands below are MySQL.
The Hardware Bottleneck: Why Your Target Matters
Before touching a single config file, look at your target storage. A database import is essentially a massive write operation. If you are migrating to a budget VPS host that throttles IOPS, your replication lag will never catch up to the master. You will be stuck watching Seconds_Behind_Master climb instead of falling to zero.
At CoolVDS, we standardized on local NVMe storage for this exact reason. When you are writing 50GB of raw InnoDB tables, SATA SSDs often hit a write cliff. NVMe sustains high queue depths without choking.
Pro Tip: Before starting a migration, benchmark your target disk. Use fio to ensure it can handle the write pressure.
# Basic FIO test for random write performance (Safe to run on empty target partition)
fio --name=randwrite --ioengine=libaio --iodepth=1 --rw=randwrite --bs=4k --direct=1 --size=1G --numjobs=1 --runtime=60 --group_reporting
Strategy 1: The "Dump and Pipe" (Small DBs < 5GB)
If your database is small, don't overengineer it. The fastest way to move data across the North Sea is a direct pipe over SSH with compression. This avoids writing an intermediate file to the disk.
# On the Source Server. Note: the mysql client on the target cannot prompt for a password
# inside the pipe, so store its credentials in ~/.my.cnf on the target (or pass -pPASSWORD).
mysqldump -u root -p --opt --single-transaction --quick --routines --triggers --events your_db_name | gzip -3 | ssh user@target-coolvds-ip "gunzip | mysql -u root your_db_name"
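If you do go this route, run a quick sanity check afterwards so you know the pipe delivered everything. A minimal sketch using the row estimates in information_schema (your_db_name is the same placeholder as above; InnoDB row counts are estimates, so look for gross mismatches rather than exact equality):
# Run this on both source and target and compare the output
mysql -u root -p -e "SELECT table_name, table_rows FROM information_schema.tables WHERE table_schema = 'your_db_name' ORDER BY table_name;"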
However, for anything mission-critical or much beyond that size, this is reckless. One network blip and you start over.
Strategy 2: The Master-Slave Replication (The Professional Standard)
This is how we move terabytes without users noticing. We set up the target CoolVDS instance as a Read Replica (Slave) of your current production server (Master). Once they are in sync, we promote the Slave to Master.
Step 1: Configure the Source (Master)
Edit your my.cnf (usually in /etc/mysql/). You need binary logging enabled. If you haven't done this, a restart is required (plan this maintenance window carefully).
[mysqld]
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
binlog_format = ROW
expire_logs_days = 7
# For data integrity (ACID compliance)
innodb_flush_log_at_trx_commit = 1
sync_binlog = 1
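After the restart, confirm that binary logging is actually active before going any further. A quick check from the shell on the source (assuming root credentials):
# log_bin should report ON and server_id should be 1
mysql -u root -p -e "SHOW VARIABLES LIKE 'log_bin'; SHOW VARIABLES LIKE 'server_id';"
# Current binlog file and position (for reference only; XtraBackup records its own coordinates later)
mysql -u root -p -e "SHOW MASTER STATUS;"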
Step 2: Create the Replication User
Security is paramount, especially given the Datatilsynet's increasing focus on data access controls. Restrict this user to the specific IP of your new CoolVDS server.
CREATE USER 'replicator'@'185.x.x.x' IDENTIFIED BY 'StrongPassword_2017!';
GRANT REPLICATION SLAVE ON *.* TO 'replicator'@'185.x.x.x';
FLUSH PRIVILEGES;
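Before moving on, check from the new CoolVDS instance that this user can actually reach the source on port 3306; a minimal sketch using the placeholders above:
# Run on the target: the replication user only needs to authenticate, not read tables
mysql -h source_server_ip -u replicator -p'StrongPassword_2017!' -e "SELECT 1;"
If this fails, fix the firewall between the two hosts before you start blaming MySQL.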
Step 3: Snapshot and Position
We use Percona XtraBackup (open source) because it takes a hot, physical copy of the InnoDB files instead of the long, single-threaded dump mysqldump needs at this size. It allows your shop to keep selling while you back up.
# Install Percona XtraBackup on Source
yum install percona-xtrabackup-24
# Create a hot backup
xtrabackup --backup --user=root --password=YOURPASS --target-dir=/data/backups/
This process generates a xtrabackup_binlog_info file. Guard this file with your life. It contains the binary log coordinates you need to start replication.
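The file itself is tiny; on the source it looks roughly like this (your coordinates will differ, and a GTID set may appear as a third field):
cat /data/backups/xtrabackup_binlog_info
# mysql-bin.000001    107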
Step 4: Import and Start Slave
Transfer the backup to your CoolVDS instance using rsync, run the XtraBackup prepare step, and restore the files into the MySQL data directory.
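A hedged sketch of that transfer-and-restore phase, assuming the backup sits in /data/backups/, the target uses the default /var/lib/mysql data directory, and the service is called mysqld under systemd (adjust paths and service names to your layout):
# On the Source: apply the redo log so the backup is consistent
xtrabackup --prepare --target-dir=/data/backups/
# Push the prepared backup to the target
rsync -avz /data/backups/ user@target-coolvds-ip:/data/backups/
# On the Target: MySQL must be stopped and the data directory empty before copy-back
systemctl stop mysqld
mv /var/lib/mysql /var/lib/mysql.old
mkdir /var/lib/mysql
xtrabackup --copy-back --target-dir=/data/backups/
chown -R mysql:mysql /var/lib/mysql
systemctl start mysqld
With the data directory restored and MySQL running on the target, point the replica at the master: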
# On the Target (CoolVDS) MySQL shell
CHANGE MASTER TO
MASTER_HOST='source_server_ip',
MASTER_USER='replicator',
MASTER_PASSWORD='StrongPassword_2017!',
MASTER_LOG_FILE='mysql-bin.000001', -- From xtrabackup_binlog_info
MASTER_LOG_POS= 107; -- From xtrabackup_binlog_info
START SLAVE;
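Then confirm replication is actually flowing. Both replication threads must report Yes and the lag figure should start dropping:
# Run on the target from the shell
mysql -u root -p -e "SHOW SLAVE STATUS\G" | egrep "Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master|Last_.*Error"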
Optimizing the Import Phase
When restoring the backup on the new server, you want raw speed. You can temporarily relax ACID compliance to speed up the InnoDB writes. Just remember to revert these settings before production traffic hits.
Add this to the my.cnf on the target server only during import:
[mysqld]
# DANGER: Only for import phase!
innodb_flush_log_at_trx_commit = 0
innodb_doublewrite = 0
innodb_write_io_threads = 16
innodb_log_buffer_size = 256M
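When the import is done, restore the safe values in my.cnf, restart MySQL, and verify the durability settings really came back before production traffic lands on the box:
# Both should show the durable production values after the revert
mysql -u root -p -e "SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit'; SHOW VARIABLES LIKE 'innodb_doublewrite';"
# Expect innodb_flush_log_at_trx_commit = 1 and innodb_doublewrite = ON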
The Cutover: Managing Latency and DNS
Once Seconds_Behind_Master is 0, you are ready.
- Lower TTLs: 24 hours before migration, drop your DNS TTL to 300 seconds.
- Read-Only Mode: Set your old master to read-only with SET GLOBAL read_only = ON;. This prevents split-brain scenarios where users keep writing to the old DB.
- Verify: Ensure the slave has executed the final transactions from the old master (the command sketch after this list walks through the check).
- Promote: Stop the slave process on CoolVDS, remove the master config, and point your app configs to the new IP (or switch the Virtual IP).
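A hedged sketch of that sequence, assuming root credentials on both boxes and that the application can tolerate a brief read-only window:
# 1. On the old master: block new writes (SUPER users can still write; on 5.7 use super_read_only if that matters)
mysql -u root -p -e "SET GLOBAL read_only = ON;"
# 2. On the old master: note the final binlog coordinates
mysql -u root -p -e "SHOW MASTER STATUS;"
# 3. On the CoolVDS replica: wait until the executed position matches and lag is 0
mysql -u root -p -e "SHOW SLAVE STATUS\G" | egrep "Relay_Master_Log_File|Exec_Master_Log_Pos|Seconds_Behind_Master"
# 4. Promote the replica: stop replication and drop the slave configuration
mysql -u root -p -e "STOP SLAVE; RESET SLAVE ALL;"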
Why Norway? Why Now?
With the EU General Data Protection Regulation (GDPR) looming on the horizon for 2018, knowing exactly where your data sits is becoming a legal necessity, not just a technical preference. Hosting outside the EEA or in "cloudy" jurisdictions is becoming risky.
By migrating to CoolVDS instances in Oslo, you secure three things:
- Compliance: Data stays within Norwegian jurisdiction.
- Latency: <2ms ping to major Norwegian ISPs.
- Stability: We run on enterprise hardware that doesn't buckle under I/O pressure.
Migrations are stressful. Your infrastructure shouldn't add to that stress. If you need a staging environment to test your replication scripts, spin up a high-performance instance today.
Don't let slow I/O kill your migration. Deploy a test instance on CoolVDS in 55 seconds.