Zero-Downtime Database Migration: A Post-Schrems II Survival Guide
It has been exactly three weeks since the CJEU (Court of Justice of the European Union) dropped the Schrems II ruling, effectively invalidating the Privacy Shield. If you manage data for Norwegian citizens on US-owned cloud infrastructure, your legal standing just evaporated. I've spent the last fortnight fielding calls from CTOs in Oslo who need to repatriate terabytes of user data to Norwegian soil immediately. They all ask the same question: "How do we move 500GB of live data without taking the platform down?"
Most tutorials will tell you to mysqldump, SCP the file, and restore it. That works if you run a knitting blog. It does not work if you process transactions. In a live environment, downtime means lost revenue, and data inconsistency means a corrupted ledger. We are going to look at how to execute a migration from a legacy provider (or a non-compliant US cloud) to a local Norwegian VPS using replication, not simple snapshots.
The Latency Variable: Why Location Matters
Before we touch a config file, understand the physics. If your current database is in us-east-1 and your new target is a VPS in Norway, you are dealing with 90-110ms of latency. For a synchronous replication setup, this will kill your write throughput. The application will hang while waiting for the ACK from across the Atlantic.
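Don't take my numbers on faith; measure the round trip from your current application host. The target hostname below is a placeholder for your own:
# Round-trip time to the candidate target
ping -c 10 your-new-host.example.no
# Hop-by-hop view, handy for spotting bad transit routes
mtr --report --report-cycles 10 your-new-host.example.no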
The strategy must be asynchronous replication. You establish a slave (replica) on the new host, let it catch up, and only then do you cut over. This reduces downtime from hours (transfer time) to seconds (switchover time).
Phase 1: The Source Configuration (Master)
Whether you use MySQL 8.0 or MariaDB 10.4, the principle remains the same: you need binary logging enabled on the source. Without it, the slave cannot replay the transactions that occur while you are moving the base snapshot.
Check your my.cnf (usually in /etc/mysql/ or /etc/my.cnf). If you are running on a budget host that restricts access to this file, you are already in trouble. On a proper root-access server like CoolVDS, you control this directly.
[mysqld]
# Must be unique on the network
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
binlog_format = ROW
expire_logs_days = 7
max_binlog_size = 100M
# Performance tweaks for 2020 hardware
innodb_flush_log_at_trx_commit = 1
sync_binlog = 1
A restart is required if these weren't already set. If you can't restart the master, you are stuck with the "maintenance window" method.
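Before scheduling that restart, check whether binary logging is already active; a quick query from the shell settles it:
# Confirm the current settings before planning any downtime
mysql -u root -p -e "SHOW VARIABLES WHERE Variable_name IN ('log_bin','binlog_format','server_id');"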
Pro Tip: If you are migrating MySQL 8.0, remember that the default authentication plugin changed to caching_sha2_password. If your application uses an older PHP driver (pre-7.4), it might fail to connect. When creating your replication user, be explicit about the plugin.
CREATE USER 'replicator'@'%' IDENTIFIED WITH mysql_native_password BY 'StrongPassword!2020';
GRANT REPLICATION SLAVE ON *.* TO 'replicator'@'%';
FLUSH PRIVILEGES;
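If you want to confirm the account really got the legacy plugin, check the grant tables on the master:
# Should report mysql_native_password for the replicator account
mysql -u root -p -e "SELECT user, host, plugin FROM mysql.user WHERE user = 'replicator';"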
Phase 2: The Base Snapshot
We need a coordinate: a point in time at which replication should start. We use mysqldump with the --master-data=2 flag, which writes the binary log coordinates into the dump file as a comment, so the slave knows exactly where to start reading.
mysqldump --all-databases \
--master-data=2 \
--single-transaction \
--quick \
--triggers \
--routines \
--events \
-u root -p | gzip > /tmp/dump.sql.gz
The --single-transaction flag is critical. It ensures that the dump is consistent without locking the tables (for InnoDB). If you are still using MyISAM tables in 2020, stop reading and migrate your storage engine first.
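Not sure whether MyISAM is hiding somewhere? A query along these lines, run on the source, will tell you:
# Any rows returned here will not get a consistent snapshot from --single-transaction
mysql -u root -p -e "SELECT table_schema, table_name FROM information_schema.tables WHERE engine = 'MyISAM' AND table_schema NOT IN ('mysql','information_schema','performance_schema','sys');"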
Phase 3: Secure Transport and Restoration
Do not transfer this dump over plain FTP or HTTP. You are moving data to comply with GDPR; exposing it during transit defeats the purpose. Use scp or rsync over SSH.
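A resumable transfer might look like this; the user, host, and paths are placeholders:
# -a preserves metadata, -P resumes partial transfers and shows progress
# Skip rsync's -z: the dump is already gzipped
rsync -avP -e ssh /tmp/dump.sql.gz deploy@your-new-host.example.no:/tmp/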
At CoolVDS, we see transfer speeds saturate 1Gbps uplinks easily, but disk I/O is usually the bottleneck on the receiving end during import. This is where NVMe storage becomes non-negotiable. Importing a 50GB SQL dump on a spinning rust HDD takes hours. On NVMe, it takes minutes.
# On the target (CoolVDS) machine
gunzip < dump.sql.gz | mysql -u root -p
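If you want visibility into how far along a multi-gigabyte import is, the pv utility (packaged separately on most distros) can front the pipe:
# Same import, with a progress bar and throughput readout
pv /tmp/dump.sql.gz | gunzip | mysql -u root -p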
Phase 4: Establishing Replication
Once the import is done, inspect the dump file to find the coordinates:
zgrep "CHANGE MASTER" dump.sql.gz | head -1
You will see something like: MASTER_LOG_FILE='mysql-bin.000004', MASTER_LOG_POS=94523;.
Now, configure the slave. This is your new Norwegian home for the data. Ensure your server-id in my.cnf is different (e.g., 2).
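If the replica still has stock settings, a minimal my.cnf addition does the job. The include path below assumes a Debian/Ubuntu layout, so adjust it for your distro:
# Run as root on the new replica
cat > /etc/mysql/mysql.conf.d/replica.cnf <<'EOF'
[mysqld]
server-id = 2
relay_log = /var/log/mysql/mysql-relay-bin.log
read_only = ON   # blocks stray writes during sync; flip OFF at cutover
EOF
systemctl restart mysql
With that in place, point the replica at the old master: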
CHANGE MASTER TO
MASTER_HOST='10.0.0.5', -- IP of the old server
MASTER_USER='replicator',
MASTER_PASSWORD='StrongPassword!2020',
MASTER_LOG_FILE='mysql-bin.000004',
MASTER_LOG_POS=94523;
START SLAVE;
Check the status with SHOW SLAVE STATUS \G. You are looking for Seconds_Behind_Master to hit 0. If you see IO errors, check your firewall. We recommend tunneling replication traffic over SSH or a VPN if you aren't on a private VLAN.
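A low-tech way to keep an eye on it while the slave catches up (this assumes credentials in ~/.my.cnf, since typing a password under watch is no fun):
# Refresh the key replication fields every five seconds
watch -n 5 "mysql -e 'SHOW SLAVE STATUS\G' | grep -E 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master|Last_IO_Error'"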
Security: Tunneling Replication
If you don't have a private network between your old US host and CoolVDS, do not expose port 3306 to the public internet. Use an SSH tunnel:
ssh -N -L 3307:127.0.0.1:3306 user@old-server-ip -f
Then point your slave to 127.0.0.1 on port 3307.
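In practice that just means adding MASTER_PORT to the Phase 4 statement; a sketch, reusing the same credentials and coordinates:
# On the replica: replication now rides through the local SSH tunnel
mysql -u root -p <<'SQL'
CHANGE MASTER TO
  MASTER_HOST='127.0.0.1',
  MASTER_PORT=3307,
  MASTER_USER='replicator',
  MASTER_PASSWORD='StrongPassword!2020',
  MASTER_LOG_FILE='mysql-bin.000004',
  MASTER_LOG_POS=94523;
START SLAVE;
SQL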
Phase 5: The Cutover
Your slave is running and synced. It's time to switch. This is the only moment of "downtime," usually lasting 5-10 seconds.
- Stop Writes on Master: SET GLOBAL read_only = ON;
- Wait for Sync: ensure Seconds_Behind_Master is 0 on the slave.
- Promote Slave: STOP SLAVE; RESET SLAVE ALL;
- Update App Config: point your application connection strings to the new CoolVDS IP.
The whole sequence can be scripted, as sketched below.
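Pieced together, a rough cutover script might look like this. The host names are placeholders, remote root access to both servers is assumed, and credentials are expected in ~/.my.cnf on the machine running it:
#!/usr/bin/env bash
set -euo pipefail

OLD_DB="old-master.example.com"    # placeholder: current (US) master
NEW_DB="new-replica.example.no"    # placeholder: new CoolVDS replica

# 1. Freeze writes on the old master
mysql -h "$OLD_DB" -e "SET GLOBAL read_only = ON;"

# 2. Wait until the replica has applied everything
#    (a NULL Seconds_Behind_Master means the IO thread died; investigate instead of waiting)
until [ "$(mysql -h "$NEW_DB" -e 'SHOW SLAVE STATUS\G' | awk '/Seconds_Behind_Master/ {print $2}')" = "0" ]; do
  sleep 1
done

# 3. Promote the replica and open it for writes
mysql -h "$NEW_DB" -e "STOP SLAVE; RESET SLAVE ALL; SET GLOBAL read_only = OFF;"

echo "Cutover complete. Point the application at $NEW_DB."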
Why Infrastructure Choice Dictates Success
I recently audited a failed migration where the target VPS simply couldn't handle the write load during the catch-up phase. The disk queue length spiked, the slave fell behind, and it never caught up. The culprit was shared standard SSD storage with noisy neighbors.
When migrating databases, you are effectively doubling the write load (importing history + writing new binlogs). This is why we standardize on high-performance NVMe at CoolVDS. You need the IOPS headroom to absorb the sync phase without choking. Furthermore, ensuring your data resides physically in Oslo satisfies the Datatilsynet requirements that are currently keeping legal teams awake at night.
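If you want to verify that headroom rather than trust a datasheet, a quick fio run on the target volume gives you a baseline. fio is packaged on every major distro; the scratch directory is my own placeholder, so create it on the same volume as your datadir and delete it afterwards:
# 4k random writes, direct I/O, small test file
fio --name=dbwrite --directory=/var/lib/mysql-scratch --rw=randwrite --bs=4k --size=1G --numjobs=4 --iodepth=32 --ioengine=libaio --direct=1 --group_reporting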
Data sovereignty isn't just a buzzword in 2020; it's a legal requirement. Moving your database shouldn't be a gamble. Test your backups, script your cutover, and ensure your target hardware is up to the task.