Zero-Downtime Database Migrations: A Survival Guide for Moving to Norwegian Infrastructure
There is a specific kind of silence that falls over a DevOps team at 3:00 AM when a database migration hangs at 98%. It’s not peaceful. It’s the sound of careers hanging by a thread. I’ve been there. I’ve seen rsync fail because of a flaky network route in Frankfurt, and I’ve seen logical backups take six hours longer than estimated because the target disk I/O was throttled by a "cloud" provider.
In 2021, moving data isn't just about technical execution; it's about legal survival. With the fallout from the Schrems II ruling last year, relying on US-owned hyperscalers is becoming a compliance nightmare for European companies. The Datatilsynet (Norwegian Data Protection Authority) is watching. Moving your data to sovereign Norwegian infrastructure isn't paranoia anymore—it's prudence.
But how do you move a 500GB MySQL production instance from a US region to Oslo without taking the application offline for half a day? You don't use a simple dump-and-restore. You build a bridge.
The Architecture of a Live Migration
The only professional way to migrate a live database is via Master-Slave Replication. We treat the new server (the CoolVDS instance in Oslo) as a replica of your current production server. Once they sync, we promote the replica to master.
Pro Tip: Never trust the public WAN during a migration. If you are moving terabytes, ship a physical drive. If you are moving gigabytes, use a VPN or an SSH tunnel with compression. Raw TCP over the public internet is asking for packet loss and stalled transfers.
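If you go the tunnel route, a compressed SSH forward is often all you need. A minimal sketch, run from the destination server; the local port 3307 is an arbitrary choice and source_server_ip is a placeholder:

ssh -C -N -f -L 3307:127.0.0.1:3306 user@source_server_ip

This forwards local port 3307 to the source's MySQL port, so the replica can point at 127.0.0.1:3307 instead of crossing the open internet in the clear.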
Phase 1: Assessing the Target
Your destination hardware matters more than your migration script. During the import phase, your database is I/O bound. If you are migrating to a cheap VPS with spinning rust (HDD) or network-attached storage (NAS), your import will crawl. The transaction logs will fill up faster than the disk can write, and you will crash.
This is why we standardized on local NVMe storage at CoolVDS. When we run iostat -x 1 during a heavy import on our KVM nodes, we want to see wait times near zero. If you use shared hosting, your neighbor's WordPress blog usually steals your IOPS.
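If you want numbers before you commit, a quick fio run on the target gives a rough picture of random-write performance, which is what a bulk import hammers. A sketch, assuming fio is installed; point --filename at the filesystem that will hold your datadir, and note the 16k block size roughly mirrors InnoDB's page size:

fio --name=import-test --filename=/var/lib/fio-test.bin --ioengine=libaio \
    --rw=randwrite --bs=16k --size=2G --iodepth=32 --direct=1 \
    --runtime=60 --time_based --group_reporting
rm /var/lib/fio-test.bin

Local NVMe will typically report an order of magnitude more IOPS here than throttled network storage.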
Phase 2: Preparing the Source (MySQL/MariaDB Example)
First, ensure your source database (the one you are leaving) is configured to write binary logs. Without this, replication is impossible. Check your my.cnf or my.ini:
[mysqld]
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
binlog_format = ROW
expire_logs_days = 7
max_binlog_size = 100M
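Once those settings are in place and MySQL has restarted, it is worth confirming that binary logging actually took effect before going any further:

mysql -u root -p -e "SHOW VARIABLES LIKE 'log_bin'; SHOW MASTER STATUS;"

log_bin should report ON, and SHOW MASTER STATUS should list the current binlog file and position.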
With logging confirmed, create a user dedicated to the replication stream. Do not use root. Security still applies during maintenance windows.
CREATE USER 'repl_user'@'%' IDENTIFIED BY 'StrongPassword_2021!';
GRANT REPLICATION SLAVE ON *.* TO 'repl_user'@'%';
FLUSH PRIVILEGES;
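A quick sanity check that the grant landed as intended:

SHOW GRANTS FOR 'repl_user'@'%';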
Phase 3: The Snapshot
We need a consistent snapshot of the data to initialize the replica. If you use mysqldump on a live system without --single-transaction, you will lock your tables. Your users will see 504 errors. Don't be that person.
Run this command (preferably inside a screen or tmux session):
mysqldump -u root -p \
--all-databases \
--single-transaction \
--quick \
--master-data=2 \
| gzip > /tmp/db_dump.sql.gz
The --master-data=2 flag is critical. It writes the binary log coordinates (filename and position) into the dump file header as a comment. We need those coordinates to tell the CoolVDS server exactly where to start syncing.
Phase 4: Transfer and Restore
Move the file to your new Norwegian server. This is where network peering matters. CoolVDS peers directly at NIX (Norwegian Internet Exchange), so if you are coming from elsewhere in Europe, the path is usually optimized.
scp -P 22 /tmp/db_dump.sql.gz user@coolvds-oslo-ip:/tmp/
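If the dump is large or the route is shaky, rsync over SSH can resume where scp gives up. A sketch of the same transfer; the dump is already gzipped, so there is no point adding rsync's own compression:

rsync -av --partial --progress -e "ssh -p 22" \
    /tmp/db_dump.sql.gz user@coolvds-oslo-ip:/tmp/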
On the destination (CoolVDS) server, configure the my.cnf with a unique server-id (e.g., 2) and import the data. This is the moment where disk speed reigns supreme: NVMe drives chew through these inserts dramatically faster than standard SATA SSDs.
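A minimal destination-side my.cnf sketch. The relay log path mirrors the source layout above, and the read_only flag is my own habit rather than a requirement; it guards against stray writes while the box is still a replica (root bypasses it via SUPER, so the import still works):

[mysqld]
server-id = 2
relay_log = /var/log/mysql/mysql-relay-bin.log
log_bin = /var/log/mysql/mysql-bin.log
read_only = ON

Keeping log_bin enabled on the replica means it is ready to act as a source itself after the cutover. Restart MySQL, then pipe the dump straight in: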
zcat /tmp/db_dump.sql.gz | mysql -u root -p
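If the import crawls, one option is to relax durability on the target for the duration of the load; the box is not serving production traffic yet. These are standard InnoDB and binlog knobs, but treat the exact values as a starting point rather than gospel:

-- On the destination, before (re)starting the import:
SET GLOBAL innodb_flush_log_at_trx_commit = 2;
SET GLOBAL sync_binlog = 0;

-- Once the import finishes, restore full durability:
SET GLOBAL innodb_flush_log_at_trx_commit = 1;
SET GLOBAL sync_binlog = 1;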
Phase 5: The Sync
Once the import finishes, grab the coordinates from the head of the dump file:
zgrep "CHANGE MASTER" /tmp/db_dump.sql.gz | head -n 1
You’ll see something like MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=987654;. Now, configure the slave:
CHANGE MASTER TO
MASTER_HOST='source_server_ip',
MASTER_USER='repl_user',
MASTER_PASSWORD='StrongPassword_2021!',
MASTER_LOG_FILE='mysql-bin.000001',
MASTER_LOG_POS=987654;
START SLAVE;
Check the status with SHOW SLAVE STATUS \G. You are looking for Slave_IO_Running: Yes, Slave_SQL_Running: Yes, and Seconds_Behind_Master: 0. If Seconds_Behind_Master is NULL, something is broken; check Last_IO_Error and Last_SQL_Error in the same output. If it's a high number, the replica is still catching up.
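To watch the catch-up without retyping the query, something like this works, assuming credentials are stored in ~/.my.cnf so mysql does not prompt for a password:

watch -n 5 "mysql -e 'SHOW SLAVE STATUS\G' | grep -E 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master'"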
The Cutover (Schrems II Compliant)
At this point, you have two identical databases. The one in the US is taking writes. The CoolVDS one in Norway is replaying them with near-zero lag.
- Put the app in maintenance mode. (This is the only downtime required—usually seconds).
- Stop writes to the old database.
- Wait for the replica to catch up (Seconds_Behind_Master = 0).
- Promote the CoolVDS replica to Master (STOP SLAVE; RESET SLAVE ALL;), as shown in the sketch after this list.
- Point your app config to the new IP.
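For reference, a minimal sketch of steps 2 and 4 in SQL. The read_only toggles assume your application user does not hold the SUPER privilege, since SUPER bypasses read_only:

-- On the old (US) master: block new writes
SET GLOBAL read_only = ON;

-- On the CoolVDS replica, once Seconds_Behind_Master = 0: promote it
STOP SLAVE;
RESET SLAVE ALL;
SET GLOBAL read_only = OFF;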
Congratulations. You have just moved your critical data jurisdiction to Norway without losing a single transaction.
Why Infrastructure Matters
A migration script is only as good as the underlying metal. I've seen replication fail because of CPU steal time on oversold cloud platforms. When the hypervisor gets busy, your replication thread gets paused and lag accumulates.
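You can spot this from inside the guest: the st column in vmstat (or %st in top) shows how long the hypervisor kept your vCPU waiting.

vmstat 1 5

Anything consistently above a few percent during a migration window is a red flag.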
At CoolVDS, we use KVM (Kernel-based Virtual Machine) for strict isolation. We don't overcommit RAM, and our storage backend is pure NVMe. For a database, latency is the enemy. Whether it's network latency from routing through too many hops or disk latency from noisy neighbors, it kills performance.
If you are planning a move to comply with European data laws, or just want better latency for your Nordic users, don't let slow I/O compromise your migration.
Ready to test the speed difference? Spin up a CoolVDS instance in Oslo. It takes 55 seconds, and the NVMe throughput speaks for itself.