Zero-Downtime Database Migration: The 2019 Survival Guide for Norwegian Infrastructure
I still remember the silence in the Slack channel. It was 2016, and a mid-sized e-commerce store based in Trondheim decided to move providers using a simple mysqldump on a 500GB database during peak hours. The transfer stalled. The site was down for six hours. The CTO was not amused.
In 2019, if you are bringing services down for maintenance, you are doing it wrong. Whether you are moving from an outdated on-premise rack to a cloud VPS or switching providers to escape noisy neighbors, migration is a surgical procedure, not a blunt force trauma.
This guide covers the strategies I use to move terabytes of data across the Norwegian wire with minimal interruption, focusing on data integrity, latency, and the raw I/O power needed to execute the switch.
The Latency Equation: Why Geography Matters
Before we touch a config file, let's talk about physics. If your target audience is in Norway, hosting your database in Frankfurt or Amsterdam adds unnecessary milliseconds to every query round-trip. For a query-heavy application stuck in an N+1 pattern, a page that fires 100 sequential queries turns 20ms of extra latency into 2 full seconds of page load time.
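It is worth measuring this before you commit to a datacenter. A crude but honest check is a plain round-trip test from wherever your application servers will live (the hostnames here are placeholders, and the target has to answer ICMP):
# Compare a Norwegian endpoint against a continental one from an Oslo vantage point
ping -c 10 db.osl.example.net
ping -c 10 db.fra.example.net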
Data sovereignty is the other half of this coin. With the GDPR almost a year old now, keeping your user data within Norwegian borders satisfies the Datatilsynet's stringent requirements and earns your users' trust. We built CoolVDS on this premise: low latency to NIX (Norwegian Internet Exchange) and strict compliance with local data laws.
Strategy A: The "Dump and Pray" (Avoid This)
For databases under 1GB, a dump is fine. Anything larger, and you are gambling with consistency and downtime. The I/O wait times on standard SSDs—or worse, HDDs—during the import phase will kill you.
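If you do take the dump route for a small database, at least make it a consistent, non-blocking one and compress it in flight. A minimal sketch, assuming InnoDB tables and a database called shop:
# Consistent snapshot without locking InnoDB tables; fine for small datasets only
mysqldump --single-transaction --quick --routines --triggers shop | gzip > shop.sql.gz
# Load it on the new server
gunzip < shop.sql.gz | mysql shop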
Strategy B: Replication-Based Migration (The Gold Standard)
The only professional way to migrate a live database is to make your new server a slave (replica) of the old one. Once they are in sync, you promote the slave to master. This reduces downtime to the few seconds it takes to switch your application's connection string.
Step 1: The Initial Sync
Don't lock the master for hours. Use Percona XtraBackup. It allows you to take a hot backup without locking your InnoDB tables. If you are on a CoolVDS instance, our NVMe storage absorbs the I/O impact significantly better than standard SAN storage.
# On the OLD server (Master)
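# Assumes credentials are available to xtrabackup (e.g. via ~/.my.cnf); otherwise add --user and --password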
xtrabackup --backup --target-dir=/data/backups/ --datadir=/var/lib/mysql
# Prepare the backup (apply logs)
xtrabackup --prepare --target-dir=/data/backups/
Step 2: Transferring the Data
Use rsync. It's robust, resumable, and standard. Don't use FTP. Just don't.
rsync -avz --partial --progress /data/backups/ user@new-server-ip:/data/backups/
Step 3: Configuring the Replica
On your new CoolVDS server, you need to tell MySQL where to start replicating. The XtraBackup process generates a file called xtrabackup_binlog_info. Read it.
cat /data/backups/xtrabackup_binlog_info
# Output: mysql-bin.000003 48291
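The replica also needs an account it can connect with. Create the repl_user referenced below on the old server before you start replication. A minimal sketch, with the host range and password as placeholders:
# On the OLD server (Master): create the replication account
mysql -u root -p -e "CREATE USER 'repl_user'@'10.0.0.%' IDENTIFIED BY 'SecurePassword123!';"
mysql -u root -p -e "GRANT REPLICATION SLAVE ON *.* TO 'repl_user'@'10.0.0.%';"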
Now, configure the slave:
CHANGE MASTER TO
MASTER_HOST='10.0.0.5',
MASTER_USER='repl_user',
MASTER_PASSWORD='SecurePassword123!',
MASTER_LOG_FILE='mysql-bin.000003',
MASTER_LOG_POS=48291;
START SLAVE;
Check the status. You want to see Seconds_Behind_Master: 0.
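From the shell, that check looks something like this:
# Both replication threads should say "Yes" and the lag should trend towards 0
mysql -u root -p -e "SHOW SLAVE STATUS\G" | grep -E "Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master"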
Pro Tip: If your new server is struggling to catch up during the import, you are likely I/O bound. This is where hardware matters. We use enterprise-grade NVMe drives at CoolVDS specifically to prevent the "replication lag loop of death" where a slave can never catch up to a busy master.
Tuning for the Import
If you must perform a logical import (like a pg_dump for PostgreSQL or mysqldump), the default configurations are too safe. They prioritize durability (ACID) over speed. For the duration of the import only, we can relax these constraints. Just remember to revert them immediately after.
Here is a focused my.cnf snippet for the import phase:
[mysqld]
# Disable disk sync on every transaction.
# DANGEROUS for production, perfect for initial import.
innodb_flush_log_at_trx_commit = 2
# Increase buffer pool size (roughly 70% of RAM; 4G assumes a ~6GB instance)
innodb_buffer_pool_size = 4G
# Increase log file size to reduce checkpointing activity
innodb_log_file_size = 512M
# Disable the doublewrite buffer for extra write speed (NVMe usually handles partial writes well, and a failed import can simply be re-run)
innodb_doublewrite = 0
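Treat these as a throwaway configuration, not a new baseline. One way to keep yourself honest is to keep the original file next to it and swap it back the moment the import finishes. A rough workflow, assuming a Debian/Ubuntu-style layout and a database called shop:
# Keep the production config safe before editing
cp /etc/mysql/my.cnf /etc/mysql/my.cnf.production
# ...add the import-phase settings above, then restart to apply them
systemctl restart mysql
# Run the import
mysql shop < shop.sql
# Put the safe settings back and restart again
cp /etc/mysql/my.cnf.production /etc/mysql/my.cnf
systemctl restart mysql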
For PostgreSQL users, tuning maintenance_work_mem is your best friend during a restore:
# postgresql.conf optimization for import
fsync = off # Turn back ON immediately after import!
full_page_writes = off
maintenance_work_mem = 1GB
autovacuum = off
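With durability relaxed, the other big PostgreSQL win is parallelism. A custom-format dump restores with several jobs at once; a sketch assuming a database named shop, a role named shop_user, and four spare cores:
# Dump in custom format on the old server
pg_dump -Fc -h old-server-ip -U shop_user shop > shop.dump
# Restore with four parallel jobs on the new server (the empty database must exist first)
createdb -U shop_user shop
pg_restore -j 4 -d shop -U shop_user shop.dump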
The CoolVDS Factor: Why Architecture Wins
You can have the best DBA scripts in the world, but if the underlying host is overselling CPU cycles or choking on I/O, your migration will fail. I've seen providers claiming "SSD" performance that barely hits 300 IOPS because of noisy neighbors.
At CoolVDS, we isolate resources using KVM. When you are writing 500GB of data, you get the full throughput of the NVMe array. We don't throttle you when you need it most. This isn't just about speed; it's about the safety window. The faster the import, the smaller the window for something to go wrong.
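Don't take any provider's word for it, ours included: measure. fio gives you a quick read on sustained random-write IOPS; this sketch assumes fio is installed and a few gigabytes of free space in the working directory:
# 4k random writes with direct I/O; compare the reported IOPS across providers
fio --name=randwrite --ioengine=libaio --rw=randwrite --bs=4k --direct=1 \
    --size=2G --numjobs=4 --runtime=60 --time_based --group_reporting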
Final Checklist Before the Switch
| Check | Why it matters |
|---|---|
| Firewall Rules | Ensure port 3306/5432 is open ONLY to the application server IPs. Use iptables or ufw (see the snippet after this table). |
| User Grants | Did you migrate the mysql.user table? Don't forget permissions. |
| Cron Jobs | Check crontab -l. Database backups or cleanup scripts need to move too. |
| DNS TTL | Lower your DNS TTL to 300 seconds 24 hours before the migration. |
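For the firewall row, a minimal ufw sketch; the application server IP is an example:
# Allow MySQL only from the application server, reject everyone else
ufw allow from 10.0.0.20 to any port 3306 proto tcp
ufw deny 3306/tcp
ufw status numbered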
Conclusion
Migration is the ultimate stress test for your infrastructure and your skills. By relying on replication and temporarily relaxing durability during imports, you turn a terrifying event into a routine maintenance task.
Don't let slow I/O be the reason your migration fails at 3 AM. If you are planning a move, spin up a high-performance CoolVDS instance and test your throughput. Your future self (and your uptime metrics) will thank you.