The "Enter" Key Anxiety
It’s 03:00 CET. You are staring at a terminal window. The command is typed. Your finger hovers over the Enter key. If this migration fails, the morning traffic hitting your Oslo-based e-commerce platform will be greeted by a 500 error, and the CTO will be greeting you with a resignation form.
Database migration is the single most stressful task in systems administration. It’s where data integrity meets hard infrastructure limits. I have seen migrations go south not because the SQL was wrong, but because the target disk I/O choked on the import, or because the latency between the old provider in Frankfurt and the new one in Norway caused replication lag to spiral out of control.
This isn't a tutorial for moving a WordPress blog. This is for when you have 500GB+ of relational data and you cannot afford more than 60 seconds of downtime. Let’s talk about how to do this properly using replication, consistent snapshots, and high-performance infrastructure like CoolVDS.
The "Dump and Pray" Method (And Why It Fails)
The standard advice—run mysqldump, SCP the file, and restore—is a lie for production environments. It works fine for small datasets. But once your database exceeds 20-30GB, the restore time becomes the bottleneck. Restoring a logical dump is CPU and I/O intensive. You are essentially asking the database to replay millions of INSERT statements.
If you are on standard SATA SSDs (or worse, spinning disks), your write speeds will cap out. The restore that took 20 minutes in staging takes 4 hours in production because of noisy neighbors stealing your IOPS. This is why we enforce NVMe storage on all CoolVDS KVM instances; you need raw write throughput during the restore phase.
Optimizing the Dump
If you absolutely must use a logical dump, stop using the defaults. You want to stream rows straight to the client instead of buffering whole tables in memory, and compress the stream to save network bandwidth.
# The "I know what I'm doing" mysqldump command
mysqldump --single-transaction \
--quick \
--compress \
--hex-blob \
--order-by-primary \
--net_buffer_length=16384 \
-u root -p production_db | pv | gzip > prod_dump_2022_03_30.sql.gz
Using pv (Pipe Viewer) is non-negotiable here. It gives you a progress bar. Staring at a blinking cursor for an hour is bad for your blood pressure.
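The restore side deserves the same treatment. A minimal sketch, assuming the dump file produced above and a target database that already exists:
# Stream the compressed dump straight into the new server, with a progress bar
pv prod_dump_2022_03_30.sql.gz | gunzip | mysql -u root -p production_db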
The Real Strategy: Master-Slave Replication
The only way to migrate a large database with near-zero downtime is to make your new server a slave (replica) of your current production server. You sync the bulk of the data while the old server is still live, let the new server catch up via the binary log, and then switch over in seconds.
1. The Configuration
First, ensure your source database (Master) has binary logging enabled and a unique server ID. Edit your my.cnf (on Debian/Ubuntu, typically /etc/mysql/mysql.conf.d/mysqld.cnf):
[mysqld]
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
binlog_format = ROW
gtid_mode = ON
enforce_gtid_consistency = ON
expire_logs_days = 7
Using GTID (Global Transaction ID) is crucial in 2022. It makes failover and tracking replication coordinates significantly safer than the old "filename + position" method.
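The master also needs an account the replica can connect with. A minimal sketch, reusing the credentials from the CHANGE MASTER statement further down (the host pattern is a placeholder for your new server's IP):
-- On the OLD server (The Master): create a dedicated replication account
CREATE USER 'replicator'@'new_server_ip' IDENTIFIED BY 'SecurePassword!2022';
GRANT REPLICATION SLAVE ON *.* TO 'replicator'@'new_server_ip';
On the new CoolVDS instance, use the same GTID settings but a different server-id (for example, server-id = 2).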
2. The Snapshot
Use Percona XtraBackup for MySQL. It takes a physical backup (copying the actual data files) rather than a logical backup. It is much faster to restore. If you are using PostgreSQL, use pg_basebackup.
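For the MySQL side, a minimal XtraBackup run looks roughly like this (credentials and the target directory are placeholders):
# On the OLD server: take a physical, non-blocking backup of the data directory
xtrabackup --backup --user=root --password='RootPassword' --target-dir=/data/backup/full
# Apply the redo log so the files are consistent, then copy the directory to the new server's datadir
xtrabackup --prepare --target-dir=/data/backup/full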
# PostgreSQL streaming replication setup example
pg_basebackup -h source_ip -D /var/lib/postgresql/14/main -U replicator -P -v -X stream
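On PostgreSQL 12 and newer, adding -R to that command also writes standby.signal and the primary_conninfo setting for you, so the new server comes up as a streaming replica without hand-editing config files:
# Same command, but let pg_basebackup write the standby configuration too
pg_basebackup -h source_ip -D /var/lib/postgresql/14/main -U replicator -P -v -X stream -R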
3. The Catch-Up
Once the data is on the new CoolVDS instance, configure it to follow the master. Since CoolVDS servers are located in optimal proximity to major European exchange points, replication lag due to network latency is minimized.
-- On the NEW server (The Slave)
CHANGE MASTER TO
MASTER_HOST='192.0.2.10', -- Old Server IP
MASTER_USER='replicator',
MASTER_PASSWORD='SecurePassword!2022',
MASTER_AUTO_POSITION=1;
START SLAVE;
Run SHOW SLAVE STATUS\G. Look for Seconds_Behind_Master. It should drop to 0.
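To watch the catch-up from the shell, something like this works (a sketch; it assumes client credentials are configured in ~/.my.cnf):
# Poll replication health every 2 seconds
watch -n2 "mysql -e 'SHOW SLAVE STATUS\G' | grep -E 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master'"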
Infrastructure Matters: The Nordic Context
Here is where geography bites you. If your current server is in a budget data center in the US and you are migrating to Norway to comply with GDPR or Schrems II, you are fighting the speed of light. Latency impacts replication. If your write rate on the master exceeds the rate at which data can traverse the Atlantic, the slave will never catch up.
We see this often. A customer tries to migrate a high-churn database to our Oslo facility. The link is stable, but the throughput isn't enough.
Pro Tip: If migrating across continents, set up an intermediate relay server or use a compressed SSH tunnel for the replication stream to reduce the bandwidth overhead.
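A sketch of that tunnel, assuming SSH access to the old master (the user and local port 3307 are arbitrary choices):
# On the NEW server: compressed, backgrounded SSH tunnel to the master's MySQL port
ssh -f -N -C -L 3307:127.0.0.1:3306 deploy@192.0.2.10
# Then point CHANGE MASTER TO at MASTER_HOST='127.0.0.1', MASTER_PORT=3307
Alternatively, MySQL's slave_compressed_protocol setting compresses the replication stream itself, without a tunnel.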
Checking for Bottlenecks
Before you switch DNS, stress test the new environment. Use iostat to ensure your disk wait times are acceptable.
# Check Disk I/O every 1 second
iostat -xz 1
If %iowait is consistently above 5-10%, your storage is too slow. On CoolVDS, we utilize NVMe arrays specifically to keep this number near zero, even under heavy random write loads typical of databases.
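To generate the kind of random-write pressure a database import produces, fio is a reasonable stand-in (a sketch; file size, job count, and queue depth are arbitrary):
# 4K random writes with O_DIRECT, roughly the pattern of an InnoDB-heavy import
fio --name=dbwrite --rw=randwrite --bs=4k --size=2G --numjobs=4 \
    --iodepth=32 --ioengine=libaio --direct=1 --runtime=60 --time_based --group_reporting
Watch iostat in a second terminal while it runs.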
The Cutover
When you are ready:
- Set the old application to Read-Only mode (or stop the web server).
- Wait for Seconds_Behind_Master on the CoolVDS instance to hit 0.
- Stop the Slave: STOP SLAVE;
- Promote the new DB to Master (Reset Slave). The SQL sequence is sketched after this list.
- Point your application config to the new IP.
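A minimal sketch of that promotion sequence (run each statement on the server indicated in the comment; assumes the replica was configured read-only):
-- On the OLD server: stop accepting writes
SET GLOBAL read_only = ON;
-- On the NEW server: once Seconds_Behind_Master is 0, detach and promote
STOP SLAVE;
RESET SLAVE ALL;
SET GLOBAL read_only = OFF;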
Total downtime? About 30 seconds. No data loss. No corrupt tables.
Summary
Database migration is 20% SQL knowledge and 80% infrastructure planning. You need predictable I/O, low network latency, and a solid rollback plan. Don't let slow I/O kill your SEO or your uptime statistics. If you need a staging ground to test your replication scripts, deploy a test instance on CoolVDS. Our NVMe storage eats database imports for breakfast.