Surviving the Migration: Zero-Downtime Strategies for MySQL 5.7 & PostgreSQL 9.6
I have seen grown men cry because of a corrupted ibdata1 file. Migrating a database isn't just about moving bytes from Server A to Server B; it is about maintaining consistency while the world keeps spinning. If you are running a high-traffic Magento store or a SaaS platform targeting Oslo, you cannot afford a "maintenance window" that lasts four hours. Your customers will leave. Your boss will scream.
The standard mysqldump approach is dead for anything over 5GB. Without --single-transaction it locks your tables, and even with it the single-threaded restore kills I/O; by the time it finishes, your data is stale. In late 2016, with the looming GDPR requirements from the EU and Datatilsynet cracking down on data availability, we need smarter tactics.
The Hardware Bottleneck: Why I/O Kills Migrations
Before we touch a single configuration file, let's talk about the metal. Most migration failures I debug aren't logical; they are physical. You are piping gigabytes of data. If your target VPS is sitting on shared spinning rust (HDD) or a crowded SATA SSD with noisy neighbors, your import speed will crawl. You will hit IO wait (%wa in top) of 60%+, and your application will time out.
Pro Tip: Always verify the underlying storage technology. We reference CoolVDS in our architecture designs because they enforce KVM isolation on NVMe arrays. NVMe offers queue depths that SATA simply cannot match, which is critical when you are hammering the disk with write operations during a restore.
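Two quick checks tell you most of that story before you commit to a migration window. A minimal sketch; the device name sda is an assumption, so adjust for your box:
# 1 = rotational HDD, 0 = SSD/NVMe
cat /sys/block/sda/queue/rotational
# Extended per-device stats every 2 seconds, 5 reports (sysstat package)
iostat -x 2 5
If %util sits near 100 while throughput stays low, you have found your bottleneck before touching a single line of config.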
Strategy 1: MySQL 5.7 GTID Replication
If you are still using binlog-offset replication, stop. MySQL 5.7 (GA since October 2015) refined Global Transaction Identifiers (GTIDs), which makes failover safer and migration seamless. The goal: set up the new CoolVDS instance as a slave, let it catch up, and then promote it to master.
1. Configure the Master (Current Host)
You need these settings in your /etc/mysql/my.cnf. Changing them requires a restart, so plan to take that hit once.
[mysqld]
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
gtid_mode = ON
enforce_gtid_consistency = true
innodb_flush_log_at_trx_commit = 1
sync_binlog = 1
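The new host gets a near-identical block; only server-id must differ, and keeping the instance read-only until cutover prevents stray writes. A minimal sketch, assuming the same paths as the master:
[mysqld]
server-id = 2
log_bin = /var/log/mysql/mysql-bin.log
gtid_mode = ON
enforce_gtid_consistency = true
read_only = ON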
2. The Non-Blocking Snapshot
Do not use mysqldump. Use Percona XtraBackup. It copies InnoDB data without locking the database. It is the only professional way to snapshot a live system.
# Install Percona XtraBackup on Ubuntu 16.04
wget https://repo.percona.com/apt/percona-release_0.1-4.$(lsb_release -sc)_all.deb
dpkg -i percona-release_0.1-4.$(lsb_release -sc)_all.deb
apt-get update && apt-get install percona-xtrabackup-24
# Create the backup
xtrabackup --backup --target-dir=/data/backups/ --datadir=/var/lib/mysql/
Once the backup is transferred to your new CoolVDS host using rsync (the -avz flags; -z compresses the stream in transit), prepare it and start the slave.
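Those last steps look roughly like this; a sketch assuming the backup landed in /data/backups/ on the new host and a replication user (repl, a placeholder) already exists on the master:
# Apply the redo log so the data files are consistent
xtrabackup --prepare --target-dir=/data/backups/
# Move files into an empty datadir and fix ownership
xtrabackup --copy-back --target-dir=/data/backups/
chown -R mysql:mysql /var/lib/mysql
service mysql start
# Seed the GTID position XtraBackup recorded, then point at the master
cat /data/backups/xtrabackup_binlog_info   # note the GTID set
mysql -e "RESET MASTER; SET GLOBAL gtid_purged='<gtid-set-from-that-file>';"
mysql -e "CHANGE MASTER TO MASTER_HOST='old-server-ip', MASTER_USER='repl', MASTER_PASSWORD='...', MASTER_AUTO_POSITION=1; START SLAVE;"
Check SHOW SLAVE STATUS\G afterwards; both the IO and SQL threads should report Yes.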
Strategy 2: PostgreSQL 9.6 Replication
PostgreSQL 9.6 was released this September (2016), and it is a massive leap forward. Where we used to rely on complex bolt-on tools like Slony or Bucardo, native logical decoding now covers more ground. One caveat: streaming replication only works between identical major versions. For a migration between two 9.6 instances it is the most robust path; jumping from 9.5 to 9.6 still means a dump and restore.
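For the 9.6-to-9.6 case, the standby is seeded with pg_basebackup. A minimal sketch, assuming a replication role (replica, hypothetical) and a pg_hba.conf entry permitting it; run as the postgres user on the new host:
# Clone the master's data directory over the wire, streaming WAL as we go
pg_basebackup -h old-server-ip -U replica -D /var/lib/postgresql/9.6/main -X stream -P
# Tell 9.6 to follow the master
echo "standby_mode = 'on'" > /var/lib/postgresql/9.6/main/recovery.conf
echo "primary_conninfo = 'host=old-server-ip user=replica'" >> /var/lib/postgresql/9.6/main/recovery.conf
service postgresql start
If you are crossing major versions, skip the standby and optimize the dump-and-restore path instead.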
Optimizing for the Import:
When you are restoring the dump on the new server, you must temporarily sacrifice safety for speed. Tweak your postgresql.conf on the target machine only for the duration of the import:
# WARNING: Revert these after import!
fsync = off
synchronous_commit = off
full_page_writes = off
maintenance_work_mem = 2GB
checkpoint_timeout = 30min
max_wal_size = 4GB
This tells Postgres: "Just write the data to memory and flush to disk later. If we crash, we start over." This can reduce restore time by 70%.
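With those knobs loosened, parallelize the copy itself. pg_dump's directory format permits parallel jobs on both sides; the paths, database name, and four-job count here are illustrative:
# On the old host: parallel dump in directory format (-Fd is required for -j)
pg_dump -Fd -j 4 -f /data/dump mydb
# rsync /data/dump across, then on the new host:
createdb mydb
pg_restore -j 4 -d mydb /data/dump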
The Network Factor: NIX and Latency
If your current host is in Germany or the US, and you are moving to a VPS in Norway, you are battling physics. Latency matters during the sync phase. A ping from Frankfurt to Oslo is roughly 25-30ms. From New York, it's 90ms+.
| Route | Avg Latency | Impact on Sync |
|---|---|---|
| Oslo (NIX) <-> Oslo | < 2ms | Instant |
| Amsterdam <-> Oslo | 18ms | Manageable |
| US East <-> Oslo | 95ms | High Lag |
When you use CoolVDS, you are sitting directly on the Norwegian fiber backbone. This low latency is crucial when the application switches over. If your app server is in Oslo and your DB is in Frankfurt, every query adds a 30ms penalty. Multiply that by 50 queries per page load, and your site feels broken.
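Do not take the table on faith; measure from the actual app server before committing to a cutover plan (hostnames are placeholders):
# Round-trip time over 20 samples
ping -c 20 new-db-host
# Per-hop report, handy for spotting a bad peering point
mtr --report -c 20 new-db-host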
Security: The SSH Tunnel Trick
Never expose port 3306 or 5432 to the public internet. Even with a firewall. Just don't. When replicating across networks, tunnel your traffic. It adds encryption overhead, but with modern CPUs (like the Xeons CoolVDS deploys), the impact is negligible compared to the security gain.
# On the new server (The Slave)
ssh -f -N -L 3307:127.0.0.1:3306 user@old-server-ip -i /root/.ssh/id_rsa
# Now point your slave configuration to 127.0.0.1 port 3307
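One caveat: a multi-hour initial sync can outlive a flaky SSH session. autossh (apt-get install autossh) re-establishes the tunnel automatically; a sketch with SSH keepalives doing the monitoring:
# -M 0 disables autossh's monitor port in favor of SSH's own keepalives
autossh -M 0 -f -N -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
  -L 3307:127.0.0.1:3306 user@old-server-ip -i /root/.ssh/id_rsa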
The Final Cutover
1. Lower TTL: 24 hours before migration, lower your DNS TTL to 300 seconds.
2. ReadOnly: Set the old database to read-only mode to prevent data drift. Plain read_only does not stop accounts with SUPER; 5.7's super_read_only closes that hole.
SET GLOBAL read_only = ON;
SET GLOBAL super_read_only = ON;
3. Wait for Catch-up: Watch `Seconds_Behind_Master` drop to 0 (a polling one-liner follows this list).
4. Switch: Update your application config to point to the new CoolVDS IP.
5. Verify: Check the logs. Check the data.
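The catch-up check from step 3, as something you can leave running in a terminal. A minimal sketch, assuming credentials live in ~/.my.cnf on the new host:
# Refresh replication status every 5 seconds; cut over when lag hits 0
watch -n 5 'mysql -e "SHOW SLAVE STATUS\G" | grep -E "Seconds_Behind_Master|Executed_Gtid_Set"'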
Data migration is stressful. But with the right strategy and hardware that doesn't choke on I/O, it is manageable. Don't let slow I/O kill your SEO or your uptime.
Next Step: Spin up a CoolVDS instance today and run `sysbench` against it and against your current provider. The NVMe difference will speak for itself.
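A starting point for that benchmark, using sysbench's fileio mode. The 8G working set is an assumption; size it larger than the box's RAM so you measure the disk, not the page cache:
# Prepare the test files, hammer them with random read/write for two minutes, clean up
sysbench --test=fileio --file-total-size=8G prepare
sysbench --test=fileio --file-total-size=8G --file-test-mode=rndrw --max-time=120 --max-requests=0 run
sysbench --test=fileio --file-total-size=8G cleanup
Run the same three commands on both boxes and compare the requests-per-second line.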