The Myth of the "Maintenance Window"
If you are still asking clients for an eight-hour maintenance window to migrate a database, you are doing it wrong. In 2024, uptime isn't a metric; it's the baseline for employment. I've spent too many Friday nights staring at a blinking cursor, watching a mysqldump restore crawl at 2 MB/s because the target disk's IOPS were choked, to watch you make the same mistakes.
The reality in the Nordic market is stricter than elsewhere. We aren't just dealing with technical uptime; we are dealing with Datatilsynet (the Norwegian Data Protection Authority). Data sovereignty is a legal requirement for many of us. Moving data from a generic Frankfurt region back to a sovereign VPS in Norway isn't just about latency (though the 15 ms drop is nice); it's about compliance.
Here is the strategy I use to move heavy databases (50GB to 1TB) with near-zero downtime, specifically targeting high-performance KVM environments like CoolVDS.
Phase 1: The Architecture of the Move
Forget dump and restore for anything critical. That method guarantees downtime proportional to your data size. The only professional way to move a live database is Replication.
The concept is simple: Make the new server a slave (replica) of the old server. Let them sync. Once the lag is zero, kill the old connection and promote the new server. Simple in theory, terrifying in practice if you miss a flag.
Pro Tip: Before you even start, check the network latency between your current host and the new target. If you are migrating to CoolVDS in Oslo, run mtr from your source. High latency = high replication lag.
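What "high latency" looks like in practice: below is a sample final-hop line from an mtr report (the IP and the numbers are made up for illustration) and how to pull the average RTT out of it when you script the check.

```shell
# Sample final-hop line from `mtr --report --report-cycles 20 <target>`
# Columns: hop, host, Loss%, Snt, Last, Avg, Best, Wrst, StDev
report='10.|-- 203.0.113.10   0.0%    20   14.2  14.6  13.9  16.1   0.5'
# Field 6 is the average RTT in ms; sustained values above ~20 ms on a
# busy write load tend to show up directly as replication lag
avg=$(echo "$report" | awk '{print $6}')
echo "average RTT: ${avg} ms"
```

Single-digit averages to Oslo mean the replica can realistically stay at zero lag; anything worse, plan for a longer catch-up phase.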
Hardware Matters: The NVMe Factor
I once tried to migrate a high-churn Magento database to a budget VPS provider. The CPU was fine, but the storage was spinning rust disguised as SSD. The replication thread couldn't keep up with the binary logs because the disk write speed capped out. The migration failed. Badly.
This is why I stick to providers explicitly offering NVMe. On a standard CoolVDS instance, you are getting raw I/O throughput that can actually ingest the replication stream faster than the source can generate it. That catch-up phase is critical.
Phase 2: Configuring the Source (The Master)
Let’s assume you are running MySQL 8.0 or Percona Server. You need to enable binary logging and ensure GTID (Global Transaction ID) mode is on. If you are still running MyISAM tables in 2024, stop reading and go fix your schema first.
Edit your /etc/mysql/my.cnf (or equivalent) on the SOURCE server:
[mysqld]
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
binlog_format = ROW
binlog_expire_logs_seconds = 604800  # 7 days; expire_logs_days is deprecated in 8.0
# GTID ensures consistency and easier failover
gtid_mode = ON
enforce_gtid_consistency = ON
# Performance tuning for replication
innodb_flush_log_at_trx_commit = 1
sync_binlog = 1
A restart is required if these weren't already set. If you can't restart, you are stuck with the dump method. Sorry.
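One step the config above doesn't cover: the replica needs an account to connect with. A minimal sketch, run on the source ('replication_user', the password, and 'replica_ip' are placeholders matching the CHANGE MASTER TO statement used later):

```sql
-- Create the replication account, restricted to the replica's IP
CREATE USER 'replication_user'@'replica_ip' IDENTIFIED BY 'complex_password';
GRANT REPLICATION SLAVE ON *.* TO 'replication_user'@'replica_ip';
-- Force TLS so the binlog stream never crosses the wire in cleartext
ALTER USER 'replication_user'@'replica_ip' REQUIRE SSL;
```

The REQUIRE SSL clause matters later, in the security section: it makes plaintext replication impossible even if someone fat-fingers the client config.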
Phase 3: The Snapshot (No Locking)
We need a base dataset to start replication. Do not use mysqldump; it locks tables or causes massive I/O spikes that degrade live performance. Use Percona XtraBackup. It copies InnoDB pages hot, without locking the database.
Run this on the source:
xtrabackup --backup --target-dir=/data/backups/full --user=root --password=YOURPASS --parallel=4
This creates a consistent snapshot while the DB is still taking writes. Now, stream it to your new CoolVDS instance using rsync or netcat. I prefer rsync for the resume capability.
rsync -avzP /data/backups/full root@new-server-ip:/data/backups/
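Before preparing the backup on the target, confirm the bytes arrived intact. A sketch, simulated here with a scratch file; in practice run the same hash over the backup directory on both machines and compare:

```shell
# Stand-in for a transferred backup file
printf 'ibdata1-test-bytes' > /tmp/sample_backup_file
# Hash on the source...
src_sum=$(sha256sum /tmp/sample_backup_file | awk '{print $1}')
# ...and again on the target after the rsync; they must match
dst_sum=$(sha256sum /tmp/sample_backup_file | awk '{print $1}')
[ "$src_sum" = "$dst_sum" ] && echo "checksums match"
```

Thirty seconds of hashing beats discovering a truncated ibdata file halfway through the prepare step.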
Phase 4: Restoring and Linking
On your new CoolVDS instance (let's call it the Replica), prepare the backup and move it into place.
# Prepare the backup (apply transaction logs)
xtrabackup --prepare --target-dir=/data/backups/full
# Stop MySQL, clear old data, move new data
systemctl stop mysql
rm -rf /var/lib/mysql/*
xtrabackup --move-back --target-dir=/data/backups/full
# Fix permissions
chown -R mysql:mysql /var/lib/mysql
systemctl start mysql
Now, grab the replication coordinates from the xtrabackup_binlog_info file in the backup directory. With GTID mode on, the third column is the GTID set contained in the backup. It will look like this:
mysql-bin.000003 482912 0002-3321-1234-abcd
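If you script the restore, that third column can be pulled out automatically. A sketch using a stand-in file with the sample values from above:

```shell
# Recreate a sample xtrabackup_binlog_info (tab-separated: file, position, GTID set)
printf 'mysql-bin.000003\t482912\t0002-3321-1234-abcd\n' > /tmp/xtrabackup_binlog_info
# The GTID set is what the replica needs for SET GLOBAL gtid_purged
gtid_set=$(awk '{print $3}' /tmp/xtrabackup_binlog_info)
echo "SET GLOBAL gtid_purged='${gtid_set}';"
```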
Configure replication on the new CoolVDS server. With GTID auto-positioning you don't feed in the binlog file and offset directly; instead, tell the replica which transactions the backup already contains, then point it at the source:
RESET MASTER;
SET GLOBAL gtid_purged = '0002-3321-1234-abcd'; -- the GTID set from xtrabackup_binlog_info
CHANGE MASTER TO
MASTER_HOST='source_server_ip',
MASTER_USER='replication_user',
MASTER_PASSWORD='complex_password',
MASTER_SSL=1,
MASTER_AUTO_POSITION = 1; -- GTID handles the positioning
START SLAVE;
Run SHOW SLAVE STATUS\G and watch Seconds_Behind_Master. It should start high and steadily decrease. If it shows NULL, replication isn't running at all; check Last_IO_Error and your grants. If it decreases, grab a coffee. You are syncing.
Phase 5: The Cutover (The Zero-Downtime Trick)
This is where the "Battle-Hardened" part comes in. The scary part. Switching traffic.
- Lower DNS TTL: Set your A record TTL to 60 seconds 24 hours before the move.
- Block Writes on Source: This ensures data consistency. This is the only moment of "downtime" (usually 2-5 seconds).
-- Run on OLD Source
SET GLOBAL super_read_only = ON;
- Verify Sync: Ensure the replica has caught up (Seconds_Behind_Master = 0).
- Promote Replica: Stop the slave thread on the new server.
-- Run on NEW CoolVDS Server
STOP SLAVE;
RESET SLAVE ALL;
-- Ensure it is writable
SET GLOBAL read_only = OFF;
- Switch App Config: Point your application connection strings to the new IP. If you use a floating IP or Load Balancer, update that instead.
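The "Verify Sync" step can be made deterministic instead of eyeballing Seconds_Behind_Master. A sketch using GTIDs (the 30 is a timeout in seconds; the GTID set is whatever the source reports):

```sql
-- On the OLD source, after SET GLOBAL super_read_only = ON:
SELECT @@GLOBAL.gtid_executed;

-- On the NEW server, paste that set in; returns 0 once every
-- transaction has been applied, or 1 on the 30-second timeout
SELECT WAIT_FOR_EXECUTED_GTID_SET('<gtid set from the source>', 30);
```

A 0 here is a mathematical guarantee that the replica holds every committed transaction, which is a much better basis for a cutover than a lag counter that happens to read zero.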
PostgreSQL Nuances
For the Postgres crowd (which is most of the serious dev teams in Oslo these days), the logic is identical, but the tools differ.
- Use pg_basebackup for the initial stream.
- Edit postgresql.conf: set wal_level = replica and max_wal_senders = 10.
- Edit pg_hba.conf to allow the replication user.
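A minimal sketch of those two config edits on the source ('repl_user' and the replica IP are placeholders):

```
# postgresql.conf
wal_level = replica
max_wal_senders = 10

# pg_hba.conf -- replication connections from the replica, TLS only
hostssl  replication  repl_user  203.0.113.20/32  scram-sha-256
```

On the replica side, pg_basebackup -h source_ip -U repl_user -D /var/lib/postgresql/data -R -X stream pulls the base copy; the -R flag writes standby.signal and the primary connection settings for you, so the new server comes up as a streaming replica on first start.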
A specific gotcha with Postgres on virtualized hardware: Ensure your hosting provider exposes the correct CPU flags. CoolVDS passes through the host CPU topology correctly, which matters for Postgres query planner efficiency.
Security and Compliance Notes
Since we are moving data, we must talk about GDPR. If you are migrating strictly within Norway (e.g., from an on-prem server to a managed VPS), you are in the safe zone. However, ensure that your transmission is encrypted.
Always wrap your replication stream in SSL. Both MySQL and Postgres support this natively. Do not replicate over the public internet without SSL or a VPN tunnel. It’s negligent.
# Example: Creating a tunnel if SSL config is too painful
# Run this on the REPLICA, then point MASTER_HOST at 127.0.0.1 port 3307
ssh -N -L 3307:127.0.0.1:3306 user@source_ip
Why I Use CoolVDS for This
I don't like surprises. When I run iotop during the restore phase, I want to see the disk writing as fast as the network can deliver packets. I need KVM isolation so that a noisy neighbor doesn't steal my CPU cycles during the checksum verification.
Migrations are stressful enough without fighting the infrastructure. You need a platform that is invisible—it just works, provides the IOPS, and stays out of your way. That’s what professional hosting looks like.
Ready to get your database onto hardware that respects your engineering efforts? Deploy a high-performance NVMe instance on CoolVDS and stop apologizing for downtime.