Database Migration Survival Guide: Zero-Downtime Strategies & The Schrems II Reality
There are two types of sysadmins: those who check their backups, and those who haven't lost data yet. Database migration is the surgery of the infrastructure world. You have a patient on the table (your application), the heart is beating (live traffic), and you need to transplant the organ without stopping the pulse.
I have seen migrations go sideways in spectacular fashion. I once watched a junior dev trigger a full table lock on a 500GB production table during peak hours because they forgot a single flag in mysqldump. The latency spike didn't just slow down the site; it crashed the load balancers.
Today, July 16, 2020, the stakes just got higher. The CJEU has just invalidated the EU-US Privacy Shield (the "Schrems II" ruling). If you are storing Norwegian user data on US-owned clouds, your compliance strategy just evaporated. Migrating data back to European soil, specifically to jurisdictionally safe zones like Norway, is no longer just about latency. It is about survival.
Let's cut through the marketing fluff. Here is how you move your data to a high-performance VPS Norway environment like CoolVDS without dropping connections.
The Pre-Flight Check: Latency & Throughput
Before you even touch ssh, you need to understand the physical constraints. Moving 200GB of data over a standard 100Mbps public link means roughly four and a half hours of raw transfer time even at full saturation, and real-world throughput is usually worse. That is a recipe for runaway replication lag. You need bandwidth, and you need disk speed.
Most budget VPS providers throttle your disk I/O. When you start the import process, your "IOPS" drop to near zero, and the import takes three days. This is why we use NVMe storage at CoolVDS. Spinning rust (HDD) or SATA SSDs choke under the write-heavy load of a restore operation.
Pro Tip: Always run a quick fio benchmark on the destination server before starting. If random write speeds are below 10k IOPS, abort. You need hardware that can digest the data stream faster than the network can send it.
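A minimal 4k random-write test with fio looks something like this (the job name and file size are just placeholders; run it against the volume that will actually hold the database):

# 60-second 4k random write benchmark on the destination volume
fio --name=migration-check --ioengine=libaio --direct=1 \
    --rw=randwrite --bs=4k --size=1G --numjobs=4 --iodepth=32 \
    --runtime=60 --time_based --group_reporting

Check the aggregate IOPS figure in the summary; anything well below the 10k mark means the import, not the network, will be your bottleneck.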
Strategy 1: The "Master-Slave" Promotion (Zero Downtime)
For any serious application, the "Dump and Restore" method (shutting down the app, dumping data, moving it, restoring it) is unacceptable. Downtime costs money. The only professional path is setting up a replication stream.
Step 1: Prepare the Source (Master)
This guide assumes MySQL 5.7 or 8.0 (the standards in 2020). You need binary logging enabled on the source. Check your my.cnf:
[mysqld]
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
binlog_format = ROW
expire_logs_days = 7
# Critical for data integrity during replication
sync_binlog = 1
innodb_flush_log_at_trx_commit = 1
Changing these requires a restart, so ideally they were already set. Next, create a dedicated replication user for the migration. Do not use root.
CREATE USER 'replicator'@'%' IDENTIFIED BY 'StrongPassword_2020!';
GRANT REPLICATION SLAVE ON *.* TO 'replicator'@'%';
FLUSH PRIVILEGES;
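One optional hardening step: this replication stream will cross the public internet, so if the source server already has TLS certificates configured for MySQL, you can force the account to use encryption (an extra step, not strictly required for the migration itself):

# Require TLS for the replication account (source must have SSL configured)
mysql -u root -p -e "ALTER USER 'replicator'@'%' REQUIRE SSL;"

If you do this, remember to add MASTER_SSL=1 to the CHANGE MASTER statement in Step 4.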
Step 2: The Snapshot
Use Percona XtraBackup or mysqldump with the correct flags to get a consistent snapshot without locking tables for hours.
# The "--single-transaction" flag is non-negotiable for InnoDB
# "--master-data=2" records the binary log position in the output file
mysqldump -u root -p \
--all-databases \
--single-transaction \
--quick \
--master-data=2 \
--triggers \
--routines \
--events | gzip > /tmp/full_dump.sql.gz
Step 3: Restore on Destination (CoolVDS)
Transfer the file securely. We recommend rsync over SSH to handle potential network hiccups.
rsync -avzP /tmp/full_dump.sql.gz user@coolvds-target-ip:/tmp/
On the CoolVDS instance (your new Slave), import the data. Because our infrastructure uses KVM with direct NVMe pass-through, this import will likely be 3x-5x faster than on standard cloud instances.
zcat /tmp/full_dump.sql.gz | mysql -u root -p
Step 4: Enable Replication
Grab the log file and position from the head of your dump file:
zcat /tmp/full_dump.sql.gz | head -n 50 | grep "CHANGE MASTER"
Configure replication on the new slave:
CHANGE MASTER TO
MASTER_HOST='source_server_ip',
MASTER_USER='replicator',
MASTER_PASSWORD='StrongPassword_2020!',
MASTER_LOG_FILE='mysql-bin.000001',
MASTER_LOG_POS= 107;
START SLAVE;
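Before moving on, confirm that both replication threads are healthy and the slave is catching up. A quick check from the shell (same root credentials assumed):

# Both threads should say "Yes" and Seconds_Behind_Master should trend toward 0
mysql -u root -p -e "SHOW SLAVE STATUS\G" | \
  grep -E "Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master|Last_Error"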
The Postgres Path (WAL Streaming)
If you are on PostgreSQL (10, 11, or 12), the logic is similar but the tools differ. You rely on Write-Ahead Logging (WAL). In your postgresql.conf, ensure wal_level = replica.
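The source also needs enough sender slots and retained WAL to survive the initial copy. A conservative starting point in postgresql.conf (the retention value is an assumption; size it to your write volume):

wal_level = replica
max_wal_senders = 10
wal_keep_segments = 256   # roughly 4GB of retained WAL at the default 16MB segment size

Note that wal_level and max_wal_senders require a restart, not just a reload.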
Modify pg_hba.conf to allow the CoolVDS IP to connect:
# TYPE DATABASE USER ADDRESS METHOD
host replication rep_user 185.x.x.x/32 md5
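A pg_hba.conf change only needs a reload, not a restart:

# Pick up the new pg_hba.conf rule
sudo -u postgres psql -c "SELECT pg_reload_conf();"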
Then, use pg_basebackup directly from the CoolVDS destination server. This streams the data and sets up the standby configuration in one go, saving massive amounts of time compared to dump/restore.
pg_basebackup -h source_ip -D /var/lib/postgresql/12/main -U rep_user -P -v -R -X stream
On PostgreSQL 12, the -R flag automatically creates the standby.signal file and writes the primary connection settings to postgresql.auto.conf (on 10 and 11 it generates a recovery.conf instead). Once the catch-up is complete, you simply promote the CoolVDS instance to primary.
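Checking the replay lag and promoting is a one-liner each (pg_promote() assumes version 12; on 10 or 11 run pg_ctl promote against the data directory instead):

# How far behind is the standby? Run this on the CoolVDS instance
sudo -u postgres psql -c "SELECT now() - pg_last_xact_replay_timestamp() AS replay_lag;"

# Promote the standby to primary (PostgreSQL 12)
sudo -u postgres psql -c "SELECT pg_promote();"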
The Network Factor: Latency to Oslo
Replication is sensitive to jitter. If your Round Trip Time (RTT) fluctuates wildly, you will see replication lag. This is where physical geography matters. If your primary market is Norway or Northern Europe, hosting your database in Frankfurt or London introduces unnecessary milliseconds.
CoolVDS infrastructure is peered directly at NIX (Norwegian Internet Exchange). We see latency as low as 1-2ms within Oslo. This tight latency ensures that when you do the final switchover, the data sync is practically instantaneous.
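Measure it rather than trusting a map. A 50-probe run from the source datacenter to the destination shows both the average RTT and the jitter that actually hurts replication (substitute your real target IP):

# Report mode: watch the Avg and StDev columns
mtr --report --report-cycles 50 coolvds-target-ip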
Securing the Data (GDPR & Schrems II)
We need to talk about compliance. With the Privacy Shield gone as of today, relying on US-based cloud giants is a legal gray area that is quickly turning black. Datatilsynet (The Norwegian Data Protection Authority) is known for strict enforcement.
By migrating your database to CoolVDS, you are moving data to servers physically located in Norway, owned by a Norwegian entity, operating under Norwegian law. You eliminate the third-country transfer risk entirely. For a CTO, this isn't just a technical win; it's a risk management victory.
The Final Cutover
Once replication lag is zero, walk through this sequence (a command-level sketch follows the list):
- Put your application in "Maintenance Mode" (or Read-Only).
- Ensure the Master and Slave are perfectly synced (SHOW SLAVE STATUS \G shows Seconds_Behind_Master: 0).
- Stop the Slave.
- Promote the Slave to Master.
- Point your application config to the new CoolVDS IP.
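In MySQL terms, the whole switchover boils down to something like this (a sketch assuming the same credentials as earlier; adapt hosts and users to your setup):

# 1. On the old master: block writes so the slave can fully catch up
mysql -u root -p -e "SET GLOBAL super_read_only = ON;"

# 2. On the CoolVDS slave: wait until this reads 0
mysql -u root -p -e "SHOW SLAVE STATUS\G" | grep Seconds_Behind_Master

# 3. Stop replication and clear the slave configuration
mysql -u root -p -e "STOP SLAVE; RESET SLAVE ALL;"

# 4. Make the new primary writable, then repoint the application
mysql -u root -p -e "SET GLOBAL read_only = OFF; SET GLOBAL super_read_only = OFF;"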
Total downtime? Usually less than 60 seconds.
Database migration does not have to be a gamble. It requires precise execution, the right tools, and hardware that doesn't fight against you. Don't let slow I/O or legal uncertainty kill your project.
Ready to secure your data sovereignty? Deploy a high-performance NVMe instance on CoolVDS today and experience raw power compliant with Norwegian standards.