Database Migration Strategies: Moving Data Home to Norway Without Breaking Prod
There are two types of sysadmins: those who have lost data during a migration, and those who are lying. Moving a stateful, write-heavy database is the single most stressful task in our line of work. It’s not just about moving bytes; it’s about maintaining consistency while thousands of transactions hit your disk every second.
Since the Schrems II ruling, I’ve seen a massive influx of CTOs scrambling to pull user data out of US-controlled clouds and onto sovereign Norwegian soil. The legal pressure from Datatilsynet is real, but physics is the bigger adversary here. Moving 500GB of transactional data over a WAN link while keeping latency low enough for your application to stay responsive? That’s where the real work begins.
I’m going to skip the theoretical fluff. We aren't discussing "digital transformation" here. We are discussing rsync, replication logs, and how to switch over DNS without users noticing a thing.
The Hardware Bottleneck: Why Your Target VM Matters
Before you type a single command, look at your destination. The number one reason migrations fail is that the target server cannot ingest data fast enough to catch up with the master's write rate. If your source is a bare metal beast and your destination is a cheap, oversold VPS with spinning rust, your replication lag will effectively be infinite.
Pro Tip: Always verify the I/O capability of your target before starting. At CoolVDS, we enforce KVM virtualization on NVMe storage specifically to prevent "IO wait" from killing migration catch-up phases. You need high random write speeds (IOPS), not just sequential throughput.
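A quick way to sanity-check that, assuming fio is installed (apt install fio), is a short random-write benchmark. The job name and sizes below are just placeholders; run it from a scratch directory on both source and target and compare the write IOPS:
# Creates ~4GB of test files in the current directory
fio --name=randwrite-check --ioengine=libaio --direct=1 --rw=randwrite --bs=4k \
    --size=1G --numjobs=4 --iodepth=32 --runtime=60 --time_based --group_reporting
# If the target's write IOPS are lower than the source's, fix that before migrating.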
Strategy 1: The "Dump and Restore" (Acceptable Downtime)
If you have a maintenance window—say, Sunday at 03:00 CET—and your dataset is under 50GB, don't overengineer it. The complexity of replication adds risk. A simple pipe over SSH is often the most robust method.
However, simple doesn't mean sloppy. Do not write to a file and then SCP it. That wastes I/O. Stream it.
For PostgreSQL 14 (Current Stable):
# On the Destination Server (Pull method is usually safer for firewall rules)
pg_dump -h source_db_ip -U postgres -j 4 -Fd -f /tmp/dump_dir my_database
# -j 4: Uses 4 cores to dump in parallel (Directory format required)
# -Fd: Directory format, necessary for parallel dumps
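The directory-format dump then needs a matching parallel restore on the target. A minimal version, assuming the target database doesn't exist yet:
# Still on the Destination Server, after the dump completes
createdb -U postgres my_database
pg_restore -U postgres -d my_database -j 4 /tmp/dump_dir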
If you are piping directly to restore to save space:
ssh user@source_ip "pg_dump -U postgres -Fc my_database" | pg_restore -U postgres -d my_database
Critical Check: Ensure your maintenance_work_mem is cranked up on the target server during the restore, or index creation will take forever.
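One way to do that without a restart, assuming superuser access; the 2GB value is a placeholder, size it to the target's RAM:
psql -U postgres -c "ALTER SYSTEM SET maintenance_work_mem = '2GB';"
psql -U postgres -c "SELECT pg_reload_conf();"
# Put it back once the restore is done:
psql -U postgres -c "ALTER SYSTEM RESET maintenance_work_mem;"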
Strategy 2: The Master-Slave Replication (Zero Downtime)
For critical applications where downtime is money, you need to set up the new CoolVDS instance as a replica (slave) of your current provider, let it sync, and then promote it.
Step 1: The Initial Seed
You cannot just point an empty replica at a 500GB master and switch on replication; the binary logs needed to replay its full history are long gone, and applying them would take weeks anyway. You need a base backup. For MySQL/MariaDB, Percona XtraBackup is the industry standard because it doesn't lock your database during the backup.
# On Source
xtrabackup --backup --target-dir=/data/backups/ --parallel=4
# Prepare the backup (apply logs to make it consistent)
xtrabackup --prepare --target-dir=/data/backups/
Step 2: Transfer and Configure
Rsync the data to your new host and restore it into the MySQL data directory (a sketch follows). Once it's in place, configure the server properly. The biggest mistake I see is leaving the default network settings: when replicating over the internet (e.g., Frankfurt to Oslo), latency fluctuates, so you need to tune the MySQL configuration to handle WAN latency.
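A minimal transfer-and-restore sequence; the user, IP, and paths are illustrative, and the copy-back expects an empty data directory:
# On Source: ship the prepared backup to the new node
rsync -avz --progress /data/backups/ dbadmin@new_coolvds_ip:/data/backups/
# On the new node: restore into an empty datadir, then fix ownership
systemctl stop mysql
mv /var/lib/mysql /var/lib/mysql.old && mkdir /var/lib/mysql
xtrabackup --copy-back --target-dir=/data/backups/
chown -R mysql:mysql /var/lib/mysql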
Edit your /etc/my.cnf on the new CoolVDS node:
[mysqld]
server-id = 2
# Essential for data integrity
binlog_format = ROW
# Crash safety
innodb_flush_log_at_trx_commit = 1
sync_binlog = 1
# Replication performance over WAN
slave_compressed_protocol = 1
slave_net_timeout = 60
relay_log_recovery = ON
# Buffer Pool should be 70-80% of RAM on a dedicated DB node
innodb_buffer_pool_size = 12G
Step 3: Start Replication
Point the new node to the old one. Use the binary log coordinates found in the xtrabackup_binlog_info file.
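That file sits in the backup directory you prepared earlier; it holds the binlog file name and position to plug in below:
cat /data/backups/xtrabackup_binlog_info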
CHANGE MASTER TO
MASTER_HOST='192.0.2.10',
MASTER_USER='repl_user',
MASTER_PASSWORD='SecurePassword!2022',
MASTER_LOG_FILE='mysql-bin.000452',
MASTER_LOG_POS=894322;
START SLAVE;
Monitor the Seconds_Behind_Master. It should drop to 0. If it stays high, your target disk I/O is too slow (again, why we insist on NVMe at CoolVDS) or your network path is congested.
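I check it with a one-liner on the replica (this assumes local socket auth or credentials in ~/.my.cnf):
mysql -e "SHOW SLAVE STATUS\G" | grep -E "Seconds_Behind_Master|Slave_IO_Running|Slave_SQL_Running|Last_SQL_Error"
# Both threads should say Yes, and Seconds_Behind_Master should trend toward 0.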
Security: The Tunnel Necessity
Never expose port 3306 or 5432 directly to the public internet, even for a migration. If you don't have a Site-to-Site VPN, use an SSH tunnel. It’s quick and encrypted.
# Establish the tunnel from the new server
ssh -N -L 3307:127.0.0.1:3306 user@source-server-ip -f
Now, point your replication to 127.0.0.1:3307. It’s simple, effective, and compliant with basic security standards.
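If you go the tunnel route, the CHANGE MASTER statement from Step 3 only changes in host and port. A sketch reusing the same example coordinates and credentials:
mysql <<'SQL'
CHANGE MASTER TO
  MASTER_HOST='127.0.0.1',
  MASTER_PORT=3307,
  MASTER_USER='repl_user',
  MASTER_PASSWORD='SecurePassword!2022',
  MASTER_LOG_FILE='mysql-bin.000452',
  MASTER_LOG_POS=894322;
START SLAVE;
SQL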
The Cutover
Once your replication lag is zero and you've verified data integrity (tools like pt-table-checksum are great here), it's time to switch.
- Stop the writes: Put your app in read-only mode or stop the web server.
- Verify sync: Ensure the slave has processed all logs.
- Promote the slave: on the new node, run STOP SLAVE; RESET SLAVE ALL; so it forgets the old master, and make sure it accepts writes (a sketch follows below).
- Switch DNS/IP: Update your application config to point to the CoolVDS IP.
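A minimal promotion sketch, assuming local socket access on the new node and that read_only was enabled while it was a replica:
# Confirm the replica has applied everything from the old master
mysql -e "SHOW SLAVE STATUS\G" | grep Seconds_Behind_Master   # expect 0
# Promote: drop the replication config and accept writes
mysql -e "STOP SLAVE; RESET SLAVE ALL; SET GLOBAL read_only = OFF;"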
With a low TTL on your DNS records (set this to 60 seconds a day before the migration), this entire swap takes less than 2 minutes.
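You can confirm the low TTL actually propagated before the cutover; the hostname here is a placeholder:
dig +noall +answer db.example.com A
# The second column is the remaining TTL in seconds; it should be 60 or less.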
Local Context: Why Hosting in Norway Matters
Beyond the technicals, moving your database to a Norwegian datacenter like those used by CoolVDS keeps the data under Norwegian jurisdiction and privacy law. And if your customer base is in Oslo, Bergen, or Trondheim, the latency drop from leaving US or Central European servers is noticeable: roughly 35ms down to 2ms per round trip. When a page fires off dozens of sequential queries, that difference adds up to a visibly snappier load.
Final Thoughts
Migrations are risky, but they are manageable with the right architecture. Don't let hardware limit your strategy. If you need a sandbox to test your replication scripts, spin up a high-performance instance with us.
Need raw I/O power for your database cluster? Deploy a CoolVDS NVMe instance in Oslo today and stop waiting on IOPS.