Zero-Downtime Database Migration: A Survival Guide for Norwegian Systems
Let’s be honest: moving a production database is the surgical equivalent of a heart transplant performed while the patient is running a marathon. One wrong `DROP TABLE`, one miscalculated buffer pool setting, and you aren't just facing downtime—you're facing a legal nightmare with Datatilsynet.
I've managed infrastructure across the Nordics for over a decade. I've seen "seamless" migrations turn into 14-hour outages because someone underestimated the I/O tax of a restore operation. If you are migrating a dataset larger than 50GB within or into Norway, standard `dump` and `restore` commands are not a strategy; they are a liability.
This guide covers how to execute a database migration in 2025 with minimal risk, focusing on the specific constraints of the Norwegian network topology and high-compliance environments.
The Bottleneck isn't Bandwidth, It's IOPS
Most developers assume the network will be the choke point. In Norway, thanks to the efficiency of the NIX (Norwegian Internet Exchange), bandwidth is rarely the issue. If you are moving data from a legacy provider in Oslo to a modern VPS Norway provider, your bottleneck is almost always disk I/O.
When you import a large SQL dump, your disk is hammered with write operations. If your target host relies on shared spinning rust (HDDs) or throttled SSDs, your import speed drops to near zero.
Pro Tip: Before you even start rsync, benchmark the target disk. If you aren't getting consistent NVMe speeds, abort. On CoolVDS instances, we expose the raw NVMe interface to KVM, bypassing the noisy neighbor tax common in containerized hosting.
Run this on your target server to verify it can handle the write pressure:
fio --name=db_write_test \
--ioengine=libaio --rw=randwrite --bs=4k \
--direct=1 --size=4G --numjobs=4 --runtime=60 \
--group_reporting
If your IOPS (Input/Output Operations Per Second) are below 10,000, your database restoration will take hours, not minutes. NVMe storage is non-negotiable for databases in 2025.
Strategy: The Replication Switchover
For any business-critical application, the only acceptable migration path is Master-Slave replication followed by a promotion.
1. PostgreSQL Configuration (The Origin)
First, ensure your source database allows replication connections. Edit your `pg_hba.conf` to allow the IP of your new CoolVDS server. Security is paramount here—use VPN tunneling or strict firewall rules.
# /etc/postgresql/17/main/pg_hba.conf
# Allow replication from the new CoolVDS server IP
host replication replicator 185.xxx.xxx.xxx/32 scram-sha-256
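The `pg_basebackup` step further down authenticates as a dedicated `replicator` role. If that role doesn't exist yet on the source, a one-liner like the following creates it (the name matches the command used later; the password is a placeholder you must change):
# Run on the source (master): a role that can do nothing but replicate
sudo -u postgres psql -c "CREATE ROLE replicator WITH REPLICATION LOGIN PASSWORD 'change-me';"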
Then, verify your `postgresql.conf` allows enough WAL (Write-Ahead Log) senders so the new server can catch up without disconnecting.
# /etc/postgresql/17/main/postgresql.conf
wal_level = replica
max_wal_senders = 10
max_replication_slots = 10
hot_standby = on
Reload the configuration to pick up the `pg_hba.conf` change. Be aware that `wal_level`, `max_wal_senders`, and `max_replication_slots` are not reloadable; if you had to change any of them, a full restart during a quiet window is unavoidable:
sudo systemctl reload postgresql
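Not sure whether the running server has actually picked up the new values? One quick way to check is the `pg_settings` view, which flags anything that still needs a restart:
sudo -u postgres psql -c "SELECT name, setting, pending_restart FROM pg_settings WHERE name IN ('wal_level', 'max_wal_senders', 'max_replication_slots');"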
2. The Base Backup
On your new CoolVDS server, stop the postgres service and clear the data directory. We will pull the base data directly from the running master.
# Run this on the NEW server (Destination)
# Stop service
sudo systemctl stop postgresql
# Backup existing config if needed, then clear data
sudo -u postgres rm -rf /var/lib/postgresql/17/main/*
# Pull data from Master
sudo -u postgres pg_basebackup -h <MASTER_IP> -D /var/lib/postgresql/17/main/ -U replicator -P -v -R -X stream
The `-R` flag is critical: it automatically writes the `standby.signal` file and the connection settings the new server needs to come up as a read-only replica.
3. Catch-Up and Cutover
Start the PostgreSQL service on the new server. It will connect to the master and fetch any data changes that occurred during the base backup.
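On a standard Debian/Ubuntu layout (the same paths used above), that is simply:
sudo systemctl start postgresql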
Monitor the lag from the old master (`pg_stat_replication` is only populated on the primary):
SELECT pid, usename, client_addr, state, sync_state, write_lag, flush_lag, replay_lag
FROM pg_stat_replication;
When `replay_lag` is consistently near 0, you are ready. The switchover process is simple but stressful:
- Stop the Application: Prevent new writes to the old master.
- Verify Sync: Ensure the new server has processed the final WAL segment (see the LSN check below).
- Promote the New Server: Run the promote command on the new server:
sudo -u postgres pg_ctl promote -D /var/lib/postgresql/17/main/
- Switch DNS/Connection Strings: Point your app to the new CoolVDS IP.
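Before the promote step, one way to confirm nothing is left in flight (with the application already stopped) is to compare LSNs on both sides; when the two values match and stop moving, the replica has everything:
# On the old master: current write position
sudo -u postgres psql -c "SELECT pg_current_wal_lsn();"
# On the new server: last position replayed by the standby
sudo -u postgres psql -c "SELECT pg_last_wal_replay_lsn();"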
Total downtime: usually under 30 seconds.
MySQL/MariaDB Specifics: GTID is Your Friend
If you are running MySQL 8.0 or 8.4 LTS, rely on GTIDs (Global Transaction Identifiers). Old-school binary log positions (filename + offset) are prone to human error.
Add this to your `my.cnf` on both servers if not already present (requires restart):
[mysqld]
gtid_mode=ON
enforce_gtid_consistency=ON
log_bin=binlog
server_id=1 # (Change to 2 on destination)
When dumping the data for the initial sync, use `mysqldump` with `--single-transaction` and `--set-gtid-purged=ON`. This ensures the new server knows exactly where to pick up replication.
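A minimal sketch of that initial sync, assuming a replication user named `repl` already exists on the source and the destination is empty (host, user, and password below are placeholders). Current 8.0/8.4 releases use `CHANGE REPLICATION SOURCE TO`; older 8.0 point releases still use the legacy `CHANGE MASTER TO` syntax.
# On the source: consistent dump that embeds the executed GTID set
mysqldump --single-transaction --set-gtid-purged=ON --all-databases > full_dump.sql
# On the destination: load the dump, then attach to the source with GTID auto-positioning
mysql < full_dump.sql
mysql -e "CHANGE REPLICATION SOURCE TO SOURCE_HOST='<MASTER_IP>', SOURCE_USER='repl', SOURCE_PASSWORD='change-me', SOURCE_AUTO_POSITION=1; START REPLICA;"
# Watch Seconds_Behind_Source fall to 0 before planning the cutover
mysql -e "SHOW REPLICA STATUS\G"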
The Compliance Angle: GDPR and Schrems II
In the post-Schrems II world, data sovereignty is not just a buzzword; it's a legal requirement for many Norwegian businesses. Migrating data to US-owned cloud giants, even if the region is "Europe," introduces legal ambiguity regarding the CLOUD Act.
By migrating to a local provider like CoolVDS, you ensure the data resides physically in Oslo, governed exclusively by Norwegian and EEA law. During migration, ensure all transfer tunnels are encrypted. We recommend using WireGuard for the replication link—it has less overhead than OpenVPN and maintains high throughput for large datasets.
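A point-to-point WireGuard tunnel for the replication traffic only takes a few lines. The sketch below is illustrative: the 10.10.0.0/24 overlay, port 51820, and the key placeholders are all values you would substitute yourself.
# /etc/wireguard/wg0.conf on the NEW server (mirror it on the master with keys and addresses swapped)
[Interface]
PrivateKey = <destination private key>
Address = 10.10.0.2/24
ListenPort = 51820

[Peer]
PublicKey = <master public key>
Endpoint = <MASTER_IP>:51820
AllowedIPs = 10.10.0.1/32
Bring it up with `sudo wg-quick up wg0`, then point `pg_hba.conf` and `pg_basebackup` at 10.10.0.1 instead of a public address.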
Why Infrastructure Matters
You can have the perfect configuration, but if the underlying host steals CPU cycles (noisy neighbors) or throttles disk throughput, the replication lag will never reach zero. This creates a "Zeno's Paradox" of migration where the new server never quite catches up to the live master.
At CoolVDS, we use KVM virtualization with dedicated resource allocation. When you provision 4 vCPUs, they are yours. This stability is required for predictable database performance, especially during the high-load catch-up phase of a migration.
Final Checklist Before You Switch
| Check | Command/Action | Why? |
|---|---|---|
| Time Sync | `chronyc sources` | Timestamps must match for logs and consistency. |
| Firewall | `ufw status` | Ensure port 5432/3306 is open only to app servers. |
| Memory | Check `buffer_pool` / `shared_buffers` | Configs don't copy automatically. Tune for the new RAM size. |
| Backups | Run a fresh dump immediately after promotion (example below). | Safety net for the new timeline. |
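For that post-promotion dump, something as simple as the line below (PostgreSQL shown; use mysqldump for MySQL) gives you a restore point on the new timeline:
# Run on the new master once it is accepting writes
sudo -u postgres pg_dumpall | gzip > post_promotion_$(date +%F).sql.gz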
Database migration is about control. Don't rely on magic "import tools" provided by control panels. Understand the replication stream, secure the transport layer, and host on hardware that respects your I/O needs.
Ready to secure your data sovereignty? Deploy a high-performance NVMe instance on CoolVDS and experience the difference raw metal performance makes.