Database Migration Without the Heart Attack: A Battle-Hardened Guide for Nordic Systems
There are two types of sysadmins: those who have lost data during a migration, and those who are lying. Moving a live database is controlled chaos. You are essentially performing open-heart surgery on a runner who refuses to stop sprinting. I've spent the last decade migrating monolithic stacks across Europe, and if there is one thing I've learned, it's that hope is not a strategy.
In 2022, the landscape has shifted. We aren't just battling latency anymore; we are battling legislation. With the fallout from Schrems II still settling, I'm seeing a massive influx of Norwegian CTOs scrambling to pull data out of US-owned cloud regions (yes, even the ones in Frankfurt) and onto sovereign Norwegian soil. They are terrified of Datatilsynet, and they should be.
If you are moving a critical workload to a VPS in Norway to cut latency to Oslo or satisfy GDPR requirements, you cannot afford downtime. Here is how we do it without melting the servers.
The Latency Trap and The IOPS Bottleneck
Before we touch a single config file, understand the physics. If you are migrating from a hyperscaler to a local provider like CoolVDS, your biggest enemy during the import phase is Disk I/O. I once watched a 500GB MySQL import take 14 hours because the target machine was on standard SSDs rather than NVMe. The write-lock killed the application.
Pro Tip: Always verify the disk scheduler on your target VPS before starting an import. For NVMe drives, you want `none` or `kyber`. If you see `cfq`, you're leaving performance on the table.
# Check your scheduler
cat /sys/block/vda/queue/scheduler
# Example output; the bracketed entry is the active one
[none] mq-deadline kyber
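If the wrong scheduler is active, you can switch it at runtime. A minimal sketch, assuming the virtual disk shows up as vda (adjust for your device naming, and persist the change via a udev rule if you want it to survive reboots):
# Switch to the 'none' scheduler on the fly (vda is an assumption, check lsblk)
echo none | sudo tee /sys/block/vda/queue/scheduler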
Strategy: The Replication Bridge (Zero-Downtime)
The "Dump and Restore" method (mysqldump > pipe > mysql) is dead for anything over 5GB. It requires too much downtime. The only professional way to migrate in 2022 is establishing a temporary Master-Slave replication setup.
1. Preparing the Source (The Master)
Let's assume you are running MariaDB 10.5 or MySQL 8.0. You need binary logging enabled. If you are running a legacy stack without this, you will need a restart. Check your my.cnf (or mysqld.cnf):
[mysqld]
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
binlog_format = ROW
expire_logs_days = 7  # deprecated on MySQL 8.0, where binlog_expire_logs_seconds = 604800 is the modern equivalent
max_binlog_size = 100M
# Essential for data integrity on crash
innodb_flush_log_at_trx_commit = 1
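After editing the config, restart the service and confirm the binary log is actually being written before you go any further. Something like this (the service may be called mysql, mysqld, or mariadb depending on your distro):
# Restart to pick up the binlog settings, then sanity-check
sudo systemctl restart mysql
mysql -u root -p -e "SHOW MASTER STATUS;"
# You should see a File (mysql-bin.000001) and a Position; if the output is empty, binary logging is off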
Create a replication user. Do not use root. And since this data is traversing the public internet to reach your CoolVDS instance in Norway, you must enforce SSL. I don't care if you think your IP whitelist is secure; plaintext database traffic is negligence.
-- REQUIRE SSL belongs on the account; MySQL 8.0 no longer accepts it in GRANT statements
CREATE USER 'repl_user'@'%' IDENTIFIED BY 'ComplexPassword33!' REQUIRE SSL;
GRANT REPLICATION SLAVE ON *.* TO 'repl_user'@'%';
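That REQUIRE SSL clause is only useful if the source server actually has TLS configured, so verify it before you burn an evening debugging connection errors. A quick check on the source:
# Confirm TLS is available on the source instance
mysql -u root -p -e "SHOW GLOBAL VARIABLES LIKE '%ssl%';"
# have_ssl should be YES, and ssl_cert / ssl_key should point at real certificate files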
2. The Initial Sync
We use Percona's XtraBackup for this because, unlike `mysqldump`, it doesn't lock the database while it copies. (On MariaDB 10.5 the tool is its fork, `mariabackup`; the commands are near-identical.) If you are on Postgres, use `pg_basebackup`.
On the source server:
xtrabackup --backup --target-dir=/data/backups/ --user=root --password=YOURPASS
Transfer this to your CoolVDS instance using rsync. The -a flag preserves permissions and ownership, and -z compresses in transit, which is vital if you are paying for egress bandwidth on your current host.
rsync -avz -e ssh /data/backups/ user@target-coolvds-ip:/data/backups/
3. Configuring the Target (The Slave)
On your new NVMe-powered instance, restore the backup and configure it to follow the master. This is where the magic happens. The target server catches up with all the writes that happened while you were transferring the files.
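Before the target can serve those files, the backup has to be prepared and moved into the data directory. Roughly, it looks like this, assuming the default /var/lib/mysql datadir (adjust paths and service names to your setup):
# Apply the redo log so the backup is transactionally consistent
xtrabackup --prepare --target-dir=/data/backups/
# Stop MySQL, move the files into place (the datadir must be empty), fix ownership, start it back up
sudo systemctl stop mysql
sudo xtrabackup --copy-back --target-dir=/data/backups/
sudo chown -R mysql:mysql /var/lib/mysql
sudo systemctl start mysql
# The binlog coordinates for CHANGE MASTER TO were recorded at backup time:
cat /data/backups/xtrabackup_binlog_info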
# Inside the MySQL shell on the Target
CHANGE MASTER TO
MASTER_HOST='source-ip',
MASTER_USER='repl_user',
MASTER_PASSWORD='ComplexPassword33!',
MASTER_LOG_FILE='mysql-bin.000001',  # the file from xtrabackup_binlog_info
MASTER_LOG_POS=12345,                # the position recorded alongside it
MASTER_SSL=1;
START SLAVE;
Monitor the status with SHOW SLAVE STATUS \G. You want to see Seconds_Behind_Master: 0. Once that hits zero, your data is live in Norway.
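I keep a loop running on the target until the lag flatlines. A simple way to do it (assumes client credentials live in ~/.my.cnf):
# Poll replication health every 5 seconds
watch -n 5 "mysql -e 'SHOW SLAVE STATUS\G' | grep -E 'Seconds_Behind_Master|Slave_IO_Running|Slave_SQL_Running'"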
The Postgres Approach: Logical Replication
If you are running PostgreSQL 13 or 14, Logical Replication is superior to streaming replication for migrations because it allows you to migrate between major versions with minimal hassle.
Source (Publisher):
# postgresql.conf (changing wal_level requires a full PostgreSQL restart)
wal_level = logical
# psql
CREATE PUBLICATION my_migration FOR ALL TABLES;
Target (Subscriber on CoolVDS):
CREATE SUBSCRIPTION my_migration_sub
CONNECTION 'host=source-ip port=5432 dbname=mydb user=rep_user password=secret'
PUBLICATION my_migration;
This method replicates individual table changes rather than raw disk blocks, and it's incredibly robust. The one gotcha: logical replication does not copy schema, so the tables must already exist on the target before you create the subscription.
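The cheapest way to pre-create that schema is a schema-only dump piped straight into the target. A minimal sketch, with host names and database name as placeholders:
# Copy the schema (no rows) from source to target; logical replication then fills in the data
pg_dump -h source-ip -U postgres -s mydb | psql -h target-coolvds-ip -U postgres mydb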
The Cutover: The "Read-Only" Moment
This is the only moment the application pauses. You need to stop writes on the old server to ensure data consistency.
- Set Source to Read-Only: run `SET GLOBAL read_only = ON;`. Note that plain read_only does not block accounts with SUPER; on MySQL 8.0, set `super_read_only = ON` as well.
- Wait for Sync: Watch the slave status on the CoolVDS instance until it executes the final transaction.
- Promote Target: Stop the slave (`STOP SLAVE;`) and point your application DNS/Config to the new IP. A scripted version of the sequence is sketched below.
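For reference, my cutover wrapper looks roughly like this. Treat it as a sketch: the host names, the client credentials (assumed to be in ~/.my.cnf), and the zero-lag check are all things you'll adapt to your own setup.
#!/usr/bin/env bash
set -euo pipefail

SOURCE="source-ip"            # old master
TARGET="target-coolvds-ip"    # new CoolVDS instance

# 1. Freeze writes on the source
mysql -h "$SOURCE" -e "SET GLOBAL read_only = ON;"

# 2. Wait until the target has executed every remaining transaction
until [ "$(mysql -h "$TARGET" -e 'SHOW SLAVE STATUS\G' | awk '/Seconds_Behind_Master/ {print $2}')" = "0" ]; do
  sleep 1
done

# 3. Promote the target and detach it from the old master
mysql -h "$TARGET" -e "STOP SLAVE; RESET SLAVE ALL;"

echo "Cutover complete. Repoint the application at ${TARGET}."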
If you scripted this correctly, the downtime is less than 30 seconds. I've done this for e-commerce sites during lunch hour without customers noticing.
Why Infrastructure Matters
I recently audited a setup where a client tried to run a high-transaction Postgres DB on a budget VPS with shared CPU time (Steal Time was hitting 20%). The migration failed three times because the target CPU couldn't process the replication stream fast enough.
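You can spot this before committing to a provider: spin up a trial instance, put some load on it, and watch the st column. For example:
# Sample CPU stats every 5 seconds, 12 times; "st" is the steal column
vmstat 5 12
# Anything consistently above a couple of percent means noisy neighbours are eating your cycles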
This is why we lean on CoolVDS for these workloads. You get dedicated resources. When you are replaying binary logs, you need raw single-core speed and NVMe throughput. Without that, your replication lag will never hit zero, and you'll be stuck in migration purgatory forever.
Comparison: Standard Cloud Storage vs CoolVDS NVMe
| Metric | Standard Cloud Storage | CoolVDS NVMe |
|---|---|---|
| Random Read IOPS | ~500 - 3,000 | ~50,000+ |
| Latency | 2-5ms | 0.1ms |
| Import Time (100GB) | 45 Minutes | 8 Minutes |
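Don't take any vendor's table at face value, including this one. Run fio on the instance you're evaluating and get your own numbers; a minimal 4K random-read test looks something like this (file name and size are arbitrary):
# 30-second 4K random read benchmark, bypassing the page cache
fio --name=randread-test --filename=/tmp/fiotest --size=1G \
    --rw=randread --bs=4k --direct=1 --ioengine=libaio \
    --iodepth=32 --runtime=30 --time_based --group_reporting
rm /tmp/fiotest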
Final Thoughts
Moving data back to Norway isn't just about ticking a compliance box for the Datatilsynet. It's about performance. Being physically closer to the Norwegian Internet Exchange (NIX) means your local users get a snappier experience. Just make sure your destination server can handle the heat.
Don't let slow I/O kill your SEO or your uptime. Spin up a test instance, run your rsync benchmarks, and see the difference real hardware makes.