Database Migration to Norwegian Soil: The Zero-Downtime Protocol
I have seen grown men cry over a corrupted .ibd file. It usually happens around 3:00 AM, fueled by cold coffee and the horrifying realization that mysqldump locked the production tables for forty minutes. When you are moving a 500GB production database from a generic hyperscaler to a sovereign Norwegian VPS, "hope" is not a strategy. It is a liability.
As we navigate Q2 of 2022, the pressure on Norwegian CTOs and Systems Architects is twofold. First, the legal landscape following the Schrems II ruling has made data residency in Norway not just a "nice to have," but a compliance necessity for handling citizen data. Second, there is the simple physics of latency: if your users are in Oslo, Bergen, or Trondheim, serving queries from Frankfurt or Dublin means wasted milliseconds on every single round trip.
This is not a marketing brochure. This is a technical playbook for executing a master-slave replication migration to CoolVDS infrastructure without killing your uptime.
The Hardware Reality: NVMe or Nothing
Before we touch a single config file, let's address the bottleneck that kills 90% of migrations: I/O wait. Importing millions of rows requires massive write throughput. If you attempt this on a standard VPS with network-attached storage (Ceph) or, god forbid, spinning rust, your CPU will spend all its time waiting for the disk.
Pro Tip: Always verify the underlying storage before starting a migration. Run fio. If you aren't seeing random write IOPS above 15k, abort. We build CoolVDS instances on local NVMe arrays precisely because database ingestion tends to saturate anything less.
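A quick way to check, assuming fio is installed (the job parameters below are illustrative, not gospel):
# 4k random writes with direct I/O, bypassing the page cache
fio --name=randwrite-test --ioengine=libaio --rw=randwrite --bs=4k \
    --direct=1 --size=2G --iodepth=32 --runtime=60 --group_reporting
Look at the "write: IOPS=" line in the output; that is the number to compare against the 15k floor.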
Phase 1: The Tunnel (Security First)
Never expose your database port (3306 or 5432) to the public internet during migration. It invites brute force attacks and risks data interception. In 2022, with the rise of sophisticated scanning bots, this is negligence.
Use an SSH tunnel to create a secure, encrypted pipeline between your old server (Source) and your new CoolVDS instance (Target).
# Run this on the Source server
ssh -N -L 3307:127.0.0.1:3306 user@coolvps-target-ip -i ~/.ssh/id_rsa
Now, traffic sent to local port 3307 on the Source is securely forwarded to port 3306 on the Target.
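To confirm the tunnel works before trusting it with a migration, point a client at the forwarded port (credentials are placeholders):
# On the Source server: this should report the Target's hostname and version
mysql -h 127.0.0.1 -P 3307 -u root -p -e "SELECT @@hostname, @@version;"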
Phase 2: MySQL 8.0 / MariaDB Strategy
If you are using mysqldump for anything larger than 10GB, you are doing it wrong. It is slow, mostly single-threaded, and it locks tables. The industry standard in 2022 is Percona XtraBackup (for MySQL) or Mariabackup (for MariaDB).
1. The Physical Backup
This copies the binary files directly, which is significantly faster than SQL dumping.
# On Source (Install Percona XtraBackup 8.0 first)
xtrabackup --backup --target-dir=/data/backups/ --datadir=/var/lib/mysql --user=root --password=YOURPASS
2. Transfer and Prepare
Use rsync to move the artifacts. It's resume-capable, which saves your life if the connection drops.
rsync -avzP /data/backups/ user@coolvps-target-ip:/data/backups/
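Once the transfer finishes, the backup must be prepared (crash-recovered) before MySQL can use it. A sketch, assuming the same paths as above and a stopped, empty datadir on the Target:
# On Target: apply the redo log so the data files are consistent
xtrabackup --prepare --target-dir=/data/backups/
# Copy the prepared files into place and fix ownership
xtrabackup --copy-back --target-dir=/data/backups/ --datadir=/var/lib/mysql
chown -R mysql:mysql /var/lib/mysql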
3. Target Configuration (The Secret Sauce)
Before starting the new database on CoolVDS, you must tune the my.cnf for the restoration process. The default settings in Ubuntu 20.04 are too conservative for modern NVMe hardware.
Edit /etc/mysql/my.cnf on the Target:
[mysqld]
# Allocate 70-80% of RAM to the pool.
# For a 16GB CoolVDS Plan, use 12G.
innodb_buffer_pool_size = 12G
# Critical for NVMe performance
innodb_io_capacity = 2000
innodb_io_capacity_max = 4000
# TEMPORARY: Relax ACID compliance for import speed
# REMEMBER TO SET BACK TO 1 AFTER MIGRATION
innodb_flush_log_at_trx_commit = 2
# Redo log size - heavily impacts write speed in MySQL 8
innodb_log_file_size = 2G
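When the import completes, flip durability back on. You can do it live, then make it permanent in my.cnf:
-- Restore full ACID durability at runtime (no restart needed)
SET GLOBAL innodb_flush_log_at_trx_commit = 1;
Do not skip this step; at a value of 2, an OS crash can cost you up to a second of committed transactions.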
Phase 3: PostgreSQL 14 Strategy
PostgreSQL users face a slightly more complex path due to the strictness of WAL (Write-Ahead Logging). However, PostgreSQL 14 (the current stable release) offers robust logical replication.
Ensure your Source postgresql.conf has these settings required for replication:
wal_level = logical
max_replication_slots = 10
max_wal_senders = 10
You must also whitelist the Target IP in pg_hba.conf on the Source. Since we are tunneling or using a private network, you can be specific (on PG 14, where passwords default to scram-sha-256, you can use that method in place of md5):
host replication replicator 10.0.0.5/32 md5
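With those settings in place (changing wal_level requires a restart), wiring up logical replication looks roughly like this. The role, publication, and database names and the Source IP are placeholders:
-- On Source: create a replication role and publish all tables
CREATE ROLE replicator WITH REPLICATION LOGIN PASSWORD 'changeme';
CREATE PUBLICATION migration_pub FOR ALL TABLES;
-- On Target: load the schema first (logical replication moves rows, not DDL),
-- then subscribe. 10.0.0.1 stands in for the Source's private address.
CREATE SUBSCRIPTION migration_sub
  CONNECTION 'host=10.0.0.1 port=5432 dbname=appdb user=replicator password=changeme'
  PUBLICATION migration_pub;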
Phase 4: The Switchover (Point of No Return)
Once your replication is running, you will have two identical databases. The Target is chasing the Source with perhaps a few milliseconds of lag. Now comes the human element.
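Before pulling the trigger, confirm the Target really is caught up. On MySQL/MariaDB:
-- On Target: both threads must say Yes, and lag should read 0
SHOW SLAVE STATUS\G
-- Check: Slave_IO_Running: Yes, Slave_SQL_Running: Yes, Seconds_Behind_Master: 0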
- Lower TTL: 24 hours before migration, set the DNS TTL for your database endpoint (e.g., db.yourdomain.no) to 60 seconds.
- Maintenance Mode: Put your application in a read-only state if possible, or display a maintenance page. This stops new writes.
- Verify Checksums: Run CHECKSUM TABLE on critical tables to ensure data integrity (see the sketch below).
- Promote Target: Stop the slave thread on the CoolVDS instance.
STOP SLAVE;
RESET SLAVE ALL;
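For the checksum step, a minimal sanity check; table names here are placeholders, and the output must be identical on Source and Target:
-- Run on both servers and diff the results
CHECKSUM TABLE orders, customers, payments;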
Update your application config to point to the new IP. If you are hosting your app on CoolVDS as well (recommended for internal networking speeds), simply update the internal IP.
Why Location Matters: The NIX Advantage
Network topology is often ignored in migration plans. CoolVDS is peered directly at NIX (Norwegian Internet Exchange). When your database resides here, latency to local ISPs (Telenor, Telia) is negligible.
| Route | Avg Latency (ms) | Impact |
|---|---|---|
| Oslo -> AWS Frankfurt | ~35ms | Noticeable delay on complex queries |
| Oslo -> CoolVDS Oslo | ~1-2ms | Instantaneous application response |
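You don't have to take that table on faith; measure it from your own network:
# Round-trip latency from your connection to the new instance
ping -c 10 coolvps-target-ip
# Or hop-by-hop analysis to spot routing detours
mtr --report --report-cycles 20 coolvps-target-ip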
Final Thoughts
Migrations expose the weak points in your infrastructure. If your provider throttles IOPS or has "noisy neighbors" stealing CPU cycles, your database import will crawl. In the Nordic market, where reliability is the currency of trust, you cannot afford opaque resource limits.
Data sovereignty is not just about avoiding Datatilsynet fines; it's about owning your infrastructure stack. When you are ready to stop renting space on someone else's computer and start running bare-metal performance with virtualization flexibility, we are here.
Don't let slow I/O kill your SEO. Deploy a high-performance NVMe instance on CoolVDS today and see the benchmark difference yourself.