Zero-Downtime Database Migration: A Survival Guide for Norwegian Sysadmins

There are two types of sysadmins: those who have accidentally corrupted a database during a migration, and those who are lying. I’ve been the former. In 2018, I watched a 500GB e-commerce database enter a split-brain state during a poorly planned cutover. The downtime lasted four hours. The revenue loss was nauseating. I swore never again.

Moving data is easy. Moving data while thousands of users are actively writing to it—without them noticing—is art. Whether you are fleeing a restrictive legacy host or simply upgrading to better hardware, the physics remain the same: latency, I/O throughput, and consistency.

In this guide, we aren't discussing SaaS drag-and-drop tools. We are looking at raw, terminal-level strategies to migrate MySQL/MariaDB and PostgreSQL workloads within the Norwegian ecosystem, ensuring compliance with Datatilsynet and maximum uptime.

The Pre-Flight Check: Infrastructure Matters

Before you even type ssh, look at your destination. A migration is the perfect time to fix infrastructure bottlenecks. If you are moving to a standard HDD VPS, you are bottlenecking your restore speed before you begin.

For database workloads, IOPS are the metric that truly counts. We benchmarked a standard pg_restore on a SATA SSD versus a CoolVDS NVMe instance. The difference wasn't percentage points; it was an order of magnitude. The NVMe drive saturated the CPU before the disk queue filled up. That is exactly what you want: the disk should never be the bottleneck.

Pro Tip: If your target server is in Oslo (like CoolVDS) and your source is in Frankfurt, test your latency.
ping -c 100 oslo-gw.coolvds.com
If you see packet loss > 0.1%, do not attempt live replication over the public internet without a VPN or a dedicated tunnel. The TCP retransmits will kill your replication lag.
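If a full VPN is overkill, an SSH port-forward is the quickest tunnel to set up. A minimal sketch, assuming you have SSH access to the source server (the hostname and ports below are placeholders for your setup):

```shell
# Forward a local port on the destination to mysqld on the source, so
# replication traffic rides an encrypted channel instead of raw TCP.
SRC="user@old-server.example.com"   # placeholder: SSH login on the source
TUNNEL="3307:127.0.0.1:3306"       # destination:3307 -> source's mysqld:3306
echo "ssh -f -N -L ${TUNNEL} ${SRC}"   # run the printed command on the destination
# Replication then connects to MASTER_HOST='127.0.0.1', MASTER_PORT=3307.
```

TCP retransmits inside the tunnel still hurt, but at least the stream is encrypted and survives NAT and firewall quirks between data centers.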

Strategy 1: The "Dump and Pump" (Small Datasets)

If your database is under 5GB and you can afford 10 minutes of maintenance mode, don't overengineer it. Use standard tools, but use them correctly. Most people forget the --single-transaction flag, so mysqldump falls back to locking every table for the duration of the dump. With the flag, InnoDB tables are dumped from a consistent snapshot without blocking writes; MyISAM tables (if you are unfortunate enough to still have them) get locked either way.

MySQL / MariaDB

Do not just run a naked mysqldump. Compress the stream to save bandwidth, especially if transferring between data centers.

# On the Source Server
mysqldump -u root -p \
  --single-transaction \
  --quick \
  --routines \
  --triggers \
  --databases production_db | gzip > dump_$(date +%F).sql.gz

# Transfer via SCP
scp -P 22 dump_2020-05-11.sql.gz user@185.xxx.xxx.xxx:/tmp/
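Before importing, and certainly before deleting anything on the source, verify the archive actually survived the trip. A quick sketch, demonstrated on a throwaway file so it runs anywhere; substitute your real dump path:

```shell
# gzip -t checks archive integrity; sha256sum lets you compare against
# the hash computed on the source. Scratch file stands in for the dump.
printf 'SELECT 1;\n' | gzip > /tmp/demo_dump.sql.gz
gzip -t /tmp/demo_dump.sql.gz && echo "archive OK"
sha256sum /tmp/demo_dump.sql.gz   # run on both ends; hashes must match
```

Thirty seconds of checksumming is cheaper than discovering a truncated dump halfway through an import.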

On the destination (CoolVDS), decompress the dump (gunzip /tmp/dump_2020-05-11.sql.gz) and import it. To speed this up, temporarily disable autocommit, the uniqueness and foreign key checks, and binary logging for your session. These are session-scoped settings, so nothing leaks to other connections; just restore them before you disconnect.

-- Inside MySQL shell on Destination
SET autocommit=0;
SET unique_checks=0;
SET foreign_key_checks=0;
SET sql_log_bin=0;  -- skip binlogging during the import
SOURCE /tmp/dump_2020-05-11.sql;
COMMIT;
SET unique_checks=1;
SET foreign_key_checks=1;
SET sql_log_bin=1;

Strategy 2: Master-Slave Replication (Zero Downtime)

For anything critical or larger than 10GB, you cannot afford the downtime of a dump/restore. You need replication. This allows you to sync the data to the CoolVDS instance while the old server is still live. Once the "Seconds_Behind_Master" hits zero, you switch your application config.

Step 1: Configure the Source

Edit /etc/mysql/my.cnf on the current live server. You need a unique server ID and binary logging enabled.

[mysqld]
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
binlog_do_db = production_db
bind-address = 0.0.0.0 # Or specific IP

Restart MySQL. Then create a replication user. Do not use root.

CREATE USER 'replicator'@'%' IDENTIFIED BY 'StrongPassword!2020';
GRANT REPLICATION SLAVE ON *.* TO 'replicator'@'%';
FLUSH PRIVILEGES;

Step 2: Get the Coordinates

You need a snapshot of the current state. If you use mysqldump with --master-data=2, it will include the binary log coordinates in the dump file automatically.

mysqldump -u root -p --master-data=2 --single-transaction --databases production_db > master_dump.sql

Open the head of that file:

head -n 30 master_dump.sql | grep "CHANGE MASTER"

You will see something like: CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000004', MASTER_LOG_POS=543;. Keep this safe.

Step 3: Configure the Destination (CoolVDS)

Import the dump to your new, fast NVMe instance. Then, configure it as a slave.
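Before the destination will behave as a slave, its my.cnf needs its own identity; at minimum a server-id distinct from the source. The read_only flag is an assumption worth adopting: it blocks stray application writes until you deliberately promote the server.

```ini
[mysqld]
server-id = 2            # must differ from the source's server-id = 1
relay_log = /var/log/mysql/mysql-relay-bin.log
read_only = ON           # optional: prevent accidental writes before cutover
```

Restart MySQL after editing, just as you did on the source.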

CHANGE MASTER TO
  MASTER_HOST='10.10.1.5', -- IP of old server
  MASTER_USER='replicator',
  MASTER_PASSWORD='StrongPassword!2020',
  MASTER_LOG_FILE='mysql-bin.000004',
  MASTER_LOG_POS=543;
  
START SLAVE;

Now, watch the magic happen:

SHOW SLAVE STATUS\G

Look for Slave_IO_Running: Yes and Slave_SQL_Running: Yes. If your new host has superior I/O (which it should), it will catch up quickly.
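Rather than eyeballing the full SHOW SLAVE STATUS wall of text, extract the lag figure directly. A sketch with sample output inlined so the parsing runs in isolation; in production you would pipe in the real thing with mysql -e "SHOW SLAVE STATUS\G":

```shell
# Pull Seconds_Behind_Master out of the \G-formatted status output.
STATUS='             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
        Seconds_Behind_Master: 0'
LAG=$(printf '%s\n' "$STATUS" | awk -F': ' '/Seconds_Behind_Master/ {print $2}')
echo "replication lag: ${LAG}s"
```

Wrap that in a watch or a sleep loop and you have a live lag monitor for the catch-up phase.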

The Norwegian Context: GDPR and Data Sovereignty

We are operating in May 2020. The legal landscape regarding data transfer outside the EEA is... complicated. While the Privacy Shield is currently in effect, scrutiny is high. Keeping your database on Norwegian soil isn't just about latency to NIX; it's about risk mitigation.

When you migrate to CoolVDS, you are ensuring the physical storage of that data remains under Norwegian jurisdiction. This simplifies your Article 30 records for GDPR. If you are migrating from a US-based cloud provider, ensure you scrub the old drives. A simple rm -rf is not enough. Use shred if you have shell access, or trigger the provider's secure wipe functionality.
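shred, from coreutils, overwrites file contents before unlinking. A sketch on a scratch file; on the real source you would aim it at the database's data files, and note that on CoW filesystems or wear-leveled SSDs the provider's secure wipe is the more trustworthy tool:

```shell
# Overwrite with 3 random passes (-n 3), add a final zero pass (-z),
# then unlink the file (-u). Scratch file stands in for a real .ibd.
printf 'customer data' > /tmp/old_table.ibd
shred -vzu -n 3 /tmp/old_table.ibd
ls /tmp/old_table.ibd 2>/dev/null || echo "gone"
```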

Post-Migration Tuning

A fresh installation of MySQL or PostgreSQL usually comes with default settings that assume it's running on a calculator. You have moved to a VPS with dedicated resources; use them.

PostgreSQL Tuning

Edit postgresql.conf. The default shared_buffers is often pitifully low (128MB). Set this to 25% of your total RAM.

# Example for an 8GB RAM instance
shared_buffers = 2GB
effective_cache_size = 6GB
work_mem = 64MB
maintenance_work_mem = 512MB
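Those values follow simple ratios: 25% of RAM for shared_buffers and roughly 75% for effective_cache_size. A quick sketch to derive them for any instance size; in practice, read total RAM from free -m instead of hard-coding it:

```shell
TOTAL_MB=8192                             # assumption: 8GB instance; use
                                          # $(free -m | awk '/^Mem:/ {print $2}')
SHARED_BUFFERS_MB=$((TOTAL_MB / 4))       # 25% of RAM
EFFECTIVE_CACHE_MB=$((TOTAL_MB * 3 / 4))  # ~75% of RAM
echo "shared_buffers = ${SHARED_BUFFERS_MB}MB"
echo "effective_cache_size = ${EFFECTIVE_CACHE_MB}MB"
```

For the 8GB example this lands on 2048MB and 6144MB, matching the 2GB/6GB figures above.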

Also, switch your disk scheduler. Inside your KVM guest, check the scheduler:

cat /sys/block/vda/queue/scheduler
# [mq-deadline] kyber bfq none

On virtualized NVMe storage, none or mq-deadline often yields the best throughput because the hypervisor handles the physical scheduling.
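The active scheduler is the bracketed entry in that file. A sketch of extracting it (sample line inlined so it runs anywhere), plus the root command to switch; note the change does not survive a reboot without a udev rule or kernel parameter:

```shell
# Pull the bracketed (active) scheduler out of the sysfs line.
LINE='[mq-deadline] kyber bfq none'   # sample of /sys/block/vda/queue/scheduler
ACTIVE=$(printf '%s\n' "$LINE" | sed -n 's/.*\[\([^]]*\)\].*/\1/p')
echo "active scheduler: $ACTIVE"
# To switch (as root):  echo none > /sys/block/vda/queue/scheduler
```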

Final Cutover

When you are ready to switch:

  1. Lower the TTL on your DNS records to 300 seconds a day before.
  2. Put your application in "Read Only" mode (or show a maintenance page).
  3. Stop the application server.
  4. Verify the slave has processed all relay logs (Seconds_Behind_Master = 0).
  5. Promote the slave: STOP SLAVE; RESET SLAVE ALL; (this clears the replication configuration so the new master stops following the old server; RESET MASTER would instead wipe its own binary logs).
  6. Update your application config to point to the CoolVDS IP.
  7. Start the application.
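The sequence above is worth rehearsing as a script before the real night. A dry-run sketch: the service name, config path, and hostname substitution are placeholders for your stack, and each step is printed rather than executed until you remove the wrapper:

```shell
#!/bin/sh
# Dry run: every step is echoed, not run. After rehearsing against a
# staging copy, replace run() with direct execution.
run() { echo "WOULD RUN: $*"; }

run systemctl stop app.service                      # step 3: stop the app
run mysql -e 'SHOW SLAVE STATUS\G'                  # step 4: confirm lag = 0
run mysql -e 'STOP SLAVE; RESET SLAVE ALL;'         # step 5: promote the slave
run sed -i 's/old-db-host/new-db-host/' /etc/app/config.ini   # step 6
run systemctl start app.service                     # step 7
```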

If you've done your homework, the downtime is the time it takes to restart your app server—usually seconds. Don't let slow I/O or bad routing kill your SEO or your user experience. Migration is a chore, but running on high-performance infrastructure makes the rest of your year a lot easier.

Ready to upgrade? Spin up a high-performance KVM instance in Oslo today.