Zero-Downtime Database Migration: The Norwegian DevOps Survival Guide
There is nothing quite like the specific dread of a database migration at 02:00 CET. The traffic is low, the coffee is black, and you are staring at a progress bar that hasn't moved in ten minutes. If you are reading this, you probably know that the phrase "maintenance window" is a lie. In the modern SaaS environment, downtime is not an option; it is a breach of contract.
I have spent the last decade moving data across the Nordic infrastructure map, from legacy bare metal in basement server rooms to modern cloud clusters. The challenge remains the same: moving terabytes of data without data corruption, without packet loss, and without the users noticing a thing.
This guide ignores the theoretical fluff. We are looking at a battle-tested strategy to migrate a high-load MySQL/MariaDB workload from a legacy provider to a high-performance CoolVDS KVM instance in Oslo. We prioritize data integrity, GDPR compliance (Schrems II requires we know exactly where those bits live), and raw I/O performance.
The Architecture of the Move
Forget the "dump and restore" method for anything larger than 5GB. It takes too long. If your dataset is 500GB and your disk I/O is saturated, your downtime will stretch into hours. The only professional path is Replication-Based Migration.
The concept is simple but strict:
- Establish the Target: A CoolVDS instance running a fresh, optimized database engine.
- Initial Sync: Transfer a consistent snapshot without stopping the Master.
- Replication Catch-up: The Target applies the binary logs from the Master to sync up.
- The Cutover: A sub-second switch where the application points to the Target.
Why Infrastructure Matters Here
I cannot stress this enough: Virtualization overhead kills database imports. When you are replaying millions of transactions from binary logs, you need high IOPS (Input/Output Operations Per Second). Container-based VPS solutions often suffer from "noisy neighbors"—if another user on the node is compiling a kernel, your import crawls.
We use CoolVDS because KVM provides strict resource isolation. More importantly, the underlying storage is NVMe. When writing heavily during the catch-up phase, the difference between standard SSD and NVMe is the difference between a 1-hour lag and a 5-minute lag.
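Don't take IOPS claims on faith, yours or anyone else's. Before committing to the move, you can benchmark the target disk yourself with fio. This is a minimal sketch; the test directory, job size, and runtime are arbitrary assumptions you should adjust to your own environment:
# Hypothetical 4K random-write test; point --directory at the filesystem that will hold the datadir
fio --name=db-iops-test --directory=/var/lib/mysql-test \
    --rw=randwrite --bs=4k --direct=1 --ioengine=libaio \
    --size=2G --numjobs=4 --runtime=60 --time_based --group_reporting
Run the same job on the source and the target; the comparison tells you more than any spec sheet.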
Step 1: Preparing the Kernel (Target)
Before installing the database software on your fresh CoolVDS instance, tune the Linux kernel for network throughput. We are moving data across the wire; default TCP settings are usually too conservative.
Edit /etc/sysctl.conf:
# Increase TCP window sizes for high-bandwidth transfers
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
# Enable BBR congestion control (available in kernels 4.9+)
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr
Apply the changes with sysctl -p. BBR is essential if you are migrating over the public internet, because it keeps throughput high even when the path drops packets.
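It is worth confirming that the kernel actually picked up BBR before you start pushing terabytes; on older kernels the module may simply not be available. A quick check:
sysctl net.ipv4.tcp_congestion_control   # should print: net.ipv4.tcp_congestion_control = bbr
lsmod | grep tcp_bbr                     # lists the module if BBR was built as a loadable module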
Step 2: The Master Configuration
On your source server (the old one), ensure replication is possible. If you are running MySQL 8.0 or MariaDB 10.6+, check your my.cnf: you need a unique server ID and binary logging enabled. (MySQL 8.0 enables the binary log by default and deprecates expire_logs_days in favour of binlog_expire_logs_seconds; MariaDB still uses the setting below.)
[mysqld]
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
binlog_format = ROW
expire_logs_days = 7
Pro Tip: Ensure `binlog_format` is set to `ROW`. Statement-based replication can break if you use non-deterministic functions (like `UUID()` or `NOW()`) during the migration window.
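The Target also needs credentials to pull the binary logs from the Master. A minimal sketch for creating a dedicated replication account on the old server; the user name and password are placeholders that match the CHANGE MASTER statement used later, and the host should be your actual Target IP:
# Run on the old Master; replace the host and password with your own values
mysql -u root -p -e "CREATE USER 'replication_user'@'target-coolvds-ip' IDENTIFIED BY 'secure_password';"
mysql -u root -p -e "GRANT REPLICATION SLAVE ON *.* TO 'replication_user'@'target-coolvds-ip';"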
Step 3: The Snapshot (No Downtime)
We use mysqldump with the --single-transaction flag, which gives a consistent snapshot without locking the tables (for InnoDB). The --master-data=2 flag records the exact binary log coordinates we need to start replication later; on MySQL 8.0.26+ the same option is also available as --source-data.
mysqldump -u root -p \
--single-transaction \
--quick \
--master-data=2 \
--routines \
--triggers \
--events \
--all-databases \
| gzip > /tmp/full_dump_2025.sql.gz
While this dump is running, your application is still live. Users are still writing data. That is fine. The dump contains the state at the start of the command, and the master log coordinates are written at the top of the file.
Step 4: Transfer and Import
Move the data to your CoolVDS instance. Use rsync so you can resume if the connection drops.
rsync -avz --progress /tmp/full_dump_2025.sql.gz user@target-coolvds-ip:/tmp/
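A corrupted dump is worse than a slow one. Before importing, verify that the archive arrived intact; a quick check, assuming the same paths as above:
# On the source: record the checksum
sha256sum /tmp/full_dump_2025.sql.gz
# On the target: recompute, compare, and let gzip verify its own integrity
sha256sum /tmp/full_dump_2025.sql.gz
gzip -t /tmp/full_dump_2025.sql.gz && echo "archive OK"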
Crucial Optimization: Before importing on the Target, temporarily relax the durability settings to speed up the write-heavy import. This is safe because the Target is not yet the production master.
Edit the Target's my.cnf:
[mysqld]
innodb_flush_log_at_trx_commit = 2
innodb_doublewrite = 0
sync_binlog = 0
(Remember to revert these to 1 after the migration is complete for data safety!)
Import the data:
zcat /tmp/full_dump_2025.sql.gz | mysql -u root -p
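A large import can run for hours with no feedback. If pv is installed on the Target, you can pipe the file through it and get a progress bar and throughput figures instead of a silent prompt:
# Same import, with progress reporting (pv reads the compressed file and reports how far along it is)
pv /tmp/full_dump_2025.sql.gz | gunzip | mysql -u root -p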
Step 5: Catching Up (Replication)
Once the import is done, inspect the dump file to find the log coordinates:
zcat /tmp/full_dump_2025.sql.gz | head -n 50 | grep "CHANGE MASTER"
You will see something like:
CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000452', MASTER_LOG_POS=894321;
Log into your CoolVDS MySQL console and configure it as a slave:
CHANGE MASTER TO
MASTER_HOST='old_server_ip',
MASTER_USER='replication_user',
MASTER_PASSWORD='secure_password',
MASTER_LOG_FILE='mysql-bin.000452',
MASTER_LOG_POS=894321;
START SLAVE;
Check the status with SHOW SLAVE STATUS\G. Watch Seconds_Behind_Master. It should start high and rapidly decrease to 0. Thanks to the NVMe storage on CoolVDS, the I/O bottleneck that usually plagues this step is virtually non-existent.
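Rather than re-running the status query by hand, you can poll the three fields that matter from the shell. This sketch assumes the mysql client can authenticate without prompting (for example via ~/.my.cnf):
# Poll replication health every 5 seconds
watch -n 5 "mysql -e 'SHOW SLAVE STATUS\G' | grep -E 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master'"
Both _Running fields should read Yes; if either says No, check Last_IO_Error or Last_SQL_Error before you even think about the cutover.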
Step 6: The Cutover
When Seconds_Behind_Master is 0, your CoolVDS instance is a real-time mirror of your production data. Now you plan the switch.
- Stop Writes: Put your application in maintenance mode or set the Old Master to read-only (`SET GLOBAL read_only = ON;`).
- Verify Sync: Ensure the slave has processed the final transaction (a command-level sketch follows this list).
- Promote Target: Stop the slave on CoolVDS (`STOP SLAVE; RESET SLAVE ALL;`).
- Switch DNS/App Config: Point your application connection strings to the new CoolVDS IP.
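For steps 1 through 3, here is a minimal sketch, assuming the mysql client can authenticate without prompting on both machines:
# --- On the old Master: block new writes and note the final coordinates ---
mysql -e "SET GLOBAL read_only = ON;"
#   (on MySQL, SET GLOBAL super_read_only = ON also blocks accounts with SUPER)
mysql -e "SHOW MASTER STATUS\G"

# --- On the CoolVDS Target: confirm the slave has executed everything up to that point ---
#   Relay_Master_Log_File / Exec_Master_Log_Pos must match the File / Position printed above
mysql -e "SHOW SLAVE STATUS\G" | grep -E "Relay_Master_Log_File|Exec_Master_Log_Pos"

# --- Promote the Target: stop replication and clear the slave configuration ---
mysql -e "STOP SLAVE; RESET SLAVE ALL;"
Only after the positions match do you flip the connection strings. If anything looks off, you can simply set read_only back to OFF on the old Master and try again another night.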
Norwegian Compliance & Local Reality
Why bother hosting in Oslo? Aside from the obvious latency benefits (pings from Oslo to Oslo are often sub-2ms via NIX), there is the legal aspect. Datatilsynet, the Norwegian Data Protection Authority, is rigorous.
By keeping your database on Norwegian soil, you simplify GDPR compliance significantly compared to using US-owned hyperscalers where data sovereignty is legally murky. CoolVDS infrastructure is physically located in Norway, ensuring your customer data remains under European jurisdiction.
Summary
Database migration is risk management. You trade the time spent on preparation for the safety of your data. By using replication, you decouple the data transfer time from the downtime window.
| Migration Method | Downtime | Risk Level | Ideal For |
|---|---|---|---|
| Dump & Restore | Hours | High (Timeout risks) | Small Dev DBs (< 2GB) |
| Replication (Recommended) | Seconds | Low | Production Workloads |
Don't let slow hardware compromise your migration strategy. If you need a staging environment to test this workflow, spin up a high-performance instance today.
Ready to stabilize your infrastructure? Deploy a CoolVDS NVMe instance in Oslo now.