Database Migration Patterns: From Nightmare to Zero-Downtime on Linux

I once watched a junior admin try to migrate a 2TB Magento database using a GUI tool over a standard FTP connection. It failed at 98%. The client was down for 14 hours, the CTO was screaming, and the coffee machine was broken. That was a Tuesday. I don't like Tuesdays like that.

Database migration is the single most stressful task in systems administration. It's open-heart surgery on your infrastructure. If you mess up a web server config, you restart Nginx. If you corrupt the `ibdata1` file during a migration, you are looking at a resume-generating event.

With GDPR (General Data Protection Regulation) fully enforceable as of May this year, the stakes in Europe—and specifically here in Norway under Datatilsynet's watchful eye—are higher than ever. You can't just lose customer data; it's now illegal to be incompetent.

This guide isn't about theory. It's about how we move terabytes of data across the wire while keeping the application alive. We will look at the exact commands, the configurations, and the hardware requirements necessary to pull this off.

The Hardware Reality: Why Your Host Matters

Before we touch a single config file, we need to talk about IOPS (Input/Output Operations Per Second). During a migration, you are essentially reading the entire disk on the source and writing it to the destination as fast as physics allows.

If you are migrating to a cheap VPS provider using spinning rust (HDD) or shared SATA SSDs, your migration will crawl. You will hit I/O wait, your load averages will spike, and the sync will take so long that the data will be stale by the time it finishes.

Pro Tip: Always verify the underlying storage technology. At CoolVDS, we standardized on NVMe storage for our KVM instances because the random read/write speeds are roughly 6x faster than standard SATA SSDs. When catching up replication lag, this difference is what allows you to cut over in seconds rather than minutes.
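You don't have to take a provider's word for it. A quick, hedged sanity check of the write path with `dd` (the file name is arbitrary; `fio` gives proper random-I/O numbers, but `dd` is installed everywhere):

```shell
# Quick-and-dirty write throughput check. conv=fdatasync forces the data
# to disk before dd reports, so the number reflects storage, not page cache.
# For real IOPS figures, use fio with randread/randwrite jobs instead.
dd if=/dev/zero of=./iotest.tmp bs=1M count=128 conv=fdatasync 2>&1 | tail -n 1
rm -f ./iotest.tmp
```

On NVMe you should see sustained write speeds in the hundreds of MB/s; low double digits usually means oversold shared storage, and your migration window just doubled.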

Strategy 1: The "Dump and Load" (For < 10GB DBs)

If your database is small and you can afford 30 minutes of downtime at 3 AM on a Sunday, keep it simple. Complexity is the enemy of reliability.
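Before committing to a window, do the arithmetic. A rough sketch (the size and bandwidth figures here are illustrative placeholders; measure your own link between the two hosts):

```shell
# Back-of-envelope estimate of raw transfer time for a dump-and-load.
# DB_GB and MBIT are illustrative; substitute your own measurements.
DB_GB=8            # database size on disk
MBIT=200           # usable throughput between hosts, not the NIC's label
SECS=$(( DB_GB * 8 * 1024 / MBIT ))
echo "Uncompressed transfer: ~$(( SECS / 60 )) min"
```

If the estimate already eats half your maintenance window before you account for the import and smoke tests, skip ahead to the replication strategy below.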

However, simply piping `mysqldump` over SSH is rookie territory if you don't use the right flags. You need consistency.

The Command

mysqldump --single-transaction --quick --routines --triggers --compress --hex-blob -u root -p source_db | ssh user@target-ip "mysql -u root -p target_db"

Let's break down why this specific combination is necessary for 2018 standards:

  • --single-transaction: Essential for InnoDB. It opens a consistent snapshot before dumping, so you get a point-in-time copy without locking the tables.
  • --quick: Streams rows from the server one at a time instead of buffering the entire result set in memory, which matters when tables are larger than your RAM.
  • --routines and --triggers: Include stored procedures, functions, and triggers. Routines are not dumped by default, and discovering that after cutover is a classic post-migration surprise.
  • --hex-blob: Dumps binary columns as hexadecimal, preventing BLOB data from being mangled by character-set conversion in transit.
  • --compress: Reduces network bandwidth usage, crucial if you are moving data between Oslo and a backup DC in Germany.

Strategy 2: The Master-Slave Replication (Zero Downtime)

For anything larger than 10GB, or for high-availability applications where downtime means lost revenue, you must use replication. The goal is to make the new CoolVDS server a "Slave" of your old server, let it catch up, and then promote it to "Master."

Step 1: Configure the Source (Old Server)

Edit your `my.cnf` (usually in `/etc/mysql/`). You need to enable binary logging and set a unique server ID.

[mysqld]
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
binlog_format = ROW
expire_logs_days = 10
max_binlog_size = 100M

After restarting MySQL, create a replication user. Do not use root, and consider restricting the host from '%' to the new server's IP.

CREATE USER 'replicator'@'%' IDENTIFIED BY 'StrongPassword_2018!';
GRANT REPLICATION SLAVE ON *.* TO 'replicator'@'%';
FLUSH PRIVILEGES;

Step 2: Snapshot the Data

We need the current data and the exact coordinates of the binary log file at the moment of the snapshot. We use `Percona XtraBackup` for this on larger systems, but for this example, we will stick to standard mysqldump with the master data flag.

mysqldump --all-databases --single-transaction --master-data=2 > dbdump.sql

The --master-data=2 flag is the key: it writes the CHANGE MASTER TO statement into the dump as a comment, recording the exact binlog file and position at the moment of the snapshot. (With =1, the statement is written uncommented and executes automatically on import.)
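Rather than eyeballing the header, you can pull the coordinates out with grep and sed. The heredoc below stands in for the top of a real dbdump.sql so the pipeline is visible end to end:

```shell
# Stand-in for the first lines of a dump produced with --master-data=2;
# in practice, point grep at your real dbdump.sql instead.
cat > /tmp/dump.head <<'EOF'
-- MySQL dump 10.13  Distrib 5.7.22
-- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=107;
EOF

# Extract binlog file name and position from the commented statement.
grep -m1 'CHANGE MASTER' /tmp/dump.head \
  | sed "s/.*MASTER_LOG_FILE='\([^']*\)'.*MASTER_LOG_POS=\([0-9]*\).*/file=\1 pos=\2/"
rm -f /tmp/dump.head
```

This prints `file=mysql-bin.000001 pos=107`, exactly the two values you will feed into CHANGE MASTER TO on the destination.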

Step 3: Import and Sync on Destination (CoolVDS)

On your new high-performance instance:

# Import the dump
mysql -u root -p < dbdump.sql

# Configure Slave (Use the coordinates found in the head of dbdump.sql)
CHANGE MASTER TO
MASTER_HOST='192.168.1.100',
MASTER_USER='replicator',
MASTER_PASSWORD='StrongPassword_2018!',
MASTER_LOG_FILE='mysql-bin.000001',
MASTER_LOG_POS=107;

START SLAVE;

Check the status. You want to see Seconds_Behind_Master: 0.

SHOW SLAVE STATUS\G
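During the catch-up phase, you will poll this repeatedly. A minimal sketch: the mysql invocation in the comment assumes the credentials above, and the awk extraction is exercised on canned output so you can see exactly what it returns:

```shell
# Extract Seconds_Behind_Master from SHOW SLAVE STATUS\G output.
# In production, feed it from: mysql -u root -p -e 'SHOW SLAVE STATUS\G'
lag_from_status() {
  awk '/Seconds_Behind_Master/ {print $2}'
}

# Demonstration on canned output; a live slave reporting 0 is safe to cut over.
printf '      Seconds_Behind_Master: 0\n' | lag_from_status
```

Wire that into a watch loop and you can see the lag drain in real time instead of mashing the query by hand at 3 AM.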

Performance Tuning: Avoiding the "Thundering Herd"

When you start MySQL 5.7 (or the new MySQL 8.0) on a fresh server, the default configurations are often laughable, intended for systems with 512MB RAM. If you have deployed a CoolVDS instance with 16GB or 32GB of RAM, you need to tell the database to use it.

The most critical setting is the InnoDB Buffer Pool. This should be set to 70-80% of your total available RAM on a dedicated database server.

[mysqld]
# For a 16GB RAM Server
innodb_buffer_pool_size = 12G
innodb_log_file_size = 1G
innodb_flush_log_at_trx_commit = 1 # ACID compliant. Set to 2 for speed if you accept 1 sec data loss.
innodb_flush_method = O_DIRECT
max_connections = 500

If you don't set innodb_flush_method = O_DIRECT on Linux, the OS will cache the data that the database is also caching. This "double buffering" wastes RAM and CPU cycles.
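To turn the 70-80% rule into a concrete number for whatever instance size you actually deployed, read MemTotal from /proc/meminfo (Linux-only; 75% used here as the midpoint):

```shell
# Compute a 75% buffer-pool target from the machine's actual RAM (Linux).
TOTAL_KB=$(awk '/^MemTotal/ {print $2}' /proc/meminfo)
POOL_MB=$(( TOTAL_KB * 75 / 100 / 1024 ))
echo "innodb_buffer_pool_size = ${POOL_MB}M"
```

On a 16GB instance this lands near the 12G used above; round to a clean value and leave headroom for connection buffers and the OS itself.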

The Norwegian Context: Latency and Jurisdiction

When migrating clients targeting the Nordic market, physical location is a technical constraint, not just a preference. The round-trip time (RTT) between a server in Oslo and a user in Tromsø is significantly better than routing through Frankfurt.

Furthermore, under GDPR Article 44, transferring data outside the EEA (European Economic Area) requires specific safeguards. Hosting within Norway simplifies compliance significantly. CoolVDS infrastructure is built to ensure data sovereignty remains intact, satisfying even the strict requirements of Norwegian financial and health sectors.

Final Cutover Checklist

When you are ready to switch:

  1. Lower the TTL (Time To Live) on your DNS records to 300 seconds at least 24 hours before the cutover.
  2. Put the application in "Maintenance Mode" (or Read-Only).
  3. Verify SHOW SLAVE STATUS shows zero lag.
  4. Point the application config to the new CoolVDS IP.
  5. Stop the Slave on the new server and run RESET SLAVE ALL;.
  6. Open the gates.

Migrations don't have to be disasters. They are simply a test of your preparation and your platform's capabilities. If your current host chokes on I/O wait during a simple backup, they aren't going to survive a full-scale migration.

Ready to upgrade your infrastructure? Don't let slow I/O kill your SEO or your uptime. Deploy a test instance on CoolVDS in 55 seconds and feel the difference NVMe makes.