Hope Is Not A Strategy: Architecting Failover for Norwegian Infrastructure
Let’s be honest. If your disaster recovery plan is "I make backups sometimes," you are already down. You just don't know it yet.
I have spent the last week rebuilding a client's infrastructure after a catastrophic RAID controller failure at a budget provider in Frankfurt. They lost 14 hours of transactional data. Why? Because they trusted a single VPS instance with their entire livelihood. In 2014, with the tools we have available, that is inexcusable.
This isn't about buying the most expensive hardware. It's about architecture. It's about recognizing that in a world of virtualization, redundancy is the only reality. Here in Norway, we face unique challenges—from data sovereignty concerns following the Snowden leaks to the simple physics of latency to the NIX (Norwegian Internet Exchange).
The "Single Point of Failure" Myth
Many developers spin up a droplet or a slice, install a LAMP stack, and call it a day. They assume the host node is immortal. It is not. Hard drives adhere to physics, not your project deadline. When we built the infrastructure for CoolVDS, we specifically chose KVM (Kernel-based Virtual Machine) over OpenVZ for this very reason. In a containerized environment (OpenVZ), a kernel panic on the node takes everyone down. With KVM, you have full isolation. But even KVM cannot save you if the physical RAM stick errors out.
The 3-2-1 Rule is Obsolete. You Need Hot Standbys.
Traditional backup wisdom says: 3 copies of data, 2 different media, 1 offsite. That’s fine for archiving. It is useless for business continuity. If your e-commerce site goes dark, you cannot wait 4 hours to restore a tarball from an FTP server.
Pro Tip: Data sovereignty is critical. Under the Norwegian Personal Data Act (Personopplysningsloven), you are responsible for where your customer data lives. Hosting in Oslo isn't just about latency; it's about legal compliance. Don't rely on "Safe Harbor" blindly.
Technical Implementation: The Hot-Failover Stack
Here is the exact setup I deploy for mission-critical Norwegian clients. We use a primary node in Oslo (CoolVDS Zone A) and a secondary node (CoolVDS Zone B or a distinct physical host).
1. Database Replication (MySQL 5.5/5.6)
Forget standard backups for the moment. You need Master-Slave replication. If the Master dies, the Slave is ready to take reads immediately, and with a quick config change, writes.
On the Master server, edit your /etc/mysql/my.cnf:
[mysqld]
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
binlog_do_db = production_db
innodb_flush_log_at_trx_commit = 1
sync_binlog = 1

The sync_binlog = 1 flag is crucial. It forces the binary log to be synced to disk at every transaction commit. It hurts I/O performance slightly (which is why we use high-performance SSD arrays at CoolVDS), but it guarantees data consistency.
On the Slave server:
[mysqld]
server-id = 2
relay-log = /var/log/mysql/mysql-relay-bin.log
log_bin = /var/log/mysql/mysql-bin.log
read_only = 1
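Those config files alone don't start anything: the Slave still has to be pointed at the Master. A minimal sketch, assuming 10.0.0.1 is the Master's private IP, repl/repl_password is a replication account you create for this purpose, and the binlog file and position are placeholders you read from SHOW MASTER STATUS on the Master:

# On the Master: create a replication account (user and password are placeholders)
mysql -u root -p -e "GRANT REPLICATION SLAVE ON *.* TO 'repl'@'10.0.0.%' IDENTIFIED BY 'repl_password';"

# On the Slave: point it at the Master and start the stream
# (log file and position below are placeholders from SHOW MASTER STATUS)
mysql -u root -p -e "CHANGE MASTER TO
  MASTER_HOST='10.0.0.1',
  MASTER_USER='repl',
  MASTER_PASSWORD='repl_password',
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=107;
START SLAVE;"

In practice you seed the Slave from a consistent dump first; mysqldump --master-data on the Master writes the correct log file and position into the dump for you.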
Once the slave threads are running, you can verify the stream status with a simple command:

mysql -u root -p -e "SHOW SLAVE STATUS\G"

Look for Slave_IO_Running: Yes and Slave_SQL_Running: Yes. If either is No, you are flying blind.
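And when the Master actually dies, the "quick config change" I mentioned earlier is roughly this, run on the Slave. A sketch; pair it with the IP failover in the next section so traffic follows the promotion:

# Stop applying relay logs from the (now dead) Master
mysql -u root -p -e "STOP SLAVE;"

# Open the node up for writes
mysql -u root -p -e "SET GLOBAL read_only = 0;"

# Also remove read_only = 1 from my.cnf, or the node
# flips back to read-only on the next restart.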
2. IP Failover with Keepalived
DNS propagation takes too long. Even with a low TTL (Time To Live), some ISPs ignore it. The professional solution is VRRP (Virtual Router Redundancy Protocol). We use keepalived to float a Virtual IP (VIP) between your two CoolVDS instances.
Install it easily on Ubuntu 14.04 LTS:
apt-get install keepalived

Here is a robust /etc/keepalived/keepalived.conf for the Master:
vrrp_script chk_mysql {
    script "/usr/bin/killall -0 mysqld"
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    interface eth0
    state MASTER
    virtual_router_id 51
    priority 101
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass s3cr3tp@ss
    }
    virtual_ipaddress {
        192.168.10.50
    }
    track_script {
        chk_mysql
    }
}

If MySQL crashes on the Master, the chk_mysql check fails and the Master loses its +2 weight bonus, dropping its effective priority to 101. The Slave, configured with base priority 100 plus the same bonus while its own MySQL is healthy, now outranks it at 102 and instantly claims the IP address 192.168.10.50. Your application servers just point to the VIP and never know the difference.
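For completeness, here is what differs on the Slave's side: a sketch of the mirror-image vrrp_instance. The vrrp_script block and everything not shown stay identical to the Master's file:

vrrp_instance VI_1 {
    interface eth0
    state BACKUP            # starts as backup instead of master
    virtual_router_id 51    # must match the Master's
    priority 100            # base priority one below the Master's 101
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass s3cr3tp@ss
    }
    virtual_ipaddress {
        192.168.10.50
    }
    track_script {
        chk_mysql
    }
}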
File Synchronization: Rsync is Your Best Friend
Databases are hard, but files are easy—if you script it right. Do not rely on manual uploads. Use a cron job to sync user uploads (images, PDFs) from Master to Slave.
Create a script /root/sync_media.sh:
#!/bin/bash
# Sync web contents from Master to Slave
# Flags: -a (archive), -v (verbose), -z (compress), --delete (remove files on dest that are gone on source)
SOURCE_DIR="/var/www/html/uploads/"
DEST_HOST="10.0.0.2"
DEST_DIR="/var/www/html/uploads/"
rsync -avz --delete -e "ssh -i /root/.ssh/id_rsa_backup" "$SOURCE_DIR" "root@${DEST_HOST}:${DEST_DIR}"
# Check exit status
if [ $? -eq 0 ]; then
    echo "[$(date)] Sync successful" >> /var/log/sync_media.log
else
    echo "[$(date)] Sync FAILED" >> /var/log/sync_media.log
fi

Make it executable:
chmod +x /root/sync_media.sh

And add it to your crontab to run every 5 minutes:

*/5 * * * * /root/sync_media.sh
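One caveat: if a sync ever takes longer than 5 minutes (a big upload batch, a congested link), cron will start a second rsync on top of the first. A simple guard, assuming the flock utility from util-linux (shipped with Ubuntu 14.04), serializes runs through a lock file:

*/5 * * * * flock -n /var/lock/sync_media.lock /root/sync_media.sh

The -n flag tells flock to give up immediately if the lock is already held, so an overlapping run is skipped rather than queued up behind the first.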
Latency Matters: The Oslo Advantage
When you replicate data, latency is the enemy. Replicating from Oslo to Amsterdam introduces 15-20ms of lag. Replicating from Oslo to a US East Coast server introduces 100ms+. This lag can cause your slave to drift behind the master.
By keeping both your primary and failover nodes within the CoolVDS Norwegian infrastructure, you utilize our local gigabit backbone. Latency between our nodes is often sub-millisecond. This ensures your Seconds_Behind_Master metric stays at 0.
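If you would rather watch that metric than trust it, a small cron-able check is enough. A sketch; the password, log path, and 10-second threshold are placeholders of my own choosing (a ~/.my.cnf credentials file beats a password on the command line):

#!/bin/bash
# Log a warning if the Slave falls more than 10 seconds behind the Master.
LAG=$(mysql -u root -pYOURPASS -e "SHOW SLAVE STATUS\G" | awk '/Seconds_Behind_Master/ {print $2}')
if [ "$LAG" = "NULL" ] || [ "$LAG" -gt 10 ] 2>/dev/null; then
    echo "[$(date)] Replication lag: ${LAG:-unknown}" >> /var/log/replication_lag.log
fi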
| Location A | Location B | Ping (Avg) | Replication Lag Risk |
|---|---|---|---|
| Oslo | Oslo (CoolVDS) | < 1ms | Negligible |
| Oslo | Frankfurt | ~25ms | Low |
| Oslo | New York | ~110ms | High |
The Verdict
Hardware is cheap. Downtime is expensive. In 2014, your reputation rests on your uptime. You don't need a team of 20 engineers to build a resilient system; you need Linux fundamentals and a hosting partner that gives you the raw access to implement them.
At CoolVDS, we don't oversell our CPU cores, and we don't hide iowait stats from you. We give you the high-performance SSD storage and the pure KVM virtualization you need to build the architecture I just described.
Stop hoping your server won't crash. Engineer it so it doesn't matter if it does. Start by checking your latency to our Oslo network today:
ping -c 4 coolvds.no