Surviving the Outage: Building a Bulletproof Multi-Provider Strategy in 2014
Let’s be honest with each other. If you believe the "100% Uptime Guarantee" stamped on your hosting provider's footer, you haven't been in this industry long enough to smell the burning silicon. I've spent the last decade watching servers panic, RAID cards fail silently, and transatlantic fiber cables get severed by fishing trawlers. When your primary data center goes dark, that SLA refund won't recover your reputation or your lost revenue.
In 2014, relying on a single vendor—no matter how big—is negligence. The buzzword in the boardroom is "Hybrid Cloud," but for us in the trenches, it’s simply about redundancy. It’s about not putting all your eggs in one rack.
Today, I’m going to show you how to architect a robust, multi-provider setup. We will use CoolVDS in Norway as our secure, low-latency performance hub, and fail over cleanly to a secondary location without losing data integrity. We aren't just talking theory; we are looking at haproxy.cfg, MySQL replication, and the cold hard reality of network latency.
The Architecture: The "Norwegian Fortress" Approach
Why center the architecture in Norway? Two reasons: Latency and Datatilsynet. If your customers are in Oslo, Bergen, or Trondheim, serving them from a data center in Virginia or even Frankfurt adds measurable milliseconds to the Time To First Byte (TTFB). More importantly, with the current scrutiny on Safe Harbor and data privacy, keeping your user database on Norwegian soil ensures you comply with the Personopplysningsloven (Personal Data Act).
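If you want to put numbers on that claim, curl's timing variables give you a quick-and-dirty TTFB reading from any shell. A one-liner sketch; www.example.no is a placeholder for your own site:
# Measure time-to-first-byte as seen from the client
curl -o /dev/null -s -w 'TTFB: %{time_starttransfer}s\n' http://www.example.no/
Run it from a machine in Oslo against both candidate data centers and the difference speaks for itself.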
Here is the battle plan:
- Primary Node (Master): CoolVDS High-Performance SSD Instance (Oslo). Handles all writes and primary read traffic.
- Secondary Node (Hot Spare): A commodity VPS provider (e.g., Frankfurt or London) for disaster recovery.
- The Glue: OpenVPN tunnels and HAProxy for load balancing.
Pro Tip: Never run database replication over the public internet without encryption. I’ve seen packet sniffers capture SQL streams in the wild. Always wrap your replication traffic in an OpenVPN tunnel or use SSL enforcement in MySQL.
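For the belt-and-suspenders crowd, MySQL can also refuse unencrypted connections per account. A minimal sketch, assuming you have already configured ssl-ca, ssl-cert, and ssl-key in my.cnf, and using the repl user we create in Step 2:
-- Force the replication account to connect over SSL only
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'10.8.0.%' REQUIRE SSL;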
Step 1: The Network Bridge (OpenVPN)
Before we sync data, we need a secure private network spanning our providers. We’ll set up the CoolVDS instance as the OpenVPN server.
First, install OpenVPN on your CentOS 6 server. Both packages live in the EPEL repository, not in base, so enable that first:
yum install epel-release -y   # from CentOS extras; or rpm -Uvh the epel-release package
yum install openvpn easy-rsa -y
mkdir -p /etc/openvpn/easy-rsa
cp -R /usr/share/easy-rsa/2.0/* /etc/openvpn/easy-rsa/
cd /etc/openvpn/easy-rsa/
source ./vars
./clean-all
./build-ca
./build-key-server server
./build-key secondary   # client certificate for the backup VPS
./build-dh
Configure /etc/openvpn/server.conf to handle the tunnel. We want a persistent connection that reconnects automatically if the network flutters.
port 1194
proto udp
dev tun
ca keys/ca.crt
cert keys/server.crt
key keys/server.key
# 1024-bit DH is the easy-rsa default; consider KEY_SIZE=2048 in ./vars
dh keys/dh1024.pem
server 10.8.0.0 255.255.255.0
ifconfig-pool-persist ipp.txt
keepalive 10 120
comp-lzo
persist-key
persist-tun
status openvpn-status.log
verb 3
Start the service with service openvpn start and make it boot-persistent with chkconfig openvpn on. Once your client (the secondary VPS) connects, you have a secure private IP range (10.8.0.x) to route your traffic.
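For completeness, here is a minimal client config for the secondary VPS. This is a sketch: it assumes you generated the client certificate with ./build-key secondary as above, copied ca.crt, secondary.crt, and secondary.key over a secure channel, and that vps.example.no stands in for your CoolVDS node's public address.
# /etc/openvpn/client.conf on the secondary VPS
client
dev tun
proto udp
remote vps.example.no 1194
resolv-retry infinite
nobind
persist-key
persist-tun
ca keys/ca.crt
cert keys/secondary.crt
key keys/secondary.key
comp-lzo
verb 3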
Step 2: Database Replication (MySQL 5.5)
Latency kills synchronous replication over WAN. Unless you have a dark fiber line between providers (you don't), do not try to run MySQL Cluster or Galera across different providers: every commit has to wait on remote certification, so the latency variance stalls your writes. Instead, we stick to standard asynchronous Master-Slave replication.
On your CoolVDS Master, edit /etc/my.cnf. We need to enable the binary log and set a unique server ID.
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
# Replication Settings
log-bin=mysql-bin
server-id=1
binlog_format=MIXED    # safer than the 5.5 default of STATEMENT for non-deterministic queries
expire_logs_days=7     # don't let binary logs fill the disk
innodb_flush_log_at_trx_commit=1   # full durability on the master
sync_binlog=1
innodb_buffer_pool_size=2G # Adjust based on your CoolVDS RAM
Restart MySQL with service mysqld restart so the binary log settings take effect, then create the replication user. Note that we restrict access to the VPN subnet for security:
CREATE USER 'repl'@'10.8.0.%' IDENTIFIED BY 'StrongPassword123!';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'10.8.0.%';
FLUSH PRIVILEGES;
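Before the slave can replicate, it needs a consistent snapshot of the master plus the binlog coordinates that match that snapshot. A sketch using mysqldump: --master-data=2 writes the matching CHANGE MASTER coordinates as a comment at the top of the dump, and --single-transaction avoids locking your InnoDB tables while it runs.
# On the master: dump everything with binlog coordinates baked in
mysqldump -u root -p --all-databases --single-transaction --master-data=2 > snapshot.sql
# Ship it over the VPN and load it on the slave
scp snapshot.sql root@10.8.0.2:/root/
mysql -u root -p < snapshot.sql   # run this on the slave
The file and position values in the next step come from that comment (or from SHOW MASTER STATUS on the master); the ones shown below are examples only.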
On the secondary node, set server-id=2 in its my.cnf, load the snapshot, then point it at the master and start replication:
CHANGE MASTER TO
MASTER_HOST='10.8.0.1',
MASTER_USER='repl',
MASTER_PASSWORD='StrongPassword123!',
MASTER_LOG_FILE='mysql-bin.000001',
MASTER_LOG_POS=107;
START SLAVE;
Verify with SHOW SLAVE STATUS\G: you want Slave_IO_Running and Slave_SQL_Running both reporting Yes, and Seconds_Behind_Master at or near zero.
Step 3: Intelligent Routing with HAProxy
Now, how do we route traffic? We place HAProxy nodes at the edge. In a perfect world, you'd use Anycast DNS, but for most setups in 2014, a DNS Failover service (like DNS Made Easy) pointing to HAProxy is sufficient.
Here is a battle-tested haproxy.cfg snippet that checks backend health and ensures users stick to the active node:
global
    log 127.0.0.1 local0
    maxconn 4096
    user haproxy
    group haproxy
    daemon

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    retries 3
    option  redispatch
    timeout connect 5000
    timeout client  50000
    timeout server  50000

frontend http_front
    bind *:80
    acl is_norway_down nbsrv(norway_primary) lt 1
    use_backend backup_node if is_norway_down
    default_backend norway_primary

backend norway_primary
    mode http
    balance roundrobin
    option httpchk HEAD /health_check.php HTTP/1.1\r\nHost:\ localhost
    server coolvds_oslo 127.0.0.1:8080 check inter 2000 rise 2 fall 3

backend backup_node
    mode http
    option httpchk HEAD /health_check.php HTTP/1.1\r\nHost:\ localhost
    # Route over VPN to secondary provider if local app fails
    server secondary_eu 10.8.0.2:80 check inter 2000
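Don't wait for a real outage to find out whether this works. A quick drill, assuming your rsyslog config sends local0 to /var/log/haproxy.log:
# Confirm the health check answers locally
curl -sI http://127.0.0.1:8080/health_check.php
# Simulate a primary failure: stop the local app server
service httpd stop
# HAProxy should mark norway_primary down within ~6s (fall 3 x inter 2000)
# and the nbsrv ACL will shift traffic to backup_node
tail -f /var/log/haproxy.log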
The Reality of IOPS and "Noisy Neighbors"
Configuration is only half the battle. The hardware running underneath your virtualization layer dictates your stability. This is where most generic cloud providers fail. They oversell their CPUs and put you on shared spinning rust (HDDs) where one neighbor doing a backup kills your database performance.
In our benchmarks, CoolVDS instances utilizing pure SSD storage consistently deliver the I/O throughput required for heavy InnoDB write workloads. When your master database is taking 500 writes per second, you cannot afford the I/O wait times inherent in legacy VPS hosting. A multi-cloud strategy is useless if your primary node is sluggish.
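Don't take my word for it (or anyone's): measure. sysbench's fileio test, available from EPEL, produces a comparable random-write figure on any box; this is the sysbench 0.4 syntax that CentOS 6 ships:
# Prepare a 4GB working set, hammer it with random writes for 60s, clean up
sysbench --test=fileio --file-total-size=4G prepare
sysbench --test=fileio --file-total-size=4G --file-test-mode=rndwr --max-time=60 --max-requests=0 run
sysbench --test=fileio --file-total-size=4G cleanup
Compare the requests-per-second line across providers; contended spinning rust typically lands an order of magnitude below a dedicated SSD tier.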
Handling the Failover
If the unthinkable happens and the connection to Oslo drops, HAProxy will detect the failed health check (3 failed attempts within 6 seconds based on our config) and route traffic to the backup backend.
Warning: Automated failover for the database is risky. It is better to accept a few minutes of downtime to manually promote the Slave to Master than to risk a "Split Brain" scenario where both servers think they are the master. Use your monitoring tools (Nagios or Zabbix) to alert you, then run the promotion script.
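For reference, here is the shape of that promotion script. A minimal sketch, assuming the slave was running with read_only=1 (it should be) and that the old master is fenced or powered off before you run it:
#!/bin/bash
# promote_slave.sh -- run on the secondary ONLY after confirming the master is dead
mysql -u root -p <<'SQL'
STOP SLAVE;
RESET SLAVE;
SET GLOBAL read_only = 0;
SQL
echo "Promoted. Now repoint the application (or HAProxy) at this node."
When the old master comes back, rebuild it as a slave of the new master; never let it boot straight back into accepting writes.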
Conclusion
Building a multi-provider setup in 2014 isn't just for the Netflixes of the world. It is accessible to any system administrator willing to edit a few config files. By anchoring your infrastructure on a high-performance, compliant platform like CoolVDS and bridging it securely to a backup provider, you achieve the holy grail: data sovereignty and disaster resilience.
Don't wait for the next fiber cut to realize your redundancy plan was just a PowerPoint slide. Spin up a CoolVDS SSD VPS today and start building your fortress.