Cloud Storage in 2010: Escaping the SAN Trap with High-Performance Virtualization

The year 2009 has been defined by one buzzword: "Cloud." If you believe the marketing brochures from Silicon Valley, we are all supposed to throw our hardware into the ocean and run everything on Amazon EC2. But for those of us actually running infrastructure in Oslo and managing data for Norwegian enterprises, the reality is far less fluffy.

We are standing on a precipice. The transition from physical dedicated servers to Virtual Dedicated Servers (VDS) is no longer a question of "if," but "how." The bottleneck, however, has shifted. It is no longer CPU cycles; Intel's Xeon 5500 series (Nehalem) has solved that. The new enemy is Disk I/O. As we look toward 2010, the ability to scale storage without buying a six-figure NetApp SAN is what will separate successful CTOs from the ones filling out bankruptcy forms.

The I/O Crisis: Why Spindles Still Matter

In a standard shared hosting environment, you are fighting for I/O operations per second (IOPS) with hundreds of other users. If one neighbor decides to run a poorly indexed MySQL query or a massive backup script, your application stalls. This is "iowait," and it is the silent killer of Web 2.0 applications.
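A quick way to see how much of your CPU time is lost to iowait, using nothing but the kernel's own counters (the field layout of the aggregate "cpu" line in /proc/stat is user, nice, system, idle, iowait, in jiffies):

```shell
# Rough check: what share of CPU time since boot went to iowait?
read -r _ user nice system idle iowait _ < /proc/stat
total=$((user + nice + system + idle + iowait))
echo "iowait: $((iowait * 100 / total))% of CPU time since boot"
```

For ongoing monitoring, `vmstat 1` or `iostat -x 1` (from the sysstat package) shows the same figure per second in the "wa" column; a sustained value above a few percent on a web server is a red flag.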

Many providers try to sell you on capacity—500GB, 1TB! But capacity is cheap. Speed is expensive. To achieve the low latency required for a heavy Magento or Drupal installation, you cannot rely on standard 7.2k RPM SATA drives. You need speed.

Pro Tip: Always ask your provider about their RAID level. RAID 5 is dead for write-heavy databases due to parity calculation overhead. At CoolVDS, we strictly implement RAID 10 with 15k RPM SAS drives (or enterprise SSDs where available). RAID 10 avoids the RAID 5 read-modify-write penalty entirely and, by striping across mirrors, can roughly double random read throughput.
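Before signing a contract, run your own crude sanity check. The following dd test measures sequential write throughput only (it says nothing about random IOPS, which is what databases actually need), but it instantly exposes a grossly oversold host; the 64 MB size and /tmp path are arbitrary choices for illustration:

```shell
# Crude sequential write test: 64 MB, with fdatasync forcing the data
# to disk before dd reports, so the page cache does not flatter the number.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1
rm -f /tmp/ddtest
```

For a realistic random-IOPS picture, follow up with a proper benchmark tool such as fio or bonnie++ against the actual device you will be renting.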

Architecting High Availability without the Hardware Cost

In the past, High Availability (HA) meant buying two physical servers and a shared physical storage array. That is a single point of failure (SPOF) that costs a fortune. In 2010, we move this to software.

The most robust solution available right now for Linux is DRBD (Distributed Replicated Block Device). Think of it as network-based RAID 1. It mirrors a block device between two servers via the network. If your primary node dies, the secondary takes over with zero data loss.

Configuration: Setting up DRBD on CentOS 5.3

Here is a real-world configuration we deployed last week for a client needing redundancy between two CoolVDS instances. This setup assumes you have a dedicated partition /dev/sdb1 on both nodes.

# /etc/drbd.conf
global {
    usage-count yes;
}

common {
    syncer { rate 100M; } # Cap resync at 100 MB/s; assumes a dedicated gigabit replication link
}

resource mysql_data {
    protocol C; # Synchronous replication for zero data loss

    startup {
        wfc-timeout  15;    # Wait for connection 15 seconds
        degr-wfc-timeout 60;
    }

    disk {
        on-io-error   detach;
    }

    net {
        # If split-brain occurs, disconnect.
        after-sb-0pri disconnect;
        after-sb-1pri disconnect;
        after-sb-2pri disconnect;
        rr-conflict disconnect;
    }

    on node1.coolvds.no {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.1:7788;
        meta-disk internal;
    }

    on node2.coolvds.no {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.2:7788;
        meta-disk internal;
    }
}

Once configured, initialize the metadata and start the service on both nodes, then force the initial sync from the node that holds the good data:

[root@node1 ~]# drbdadm create-md mysql_data   # repeat on node2
[root@node1 ~]# service drbd start             # repeat on node2
[root@node1 ~]# drbdadm -- --overwrite-data-of-peer primary mysql_data

You now have a block device /dev/drbd0 that you can format as ext3 and mount. If Node 1 fails, Heartbeat (Linux-HA) can promote Node 2 and mount the drive automatically. This is how you build enterprise resilience on a VDS budget.
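Heartbeat performs the promotion automatically, but the manual steps are worth knowing for drills and for recovering when the cluster stack itself misbehaves. A sketch of a manual failover, assuming the resource name from the config above; /mnt/mysql is a hypothetical mount point:

```shell
# Manual failover, run on the surviving node (node2). Heartbeat
# performs these same steps automatically when node1 stops responding.
drbdadm primary mysql_data       # promote the local replica to primary
mount /dev/drbd0 /mnt/mysql      # mount the replicated filesystem (hypothetical path)
service mysqld start             # bring the database back up
```

Practice this on a staging pair before you need it in anger; the first failover you ever perform should not be during an outage.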

Data Sovereignty: The Norwegian Advantage

We cannot discuss cloud storage without addressing the elephant in the room: The USA Patriot Act. Since its enactment, any data hosted by a US company (even on servers located in Europe) could theoretically be accessed by US authorities without a warrant.

For Norwegian businesses, this is unacceptable. We operate under Personopplysningsloven (Personal Data Act). The Norwegian Data Inspectorate (Datatilsynet) is very clear about the responsibilities of data controllers.

Hosting your data on a platform like CoolVDS, which is legally and physically domiciled in Norway, ensures that your data remains under Norwegian jurisdiction. Our datacenters connect directly to NIX (Norwegian Internet Exchange) in Oslo. This doesn't just offer legal protection; it offers physics-based advantages.

Latency Comparison (Ping from Oslo)

Destination              Average Latency   Hops
CoolVDS (Oslo)           2-4 ms            3
Amazon EC2 (US-East)     110-140 ms        18+
Generic Host (Germany)   35-50 ms          12

For a database-driven application, that ~30 ms difference to Germany accumulates with every round trip. If your PHP script runs 50 sequential SQL queries to generate a page, that is 1.5 seconds of pure network lag added to your load time. Local hosting is not just patriotism; it is performance.
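The arithmetic, spelled out; the RTT figures are illustrative midpoints taken from the table above:

```shell
# 50 sequential queries, each paying one full network round trip.
# RTTs are illustrative: ~5 ms to a local host vs ~35 ms to Germany.
queries=50
rtt_local_ms=5
rtt_remote_ms=35
echo "extra lag: $(( queries * (rtt_remote_ms - rtt_local_ms) )) ms"
```

Connection pooling and query batching soften the blow, but no amount of application tuning removes the speed-of-light tax on every round trip.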

The CoolVDS Architecture: KVM and Hardware Isolation

Why do we outperform standard VPS providers? It comes down to the hypervisor. Many budget hosts use OpenVZ or Virtuozzo. These are "containers," not true virtualization. They share the host's kernel. If one tenant crashes the kernel, everyone goes down.

At CoolVDS, we are betting big on KVM (Kernel-based Virtual Machine). KVM was merged into the mainline Linux kernel in 2.6.20 (early 2007), and it allows us to treat every VDS as a true dedicated server. You get your own kernel, your own memory space, and thanks to Intel VT-x hardware support, near-native CPU performance.
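You can check for yourself whether a box is capable of hosting KVM. The CPU must advertise the hardware virtualization flag, and on some motherboards the feature also has to be switched on in the BIOS even when the flag is present:

```shell
# Count CPU flags advertising hardware virtualization support:
# vmx = Intel VT-x, svm = AMD-V. Zero means KVM gets no hardware assist.
grep -E -c 'vmx|svm' /proc/cpuinfo || echo "no hardware virtualization flags"

# On a running KVM host, the kernel modules should be loaded:
lsmod | grep -w '^kvm' || echo "kvm module not loaded"
```

Inside a container-based "VPS" (OpenVZ, Virtuozzo) you will never see these modules, because you are sharing the host's kernel rather than running your own.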

We also tune our host nodes specifically for I/O throughput. Here is a snippet of how we optimize the I/O scheduler on our host nodes to prioritize latency over raw throughput, ensuring your interactive apps stay snappy:

# Set the scheduler to 'deadline' for lower latency on RAID arrays
echo deadline > /sys/block/sda/queue/scheduler

# Increase read-ahead to 4096 sectors (2 MB) for sequential reads
blockdev --setra 4096 /dev/sda

Preparing for 2010

The era of buying hardware is ending for most SMEs. But the era of managing infrastructure is just beginning. You do not need to own the spinning rust, but you do need to own the architecture.

Whether you are deploying a redundant Postfix mail cluster or a high-traffic Joomla site, the underlying storage technology dictates your stability. Do not settle for oversold SATA drives in a datacenter you can't place on a map.

Secure your data under Norwegian law, reduce your latency to single digits, and leverage the power of KVM. It is time to stop worrying about hard drive failures and start building the future.

Is your infrastructure ready for the new decade? Deploy a high-availability KVM instance on CoolVDS today and experience the difference of local, enterprise-grade storage.