The "Cloud" Promise vs. The I/O Reality
It is September 2009. Everyone from Amazon to Rackspace is talking about the "Cloud." The promise is seductive: infinite scalability, utility billing, and no hardware headaches. But if you have actually deployed a high-traffic MySQL database or a Magento store on these platforms, you have likely discovered the ugly truth hiding behind the glossy marketing brochures.
Latency.
Most "Cloud" VPS providers today rely on massive, centralized Storage Area Networks (SANs). In theory, this provides redundancy. In practice, when your neighbor decides to run a backup or a massive compile job, your disk I/O wait times skyrocket. You aren't just sharing a CPU; you are fighting for spindle time on a disk array located three racks away.
The Anatomy of a Storage Bottleneck
I recently consulted for a media firm in Oslo migrating from physical Dell PowerEdge servers to a virtualized environment. Their web servers were fine, but their database crawled. The culprit wasn't RAM or CPU; it was I/O contention.
When we ran diagnostics, the CPU was 90% idle, but the application was unresponsive. Here is the command that revealed the truth:
```shell
vmstat 1
```
We saw the wa (I/O wait) column consistently hitting 40-50%. In other words, the CPU spent roughly half its time sitting idle, waiting for the disk subsystem to return data. In a shared SAN environment, you have zero control over this.
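To turn that eyeball check into a single number, you can average the wa column with awk. This is a minimal sketch: the canned vmstat sample below is illustrative, and the field position (16) matches the standard procps vmstat layout on CentOS 5 and Debian Lenny; confirm it against the header line on your own box before trusting the result.

```shell
# Average the 'wa' (I/O wait) column from vmstat output.
# In practice, pipe in a live run: vmstat 1 10 | awk '...'
# The heredoc below is a canned, illustrative sample.
awk 'NR > 2 { sum += $16; n++ } END { printf "avg wa: %.1f%%\n", sum/n }' <<'EOF'
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  3      0  81920  10240 512000    0    0  1200   300  400  800  5  5 45 45  0
 2  4      0  80000  10000 510000    0    0  1500   250  420  850  4  6 40 50  0
EOF
```

Anything averaging above 20% under normal load means your application is disk-bound, not CPU-bound.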
Pro Tip: If you are running Linux (CentOS 5.3 or Debian Lenny), mount your filesystems with noatime. By default, Linux writes a timestamp every time a file is read. On a busy web server, those extra metadata writes cut your I/O throughput significantly.
Open your /etc/fstab and adjust your partition flags:
```shell
/dev/xvda1 / ext3 defaults,noatime,nodiratime 1 1
```
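A quick sanity check is to scan fstab for ext3 mounts still missing the flag. This is a sketch against a sample file (the /tmp/fstab.sample path and its entries are made up for illustration); point the awk line at /etc/fstab on a real system.

```shell
# Build an illustrative fstab sample (stand-in for /etc/fstab).
cat > /tmp/fstab.sample <<'EOF'
/dev/xvda1 /     ext3 defaults,noatime,nodiratime 1 1
/dev/xvdb1 /var  ext3 defaults                    1 2
EOF

# Report any ext3 mount whose options lack 'noatime'.
awk '$3 == "ext3" && $4 !~ /noatime/ { print $2 " is missing noatime" }' /tmp/fstab.sample
```

Once /etc/fstab is updated, `mount -o remount,noatime /` applies the change to a live system without a reboot.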
Local RAID: The Pragmatic Alternative
While the industry chases the SAN dream, the battle-hardened approach for 2010 is actually a step back to basics: Local, Hardware-Controlled RAID.
At CoolVDS, we deviate from the "centralized storage" trend. We use local RAID-10 arrays with 15k RPM SAS drives directly attached to the hypervisor node. Why? Because physics wins. Eliminating the network hop between the compute node and the storage node reduces latency from milliseconds to microseconds.
Performance Comparison
| Feature | Typical Cloud SAN | CoolVDS Local RAID-10 |
|---|---|---|
| Network Latency | Variable (High) | Negligible (Local Bus) |
| Neighbor Impact | Severe | Isolated via Xen |
| Throughput | Shared 1Gbps link | Dedicated Controller Speed |
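Don't take the table on faith; measure your own node. Below is a crude sequential-write sketch with dd. The conv=fdatasync flag forces the data to disk before dd reports a speed, so the page cache doesn't flatter the number; /tmp/ddtest is just a scratch path, and the expected figures in the comments are rough ballparks, not guarantees.

```shell
# Write 64 MB sequentially and flush it to disk before timing ends.
# Rough expectation: a healthy 15k SAS RAID-10 array should report
# well over 100 MB/s, while a congested shared SAN can drop far
# lower during a neighbor's backup window.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 conv=fdatasync
rm -f /tmp/ddtest
```

Run it a few times at different hours of the day; on a shared SAN, the variance between runs is often more damning than the average.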
Data Sovereignty: The Norwegian Context
Beyond raw speed, there is the legal landscape. The Personopplysningsloven (Personal Data Act) and the Data Inspectorate (Datatilsynet) are becoming increasingly strict about where sensitive data lives. The "Safe Harbor" framework exists, but do you really want your customer data replicating to a US-based cloud bucket without your knowledge?
Hosting in Norway, specifically in Oslo data centers connected to NIX (Norwegian Internet Exchange), offers two advantages:
- Compliance: Your data stays within Norwegian jurisdiction, simplifying compliance with the Data Protection Directive (95/46/EC).
- Speed: Latency from Oslo to the rest of Norway is typically under 5ms. If you host in Frankfurt or Dublin, you are adding 30-50ms of round-trip time to every packet.
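The penalty compounds quickly, because a page load is many sequential round trips (DNS, TCP handshake, HTTP requests), not one. A back-of-the-envelope sketch with illustrative numbers; the 40 round trips figure is a rough assumption for a non-optimized dynamic site, not a measurement:

```shell
# Illustrative arithmetic only: extra latency from hosting abroad.
extra_rtt_ms=40    # assumed added RTT per round trip (e.g. Oslo vs. Frankfurt)
round_trips=40     # assumed sequential round trips for one page load
echo "added page latency: $(( extra_rtt_ms * round_trips )) ms"
```

Even with generous pipelining, tens of extra round trips at 30-50ms each is the difference between a snappy page and a sluggish one.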
Preparing for 2010: Emerging Technologies
We are also keeping a close eye on Solid State Drives (SSDs). While the Intel X25-E is currently too expensive for mass storage, it is already changing the game for write-intensive journals such as the ZFS Intent Log (ZIL). We are currently testing hybrid setups where hot data lives on flash, while bulk data sits on SAS.
Until SSDs become cost-effective for the enterprise, the best "Cloud Storage" solution isn't a nebulous cloud at all. It's high-speed, redundant, local storage managed by a hypervisor like Xen that guarantees your resources are actually yours.
Don't let I/O wait times kill your application's performance. Stop fighting for scraps on a crowded SAN.
Deploy a RAID-10 backed instance on CoolVDS today and see what 15k SAS drives can actually do.