Cloud Storage Strategies for 2011: Performance, Compliance, and Reality
If I hear one more vendor pitch me a "revolutionary cloud storage solution" that turns out to be just a slow NFS mount over a public network, I might just decommission my own workstation. It is 2011. The hype cycle for "Cloud" is deafening, but for those of us responsible for actual uptime and transaction speeds in Oslo and Stavanger, physics still applies.
Latency is the enemy. While the US market is rushing toward massive, centralized object storage, the pragmatic move for Norwegian enterprises isn't always to dump everything into a bucket across the Atlantic. It's about understanding the I/O path. Whether you are running a MySQL cluster or a heavy Magento installation, the distance between your CPU and your disk platters defines your user experience.
The SAN Delusion vs. Local RAID Reality
Many hosting providers are currently pushing Storage Area Networks (SAN) as the ultimate solution for scalability. On paper, it looks great: decouple compute from storage, migrate VMs instantly. In practice? You are introducing network overhead to every single read/write operation.
I recently audited a setup for a client in Bergen experiencing 5-second load times. Their provider had their VMs on a saturated SAN. The iowait was consistently above 40%. The solution wasn't more RAM; it was moving to local storage.
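You don't need a consultant to spot this on your own box. The sysstat package ships iostat, which reports the CPU iowait percentage alongside per-device latency. A quick sketch (the interval and device names are just examples):
# Install sysstat if it is not already there (CentOS/RHEL)
yum install -y sysstat
# Extended device statistics every 5 seconds:
# watch %iowait in the CPU summary and await/%util per device
iostat -x 5
If await climbs into tens of milliseconds while %util sits near 100, the disks (or the network path to them) are the bottleneck, not your application.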
At CoolVDS, we have taken a different stance. We champion Local RAID 10. By striping and mirroring disks directly on the hypervisor chassis, we eliminate the network hop entirely. Yes, live migration is harder. But do you want a VM that moves easily, or a VM that performs?
| Feature | Network Storage (SAN/NAS) | CoolVDS Local RAID 10 |
|---|---|---|
| Latency | Variable (Network Dependent) | Microseconds (Bus Speed) |
| Throughput | Limited by 1Gbps/10Gbps Link | Limited by Disk Controller (6Gbps SAS) |
| Reliability | Single Point of Failure (Switch) | Redundant Disks (4+ drives) |
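Don't take the table on faith; measure the volume you are actually paying for. A rough sketch using dd to time synchronous 4KB writes (the file path is just an example, and you should run it on an otherwise quiet system):
# Force every 4KB block to hit the disk before the next one is written
dd if=/dev/zero of=/var/tmp/latency-test bs=4k count=1000 oflag=dsync
rm -f /var/tmp/latency-test
dd prints the elapsed time and effective throughput when it finishes. On local RAID 10 this completes quickly; on a busy SAN, every one of those thousand writes pays the network round trip, and the difference is usually obvious.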
The Rise of SSD: A Warning
Solid State Drives are beginning to mature. We are seeing early adoption of Intel X25-E drives in enterprise environments. However, a word of caution: generic consumer SSDs degrade rapidly under server write loads due to write amplification. If you are deploying SSDs in 2011, ensure you are using SLC (Single-Level Cell) flash, or you will lose data.
For most database workloads today, high-RPM SAS drives in RAID 10 typically offer the best balance of cost and reliability, though we are aggressively testing enterprise SSDs for our upcoming high-performance tier.
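If you do put SSDs into production, watch the wear counters instead of waiting for the drive to fail. A minimal sketch with smartctl, assuming an Intel drive that exposes a Media_Wearout_Indicator attribute (attribute names and device paths vary by vendor):
# Dump SMART attributes and pull out the wear-related counters
smartctl -A /dev/sda | grep -iE 'wear|media'
On Intel drives the normalised value starts at 100 and counts down as the flash wears; plan the replacement long before it bottoms out.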
Optimizing Linux I/O for Virtualization
Regardless of your hardware, your OS configuration matters. With the recent release of RHEL 6 and the maturity of the KVM hypervisor (which we use extensively at CoolVDS over the aging Xen PV), the I/O scheduler is critical. The default CFQ scheduler is often suboptimal for virtualized guests.
I recommend switching your database servers to the deadline or noop scheduler to reduce CPU overhead on disk operations. Here is how you do it on CentOS 5/6:
# Check current scheduler
cat /sys/block/sda/queue/scheduler
# Change to deadline (temporary)
echo deadline > /sys/block/sda/queue/scheduler
# Make it persistent in /boot/grub/grub.conf
kernel /vmlinuz-2.6.18-194.el5 ro root=/dev/VolGroup00/LogVol00 elevator=deadline
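Inside a KVM guest the calculus is slightly different: the host is already doing the real scheduling, so noop is usually the better choice there. A sketch for applying it to every virtio disk at boot via /etc/rc.local (the vd* pattern assumes virtio devices; adjust it to whatever your guest actually presents):
# /etc/rc.local: set noop on all virtio block devices in the guest
for dev in /sys/block/vd*/queue/scheduler; do
    echo noop > "$dev"
done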
Pro Tip: Don't forget noatime. Mounting your filesystems with the noatime flag prevents the server from writing metadata every time a file is read. It's a 10-second fix that can reduce disk writes by 20%.
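In practice that means adding noatime to the mount options in /etc/fstab and remounting. A sketch using the same root volume as the grub example above (match the device, filesystem, and mount point to your own layout):
# /etc/fstab entry with noatime added
/dev/VolGroup00/LogVol00  /  ext3  defaults,noatime  1 1
# Apply it without a reboot
mount -o remount,noatime /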
The "Patriot Act" and Norwegian Data Sovereignty
We cannot discuss cloud storage without addressing the elephant in the room: The US Patriot Act. If your data is hosted with a US-based cloud giant, it is subject to US jurisdiction, regardless of where the server physically sits. For Norwegian businesses handling sensitive customer information, this poses a conflict with the Personal Data Act (Personopplysningsloven) and the directives from Datatilsynet.
This is where the geography of your VPS matters. Hosting on CoolVDS means your data resides physically in Norway, under Norwegian law. We peer directly at NIX (Norwegian Internet Exchange) in Oslo. This isn't just about compliance; it's about physics. Routing traffic from Oslo to a datacenter in Virginia and back adds 80-100ms of latency. Routing it to our facility in Oslo adds 2ms.
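This is trivial to verify from your own desk: ping the candidate host and read the round-trip times (the hostname below is a placeholder):
# Ten round trips; compare the avg figure against a US-hosted alternative
ping -c 10 your-vps.example.no
If the average round trip is already in the double digits before your application does any work, no amount of caching will hide it from your users.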
When you are building a system for 2011 and beyond, look past the "Cloud" marketing brochures. Look at the disk controller. Look at the jurisdiction. Look at the latency.
If you need raw I/O performance without the noisy neighbor issues of shared SANs, it is time to evaluate your infrastructure. Deploy a KVM instance on CoolVDS today and see what local RAID 10 actually feels like.