The Latency Lie in the "Cloud" Revolution
It is April 2010, and you cannot open a tech magazine without reading about "The Cloud." Marketing departments are obsessed with it. They promise infinite scalability and reduced costs. But if you are the one staring at a terminal window watching top while your load average spikes, you know the uncomfortable truth: network-based storage, typically a SAN (Storage Area Network), often kills database performance.
We are seeing a massive influx of clients migrating away from "big cloud" providers back to Virtual Dedicated Servers (VDS) because their I/O wait times are destroying their user experience. When your storage is located three switches and a fiber cable away from your CPU, latency is inevitable. For a static file server, this is fine. For a high-transaction MySQL database powering a Norwegian e-commerce site, it is a disaster.
The Anatomy of an I/O Bottleneck
Let’s look at a recent scenario. We audited a client running a Magento setup on a popular US-based cloud infrastructure. Their CPU usage was low, yet the site took 6 seconds to load. The culprit? %wa (I/O Wait).
Running vmstat 1 revealed the horror:
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 0  4      0 245000  15000 450000    0    0   850   900 1020 1500  5  2  0 93  0

That 93% wait means the CPU is sitting idle, begging the hard drive to return data. The cloud provider's SAN was congested by other tenants: the classic "noisy neighbor" effect. Because the storage wasn't local to the hypervisor, the latency variance was unpredictable.
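When vmstat points the finger at the disk, you can drill down per device without installing anything extra. Here is a minimal sketch that reads /proc/diskstats directly (field positions follow the kernel's iostats documentation; the output formatting is purely illustrative):

```shell
# Per-device I/O totals from /proc/diskstats (no extra packages needed).
# Field 3 = device name, field 4 = reads completed,
# field 8 = writes completed, field 13 = milliseconds spent doing I/O.
awk 'NF >= 13 { printf "%-8s reads=%-12s writes=%-12s busy_ms=%s\n", $3, $4, $8, $13 }' /proc/diskstats
```

Run it twice a few seconds apart: if busy_ms on your data disk is climbing by nearly 1000 per second of wall-clock time, the device is saturated. (If you have the sysstat package installed, iostat -x gives you the same picture with per-request await times.)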
Why Hardware RAID 10 is Still King
At CoolVDS, we take a different approach. We don't oversell the "cloud storage" buzzword if it means sacrificing raw speed. Our nodes in Oslo utilize local Hardware RAID 10 with 15k RPM SAS drives. In some high-performance zones, we are even testing the new Intel X25-E SLC SSDs for caching layers, though SAS remains the reliability standard.
By keeping storage on the same physical chassis as the CPU, we eliminate network latency. The result is consistent throughput and, more importantly, consistent IOPS (Input/Output Operations Per Second).
Configuration Tip: Optimizing for Virtualization
If you are managing a Linux VPS (running CentOS 5.4 or the brand new Ubuntu 10.04 LTS), the default I/O scheduler is often set to cfq (Completely Fair Queuing). This is designed for physical rotating platters.
Inside a virtual environment, the hypervisor handles the physical disk ordering. Your guest OS should just send requests as fast as possible. Switch your scheduler to deadline or noop to drop latency immediately.
# Check current scheduler
cat /sys/block/sda/queue/scheduler
# Change to noop on the fly
echo noop > /sys/block/sda/queue/scheduler

To make this permanent, add elevator=noop to your kernel line in /boot/grub/menu.lst.
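For illustration, a menu.lst entry with the elevator appended might look like the sketch below. The kernel version and root device here are examples only; edit your existing entry rather than copying these values.

```
# /boot/grub/menu.lst -- append elevator=noop to the kernel line
title CentOS 5 (2.6.18-164.el5)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-164.el5 ro root=/dev/VolGroup00/LogVol00 elevator=noop
        initrd /initrd-2.6.18-164.el5.img
```

After the next reboot, cat /sys/block/sda/queue/scheduler should show [noop] in brackets.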
Data Sovereignty: The Norwegian Context
Performance isn't the only concern. The legal landscape regarding data privacy is shifting. With the Data Retention Directive causing headaches across Europe, and the Norwegian Personopplysningsloven (Personal Data Act) setting strict standards, knowing exactly where your data sits is critical.
When you use a generic cloud storage API, your data could be fragmented across servers in Dublin, Frankfurt, or worse, replicated to the US under Patriot Act jurisdiction. The Norwegian Data Inspectorate (Datatilsynet) has raised valid concerns about this lack of control.
Pro Tip: Hosting your data in a Norwegian datacenter (connected via NIX in Oslo) not only reduces ping times to your local customers to under 2ms, but it also simplifies compliance. You know exactly which physical drive holds your customer data.
Balancing Cost and Reliability
We are not saying SANs are useless. They have their place in massive, enterprise-grade failover clusters (like Oracle RAC). But for 95% of web applications, the complexity adds cost and points of failure without delivering speed.
If you need high availability, manage it at the application layer. Set up MySQL Master-Slave replication between two CoolVDS instances. This gives you redundancy and the read-speed benefits of local disk I/O.
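As a sketch, the my.cnf fragments for a basic master-slave pair look like this. The server-ids, the shopdb database name, and the host roles are illustrative; adapt them to your own setup.

```
# master /etc/my.cnf -- enable the binary log so changes can be shipped
[mysqld]
server-id      = 1
log-bin        = mysql-bin
binlog-do-db   = shopdb

# slave /etc/my.cnf -- replay the master's log, refuse stray writes
[mysqld]
server-id      = 2
relay-log      = mysql-relay-bin
read-only      = 1
```

On the slave you then point replication at the master with CHANGE MASTER TO (host, replication user, and the binlog coordinates taken from SHOW MASTER STATUS), followed by START SLAVE. Check SLAVE STATUS regularly; replication that silently stopped is worse than no replication.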
Stop fighting with "Cloud" latency. If your database is choking, it’s time to get back to basics: dedicated resources, local RAID 10, and a connection straight to the Norwegian backbone.
Need to verify your disk speeds? Deploy a CentOS 5 instance on CoolVDS today and run dd yourself. The results speak louder than marketing.
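As a baseline, a simple sequential test looks like the sketch below. The 256 MB size and the filename are arbitrary; conv=fdatasync forces dd to flush to disk before reporting, so the page cache does not inflate the write figure.

```shell
# Sequential write: 256 MB of zeros, flushed to disk before dd reports.
dd if=/dev/zero of=ddtest.bin bs=1M count=256 conv=fdatasync

# Sequential read-back (on a real box, drop caches first for honesty).
dd if=ddtest.bin of=/dev/null bs=1M

# Clean up the test file.
rm -f ddtest.bin
```

As a rough rule of thumb, a healthy local RAID 10 array of 15k SAS drives should sustain well over 100 MB/s sequential; numbers far below that point to contention somewhere between you and the platters.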