
Cloud Storage in 2010: Why Disk I/O Latency is the Real Killer for Norwegian Enterprises


The "Cloud" Buzzword vs. The Reality of Spinning Rust

It is 2010, and the marketing departments are in a frenzy about "Cloud Computing." Amazon S3 is maturing, and everyone wants to dump their infrastructure into an abstract nebulous void. But as a Systems Architect responsible for high-traffic platforms in Scandinavia, I look at the metrics, not the brochures. The reality is brutal: CPU is cheap, but Disk I/O is expensive.

In a shared hosting environment or a poorly managed VPS, your neighbor's backup script can bring your MySQL master to its knees. I have debugged enough load spikes to know that when the load average hits 20.0, but CPU usage is only 5%, you are suffering from iowait. Your processor is bored; it is waiting for a hard drive head to move across a platter.
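You can confirm that diagnosis without installing anything: the first line of /proc/stat keeps a running count of jiffies the CPU has spent in each state, and field 6 is iowait. A minimal sketch, assuming a Linux guest; sample it twice under load and watch the delta:

```shell
# Read cumulative CPU jiffies from the first line of /proc/stat.
# Field 6 is iowait: time the CPU sat idle *waiting on disk*.
read cpu user nice system idle iowait rest < /proc/stat
echo "iowait jiffies since boot: ${iowait}"
# Run it again a few seconds later. A fast-growing iowait delta
# with little user/system growth confirms an I/O bottleneck.
```

If the iowait counter climbs while user and system barely move, the load average is lying to you about where the work is.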

The War Story: When Network Storage Fails

Last month, we migrated a high-volume Magento installation for a client based in Trondheim. They were hosted on a "Cloud" provider in Germany that served block storage over the network from a SAN. The latency was killing their checkout process: every session write involved a network round-trip to the storage array.

We ran iostat -x 1 during peak traffic. The results were terrifying:

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           14.50    0.00    2.50   48.00    0.00   35.00

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00    12.00    5.00   45.00    40.00   450.00     9.80     4.50   120.00   8.00  40.00

Look at that await time: 120ms. For a transactional database, that is an eternity. The disk wasn't failing; the SAN congestion and network latency were throttling the application.
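If you want an alert instead of eyeballing iostat, a short awk filter over the device lines does it. A sketch, assuming the column layout shown above, where await is column 10 (the layout shifts between sysstat releases, so verify yours):

```shell
# Sample device line captured from the `iostat -x` run above (column 10 = await).
line="sda 0.00 12.00 5.00 45.00 40.00 450.00 9.80 4.50 120.00 8.00 40.00"

# Flag any device whose await exceeds a 20 ms threshold
# (20 ms is a hypothetical tuning value, not a universal constant).
echo "$line" | awk '$10 > 20 { print $1 ": await " $10 " ms -- storage is the bottleneck" }'
```

In production you would feed it live output instead of a captured line, e.g. `iostat -x 1 2 | awk '$1 ~ /^sd/ && $10 > 20 { ... }'` from cron.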

Local RAID-10: The Only Sane Choice for DBs

To fix this, we moved the workload to a CoolVDS instance here in Oslo. Why? Because we utilize local storage with hardware RAID-10. We aren't routing block requests over a congested Ethernet cable. We are hitting the disk controller directly.

We also swapped the filesystem from ext3 to ext4, which was marked stable in kernel 2.6.28. Extent-based allocation in ext4 handles large files significantly better than ext3's indirect blocks, reducing fragmentation and the seek overhead that comes with it.
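The migration itself is mundane but worth writing down. A sketch assuming a fresh data partition at /dev/sdb1 (a hypothetical device name; adjust to your layout) dedicated to MySQL:

```shell
# WARNING: mkfs destroys existing data -- run only on an empty partition.
mkfs.ext4 /dev/sdb1

# noatime skips a metadata write on every read; a cheap win for databases.
mount -o noatime /dev/sdb1 /var/lib/mysql

# Persist it in /etc/fstab:
#   /dev/sdb1  /var/lib/mysql  ext4  noatime  0  2
```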

Pro Tip: If you are running MySQL on Linux, check your scheduler. The default CFQ (Completely Fair Queuing) is often suboptimal for database workloads on virtualized hardware. Switch to the Deadline or NOOP scheduler for a quick win:
echo deadline > /sys/block/sda/queue/scheduler
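Note that the echo only holds until the next reboot. To make it stick, pass the elevator parameter on the kernel command line; this sketch assumes GRUB legacy with a 2.6 kernel (the paths and kernel version are illustrative):

```shell
# /boot/grub/menu.lst -- append elevator=deadline to the kernel line:
#   kernel /vmlinuz-2.6.32 ro root=/dev/sda1 elevator=deadline

# Verify which scheduler is active (the bracketed name is current):
cat /sys/block/sda/queue/scheduler
# e.g. noop anticipatory [deadline] cfq
```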

The Norwegian Data Advantage: Speed and Sovereignty

Latency is governed by the speed of light. Hosting in Frankfurt or Amsterdam when your customers are in Bergen adds 20-40ms to every packet round trip. In a complex PHP application that makes 50 serial database calls per page load, that latency compounds into seconds of delay.
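The arithmetic is worth spelling out. A back-of-the-envelope sketch with hypothetical but realistic numbers (30 ms round trip from Bergen to Frankfurt, 50 serial queries):

```shell
# 50 serial round-trips at 30 ms each -- purely network cost,
# before the remote disk even starts seeking.
queries=50
rtt_ms=30
total_ms=$((queries * rtt_ms))
echo "Network overhead per page load: ${total_ms} ms"   # 1500 ms
```

A page that should render in 200 ms instead takes nearly two seconds, and no amount of application tuning can buy that back.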

Furthermore, we must address the regulatory elephant in the room: Datatilsynet. With the growing scrutiny on data privacy and the limitations of the Safe Harbor framework, keeping Norwegian user data within Norway's borders is not just a technical preference—it is becoming a compliance necessity under the Personal Data Act (Personopplysningsloven).

Hardware Reality Check

While SSDs (Solid State Drives) are entering the consumer market, they remain prohibitively expensive for mass storage hosting. However, the industry is shifting. At CoolVDS, we are already integrating Enterprise SSDs for caching layers and high-performance tiers. But for raw capacity, 15k RPM SAS drives in a RAID-10 configuration remain the gold standard for reliability and write speed.

Avoid the "unlimited storage" traps sold by overselling hosts. If you need consistent throughput, you need dedicated spindles or advanced virtualization isolation like KVM, which prevents the "noisy neighbor" effect better than older container technologies like Virtuozzo.
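You do not have to take the host's word for what they sold you. A rough check from inside the guest (these are heuristics, not proof: /proc/vz is the telltale for OpenVZ/Virtuozzo, and the cpuinfo hypervisor flag indicates a hardware-assisted VM):

```shell
# Heuristic: what kind of virtualization is this guest really running on?
if [ -d /proc/vz ] && [ ! -d /proc/bc ]; then
  echo "OpenVZ/Virtuozzo container (shared kernel, shared I/O queue)"
elif grep -q '^flags.*hypervisor' /proc/cpuinfo; then
  echo "Hardware-assisted VM (KVM or Xen HVM likely)"
else
  echo "Bare metal, or a guest that hides the hypervisor flag"
fi
```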

Final Config Check

Before you deploy your next project, stop optimizing your PHP code for a moment and look at your storage subsystem. If you are seeing high %iowait, no amount of Varnish caching will save your write-heavy backend.

Checklist for 2010 Deployments:

  • Filesystem: Upgrade to ext4.
  • RAID: Ensure your host uses RAID-10, not RAID-5 (the write penalty on RAID-5 is too high for databases).
  • Location: Host in Oslo if your users are Norwegian.
  • Virtualization: Demand KVM or Xen.
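Two of those checklist items can be audited from a shell in seconds. A minimal sketch (the vg.no ping target is an arbitrary Norwegian site, purely illustrative):

```shell
# Filesystem: what is the root volume actually formatted as? (want ext4)
awk '$2 == "/" { print "root filesystem: " $3 }' /proc/mounts

# Software RAID level, if the host uses md:
grep -h . /proc/mdstat 2>/dev/null | head -3

# Latency from this box to a Norwegian user (run interactively):
#   ping -c 4 vg.no
```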

If you are tired of waiting for remote disks to spin, test a CoolVDS instance. We offer the lowest latency in Norway and storage that actually keeps up with your queries.
