Cloud Storage Bottlenecks: Why Your I/O Wait is Killing Application Performance in 2012

The "Cloud" Lie: Why Your VPS Storage is Slower Than a Laptop

Let’s stop pretending that "The Cloud" is magic. For those of us managing high-traffic LAMP stacks or growing Magento installations, the migration from bare metal to virtualized infrastructure over the last two years (2010–2012) has revealed a dirty secret: Disk I/O is the new bottleneck.

I recently audited a client's setup hosted on a popular budget European VPS provider. They were running a standard CentOS 5.8 node. Their CPUs were idle, RAM was plentiful, yet the site took 6 seconds to load. A simple check of top showed the load average at 15.00, driven entirely by I/O wait. They were sharing a spinning disk SAN with 50 other "noisy neighbors."
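
You can reproduce that quick check on any box with nothing but the standard tools; the numbers will differ, but the wa (I/O wait) figure is the one to watch:

# The load average and the wa percentage sit in the first few lines of top
top -b -n 1 | head -5

# vmstat tells the same story over time; watch the wa column
vmstat 1 5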

In 2012, raw compute power is cheap. Fast, reliable storage is where the battle is won or lost. If you are serving customers in Norway, latency matters—not just network latency to NIX (Norwegian Internet Exchange), but disk latency.

The Anatomy of an I/O Bottleneck

When you buy a standard VPS, you are usually getting a slice of a large RAID array connected over the network (a SAN), or a local disk carved up by a virtualization platform like OpenVZ. In 2010, this was acceptable. Today, with data-heavy applications, it is suicide.

To diagnose if your storage is choking your application, stop looking at CPU percentage and start looking at the disk queue. Here is a snapshot from a struggling database server I debugged last week using iostat (part of the sysstat package):

[root@db01 ~]# iostat -x 1
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           2.50    0.00    1.50   45.20    0.00   50.80

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00    15.50   25.00   80.00   800.00  3200.00    38.10    12.50  120.50   8.50  89.25

Look at the %iowait (45.20%) and await (120.50 ms). Nearly half of the CPU's time is spent waiting on the disk, and each I/O request sits an average of 120 milliseconds in the queue before it is serviced. For a MySQL database, this is catastrophic.
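
Once iostat confirms the queue is the problem, the next question is which process is generating the traffic. On CentOS 6 the same sysstat package provides pidstat, and iotop is a yum install away (from EPEL); exact package availability on older releases may vary:

# Per-process disk I/O, sampled once a second for five seconds
pidstat -d 1 5

# Or watch live, showing only processes that are actually doing I/O
yum install -y iotop
iotop -o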

The Scheduler Factor: Tuning CentOS 6

Most distributions default to the CFQ (Completely Fair Queuing) scheduler. While CFQ is great for desktop spinning drives, it is often suboptimal for virtualized environments where the hypervisor handles the physical geometry. In a virtualized environment like KVM (which we use exclusively at CoolVDS), you often want the Guest OS to just pass the request through.

Check your current scheduler:

cat /sys/block/sda/queue/scheduler
[cfq] deadline noop

If you see cfq selected, change it. For virtualized storage, specifically on the high-performance SSD arrays we deploy, the noop or deadline schedulers reduce CPU overhead by assuming the underlying storage controller (or hypervisor) will handle the sorting.

# Apply immediately
echo deadline > /sys/block/sda/queue/scheduler

# Make it permanent in /boot/grub/grub.conf (append to the kernel line)
elevator=deadline
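
For reference, the edited kernel line ends up looking something like the sketch below. The kernel version and root device are placeholders; keep your existing entry exactly as it is and only append the elevator parameter:

# /boot/grub/grub.conf (illustrative entry, not a drop-in)
kernel /vmlinuz-2.6.32-279.el6.x86_64 ro root=/dev/mapper/vg_root-lv_root elevator=deadline quiet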

Spinning Rust vs. The SSD Revolution

In 2010, SSDs were an expensive luxury, mostly found in high-end consumer laptops or massive enterprise caching tiers (like Fusion-io cards). In 2012, the economics have shifted. We are now seeing Enterprise SSDs (Intel 320/520 series or equivalent SAS SSDs) becoming viable for primary storage.

The difference isn't just sequential throughput; it's IOPS (Input/Output Operations Per Second). A standard 15k RPM SAS drive gives you maybe 180-200 IOPS. A single enterprise SSD can push 20,000+ IOPS.
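
Don't take the datasheet's word for it: fio (available from EPEL) will tell you what your own volume actually sustains. The file name, size and runtime below are only illustrative; point it at the disk you care about and think twice before running it on a busy production box:

# 4K random reads with direct I/O, queue depth 32, 60-second run
fio --name=randread --filename=/root/fio.test --size=1G \
    --rw=randread --bs=4k --direct=1 --ioengine=libaio \
    --iodepth=32 --runtime=60 --time_based --group_reporting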

Pro Tip: If you are running MySQL with InnoDB, make sure innodb_io_capacity is tuned. On old spinning disks, the default (200) was fine. On our SSD-backed CoolVDS instances, you should raise it significantly to use the available throughput.

[mysqld]
# Optimize for SSD storage
innodb_io_capacity = 2000
# innodb_flush_neighbors requires MySQL 5.6+ (Percona Server has an equivalent setting)
innodb_flush_neighbors = 0
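
innodb_io_capacity is a dynamic variable, so you can trial a value on the running server before committing it to my.cnf. A minimal check, assuming your credentials live in ~/.my.cnf (2000 is a starting point, not gospel; watch checkpoint behaviour after the change):

# Trial the new value without a restart, then confirm it took effect
mysql -e "SET GLOBAL innodb_io_capacity = 2000;"
mysql -e "SHOW GLOBAL VARIABLES LIKE 'innodb_io_capacity';"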

Data Sovereignty and the Patriot Act Fears

Performance isn't the only concern. The legal landscape regarding data privacy is tightening. We have the EU Data Protection Directive (95/46/EC), and here in Norway, the Personopplysningsloven enforced by Datatilsynet is strict.

Many US-based "clouds" cannot guarantee where your data physically resides. It might be in Dublin today and replicated to Virginia tomorrow. For Norwegian businesses, specifically those in healthcare or finance, this is unacceptable risk. Hosting on a VPS in Norway ensures low latency—typically sub-5ms from Oslo—and compliance with local jurisdiction.

Filesystem Choice: ext4 vs. XFS

With Red Hat Enterprise Linux 6 (and CentOS 6), ext4 became the default. It's robust. However, if you are dealing with massive concurrency or large files, XFS remains a strong contender, though ext4 has largely caught up in stability.
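
If you do want XFS for a data volume, CentOS 6 makes it painless. The device and mount point below are placeholders for whatever secondary volume you attach:

# The XFS userland tools ship in the base repo; /dev/vdb and /data are illustrative
yum install -y xfsprogs
mkfs.xfs /dev/vdb
mkdir -p /data && mount -t xfs /dev/vdb /data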

For most web workloads, ext4 is fine, but mount options matter. Updating the access time (atime) on every read is a waste of I/O.

Edit your /etc/fstab to include noatime:

/dev/vda1   /   ext4    defaults,noatime,barrier=0   1 1

Note: Disabling barriers (barrier=0) improves performance significantly but carries a risk of data loss during power outages. At CoolVDS, our facility has redundant UPS and diesel generators, but you should weigh the risk for your specific application.
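
The noatime part can be picked up immediately with a remount; the barrier change is one I would only enable during a planned reboot window with a fresh backup in hand. A quick sketch, assuming / sits on /dev/vda1 as in the fstab line above:

# Apply noatime to the running system, then verify the active mount options
mount -o remount,noatime /
mount | grep ' / '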

Why KVM is the Future (and OpenVZ is the Past)

Back in 2008-2010, OpenVZ was popular because it was "lightweight." It uses a shared kernel. This means if one user crashes the kernel, everyone goes down. It also means resources like RAM and Disk I/O are soft-limited, not hard-isolated.

This is why CoolVDS utilizes KVM (Kernel-based Virtual Machine). KVM allows us to allocate a dedicated slice of hardware. Your RAM is yours. Your Disk I/O is protected. You can even run your own custom kernel if you need specific modules for advanced routing or VPN setups.

Feature                  OpenVZ (Legacy)        KVM (CoolVDS Standard)
Kernel                   Shared                 Isolated / Custom
Resource Isolation       Poor (Burstable)       Strong (Dedicated)
Performance Stability    Fluctuates wildly      Consistent
OS Variety               Linux only             Linux, BSD, Windows

Conclusion: Don't Let 2010 Technology Slow You Down

The web is getting heavier. Pages are larger. Databases are growing. Relying on shared spinning disks or oversold cloud storage is a recipe for downtime. You need architecture that respects physics.

By combining KVM virtualization, scheduler tuning, and local SSD arrays, you can achieve bare-metal performance at a fraction of the cost. Whether you are running a high-traffic forum or a critical corporate mail server, latency is the enemy.

Ready to see the difference real hardware makes? Stop guessing with `iostat` and deploy a KVM instance on CoolVDS today. Experience low latency and true data sovereignty in our Oslo datacenter.