Cloud Storage in 2012: Why Spinning Rust and Network Latency Are Killing Your App Performance

Let’s cut through the marketing fluff. Everyone is talking about "The Cloud" right now like it’s magic. But if you’ve ever stared at a terminal watching top while your load average climbs to 20.0 because of 95% iowait, you know the truth: The Cloud is just someone else's computer, and usually, that computer has slow hard drives.

I recently inherited a project for a client in Oslo—a high-traffic Magento setup running on a budget "cloud" provider. They promised "unlimited storage." What they didn't mention was that the storage was a networked SAN running on 7.2k RPM SATA drives shared by 500 other noisy neighbors. The moment a marketing campaign hit, the database seized up. The CPU wasn't the bottleneck; the disk head seek time was.

In 2012, relying on standard spinning platters (HDD) for your primary database storage is professional suicide. If you are building for the Nordic market, you need to understand IOPS, latency, and the physical reality of where your data lives.

The I/O Bottleneck: Local SSD vs. Networked SAN

Most large providers abstract storage away. They give you an iSCSI mount or an NFS share and call it a "disk." But network storage introduces latency. Even within a datacenter, a few milliseconds of network lag added to the seek time of a mechanical drive results in sluggish page loads.

For high-performance applications, local storage is king, specifically Solid State Drives (SSDs). While enterprise SSDs are still expensive per gigabyte compared to HDDs, the difference in IOPS (Input/Output Operations Per Second) is measured in orders of magnitude:

  • 7.2k SATA HDD: ~80-100 IOPS
  • 15k SAS HDD: ~180-200 IOPS
  • Enterprise SSD (SATA/PCIe): 10,000+ IOPS

If you are running MySQL or PostgreSQL, you aren't paying for gigabytes; you are paying for IOPS. A 20GB database on a local SSD will outperform the same database sitting on a 1TB networked SAN volume every single time.
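
Don't take those numbers on faith; benchmark your own disk. Here is a minimal sketch using fio (in the Debian/Ubuntu repos, EPEL on CentOS); the job name and the 1GB test file size are placeholders, so adjust them for your environment.

# Install the benchmark tool
apt-get install fio

# 4k random reads with direct I/O (bypasses the page cache).
# Look at the "iops=" figure in the output: that is what your storage really delivers.
fio --name=randread-test --rw=randread --bs=4k --direct=1 \
    --size=1G --runtime=60 --ioengine=libaio --group_reporting

Run it on your current host and on any provider you are evaluating; the gap between a shared SAN and a local SSD is usually embarrassing.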

Identifying the Problem: The Sysadmin's Toolkit

Before you blame your PHP code, check your disk stats. On a Linux box (CentOS 6 or Ubuntu 12.04), `iostat` is your best friend. If you don't have it, install `sysstat`.

apt-get install sysstat    # Ubuntu/Debian
yum install sysstat        # CentOS/RHEL

Run this command to see what's actually happening:

iostat -x 1

Pay attention to the %util and await columns.

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           14.50    0.00    3.20   45.30    0.00   37.00

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00    12.00   45.00   80.00  3200.00  9500.00    98.40     2.50   25.00   6.50  92.50

If %iowait is high (above 20-30%) and your %util is nearing 100%, your disk is the bottleneck. The CPU is literally sitting idle, waiting for the hard drive to write data.
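
Once you know the disk is saturated, the next question is which process is hammering it. Assuming your kernel has I/O accounting enabled (stock CentOS 6 and Ubuntu 12.04 kernels do), iotop gives a per-process view; a quick sketch:

# Install iotop (apt-get on Ubuntu/Debian, yum on CentOS/RHEL)
apt-get install iotop

# -o: only show processes actually doing I/O
# -b: batch mode (plain output, good for logging)
# -n 5: take five samples and exit
iotop -o -b -n 5

On a database box you will usually find mysqld at the top of that list, not your application code.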

Optimizing Storage Performance

If you can't migrate to an SSD-backed VPS immediately, you must optimize what you have. Here are three configurations we use at CoolVDS to squeeze performance out of standard Linux setups.

1. Filesystem Tuning (noatime)

By default, Linux writes an access timestamp (atime) back to disk every time a file is merely read, so every read generates an extra write. Disable it in /etc/fstab.

# Open fstab
vi /etc/fstab

# Add 'noatime' and 'nodiratime' to your primary partition
/dev/sda1   /   ext4    errors=remount-ro,noatime,nodiratime   0   1

Remount the drive dynamically to apply without rebooting:

mount -o remount,noatime /
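
To confirm the flag actually took effect, check the live mount options for the root filesystem:

# You should see 'noatime' in the options list for /
mount | grep ' on / '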

2. MySQL InnoDB Tuning

In MySQL 5.5 (which you should be using over 5.1), the `innodb_io_capacity` setting tells InnoDB roughly how many IOPS it is allowed to use for background flushing. The default (200) is aimed at slow HDDs.

[mysqld]
# For 7.2k RPM drives
innodb_io_capacity = 200

# For 15k RPM SAS / RAID 10
innodb_io_capacity = 500

# For SSDs (The CoolVDS Standard)
innodb_io_capacity = 2000
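
Since `innodb_io_capacity` is a dynamic variable in 5.5, you can try a new value on a running server before touching my.cnf. A quick sketch (assumes your client credentials are already set up, e.g. in ~/.my.cnf):

# Check what the server is currently using
mysql -e "SHOW GLOBAL VARIABLES LIKE 'innodb_io_capacity';"

# Raise it on the fly; this is lost on restart unless you also edit my.cnf
mysql -e "SET GLOBAL innodb_io_capacity = 2000;"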

3. The Scheduler

The Linux kernel I/O scheduler determines the order in which disk requests are processed. `cfq` is the default, but in a virtualized environment or on an SSD, `deadline` or `noop` is often faster because the hypervisor (or the drive's own controller) already handles the physical sorting.

echo noop > /sys/block/sda/queue/scheduler
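
Note that the echo above only survives until the next reboot. To make it permanent, set the scheduler on the kernel command line; a minimal sketch for Ubuntu 12.04 (GRUB 2). On CentOS 6, append elevator=noop to the kernel line in /boot/grub/grub.conf instead:

# In /etc/default/grub, apply noop to all block devices at boot
GRUB_CMDLINE_LINUX="elevator=noop"

# Regenerate the boot configuration
update-grub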

Pro Tip: Never run a database on a filesystem formatted with tiny block sizes, and make sure your RAID stripe size matches your partition and filesystem alignment; a misaligned stripe turns every write into a read-modify-write cycle ("write amplification").

The Legal & Latency Angle: Why Norway Matters

Beyond raw speed, there's location. If your customers are in Oslo, Bergen, or Trondheim, hosting in a US datacenter (like AWS East) adds ~100ms of latency just for the packet round trip. That’s before the server even processes the request.
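
Don't take round-trip figures on faith either; measure them from where your users actually are. The hostnames below are placeholders for your own endpoints:

# Compare round-trip times to a US-hosted box and a Norwegian one
ping -c 10 your-app.us-east.example.com
ping -c 10 your-app.oslo.example.no

# mtr shows where along the path the latency piles up
mtr --report --report-cycles 10 your-app.us-east.example.com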

Furthermore, with the Data Protection Directive (95/46/EC) and the Norwegian Personal Data Act (Personopplysningsloven), you have a legal responsibility to protect user data. Hosting sensitive data inside Norway ensures you are dealing with Datatilsynet and Norwegian law, not the US Patriot Act.

The CoolVDS Approach: Enterprise Hardware, No Gimmicks

At CoolVDS, we don't believe in the "noisy neighbor" effect. We use KVM (Kernel-based Virtual Machine) for strict isolation. Unlike OpenVZ, where every container shares a single kernel and a neighbor can hog its resources, KVM gives each guest its own kernel and a strictly allocated slice of CPU and memory.
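
Not sure what you are actually running on today? virt-what (packaged on both CentOS 6 and Ubuntu 12.04, run it as root) reports the virtualization technology in use; a quick sanity check, not a full audit:

# Install and run the detection script
apt-get install virt-what    # or: yum install virt-what
virt-what
# Prints "kvm" on a KVM guest, "openvz" inside an OpenVZ container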

We deploy exclusively on RAID-10 SSD arrays or high-end SAS clusters behind controllers with generous caches. While others are selling you "cloud storage" that sits across the ocean, we provide low-latency block storage directly connected to the hypervisor in our Oslo datacenter. We are even testing early PCIe Flash cards (the hardware behind the emerging NVMe standard) to future-proof our infrastructure.

Comparison: Typical VPS vs. CoolVDS

Feature              Budget Cloud            CoolVDS Norway
Storage Technology   Networked SATA (Slow)   Local RAID-10 SSD
Virtualization       OpenVZ (Oversold)       KVM (Isolated)
Latency to NIX       30ms - 150ms            < 2ms
IOPS Limit           Often capped at 100     Unthrottled Burst

Don't let slow storage be the reason your application fails under load. If you are serious about performance, stop sharing a spinning hard drive with 500 other people.

Ready to see the difference? Deploy a test instance on CoolVDS today and run your own benchmarks.