Stop Letting I/O Wait Kill Your App: A DevOps Guide to High-Performance Storage in 2012

Why "The Cloud" is Killing Your Database Performance (And How to Fix It)

It is 3:00 AM. Your pager is screaming. The load average on your primary database node just crossed 20.0, but when you SSH in and run top, the CPU usage is barely scratching 15%. You stare at the screen, confused. Then you see it: %wa (iowait) is hovering at 85%.
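
Before blaming the application, confirm which block device is actually stalling. A quick sketch, assuming the sysstat package is installed and your virtual disk shows up as vda:

yum install -y sysstat   # provides iostat; skip if already installed
iostat -x 1 5            # high await plus %util near 100 on vda means the disk is the bottleneck
vmstat 1 5               # the 'wa' column mirrors the %wa you saw in top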

Welcome to the dirty secret of 2012's cloud hosting market: Storage contention.

While everyone is talking about "The Cloud" as the ultimate solution for scalability, most providers are selling you a slice of a heavily oversubscribed SAN (Storage Area Network) connected via Gigabit Ethernet. They promise "unlimited" space, but they rarely talk about IOPS (Input/Output Operations Per Second). When your neighbor decides to run a massive backup or a dd command, your database latency spikes from 2ms to 200ms. In the world of high-traffic e-commerce, that is not a glitch; that is downtime.

The HDD vs. SSD Reality Check

Let's be pragmatic. Spinning rust (7200 RPM SATA drives) can push maybe 75-100 IOPS. Even with short-stroking and 15k SAS drives in a RAID 10 array, you are physically limited by the movement of the read/write head. If you are hosting a static brochure site, fine. But if you are running Magento, Drupal, or a custom application with heavy MySQL writes, mechanical drives are your bottleneck.

This is where Solid State Drives (SSDs) are changing the landscape. We are seeing random read throughput jump from roughly 0.5 MB/s on HDDs to 250+ MB/s on enterprise SSDs, because seek time on flash is effectively zero.
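
If you want to measure this yourself rather than take a datasheet's word for it, a 4K random-read benchmark tells the story. A minimal sketch using fio, assuming it is installed and /data sits on the disk you want to test (file name and size are arbitrary):

fio --name=randread --directory=/data --rw=randread --bs=4k \
    --size=1G --direct=1 --runtime=60 --group_reporting

Expect on the order of 100 IOPS from a 7200 RPM drive and tens of thousands from a decent SSD.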

Pro Tip: Never trust a provider who says "Cloud Storage" without specifying the backend. If they can't tell you if it is local RAID-10 or a centralized SAN, assume it is a choked NFS mount. For raw performance, Local Storage always beats Network Storage.
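
A two-second sanity check when you land on a new VPS (assuming the default MySQL datadir; a network filesystem here is exactly the red flag we are talking about):

df -T /var/lib/mysql   # ext4 on a local device is what you want to see
mount | grep -i nfs    # any output means part of your data lives on an NFS mount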

Tuning Linux for SSD Performance

Simply paying for an SSD VPS isn't enough. You need to tell the Linux kernel that you are not running on a spinning disk. The default I/O scheduler in CentOS 6 is usually cfq (Completely Fair Queuing), which is optimized for minimizing head seek time on rotating platters. On an SSD, this logic just adds overhead.
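
One quick hint about what the kernel thinks it is sitting on; treat it only as a hint, since many hypervisors report 1 even for flash-backed volumes:

cat /sys/block/vda/queue/rotational   # 0 = treated as non-rotational (SSD)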

1. Switch the Scheduler

Check your current scheduler:

cat /sys/block/vda/queue/scheduler
# Output: [cfq] deadline noop   (the scheduler in brackets is the active one)

For a virtualized guest on flash storage, you want noop (First In, First Out) or deadline. The hypervisor should handle the complexity; your VM just needs to pass the requests as fast as possible.

Change it instantly:

echo noop > /sys/block/vda/queue/scheduler

To make it permanent, edit /boot/grub/grub.conf (symlinked as menu.lst on CentOS 6) and append elevator=noop to your kernel line.
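
The kernel line ends up looking something like this (the kernel version and LVM path are illustrative; yours will differ):

# /boot/grub/grub.conf
kernel /vmlinuz-2.6.32-279.el6.x86_64 ro root=/dev/mapper/vg_main-lv_root quiet elevator=noop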

2. Optimize Filesystem Mounts

Every time you read a file, Linux updates the "access time" (atime). On a high-traffic web server, this causes a write operation for every read. Disable it.

# /etc/fstab
/dev/mapper/vg_main-lv_root   /   ext4   defaults,noatime,nodiratime   1 1
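
The change takes effect at the next mount; to apply it without a reboot, remount in place (assuming / is the filesystem you just edited):

mount -o remount,noatime,nodiratime /
mount | grep ' / '   # confirm noatime now appears in the mount options

On CentOS 6 kernels noatime already implies nodiratime, but listing both costs nothing.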

Database Tuning: The Buffer Pool

Hardware is only half the battle. If you are using MySQL 5.5 (which you should be, over 5.1), you need to ensure InnoDB is utilizing that RAM to avoid hitting the disk unnecessarily. However, when you do hit the disk, you want it to be fast.

Check your InnoDB buffer pool usage:

SHOW STATUS LIKE 'Innodb_buffer_pool_%';
-- Look at Innodb_buffer_pool_reads vs Innodb_buffer_pool_read_requests
-- If reads climbs quickly relative to read_requests, you are going to disk.
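
A useful way to read those counters is as a hit ratio: 1 - (reads / read_requests). A common rule of thumb is that a busy OLTP workload should sit above roughly 99%. A quick sketch from the shell, assuming the mysql client can log in without prompting:

mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';" | awk '
  $1 == "Innodb_buffer_pool_reads"         { r = $2 }
  $1 == "Innodb_buffer_pool_read_requests" { q = $2 }
  END { printf "buffer pool hit ratio: %.4f\n", 1 - r/q }'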

In your /etc/my.cnf, ensure you are using O_DIRECT to bypass the OS cache if you have fast SSDs, preventing double caching:

[mysqld]
innodb_buffer_pool_size = 2G  # Roughly 70% of RAM on a dedicated database server
innodb_flush_method = O_DIRECT
innodb_io_capacity = 1000     # Default is 200, crank this up for SSDs!
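
After restarting mysqld, it is worth confirming the running values match what you put in my.cnf:

service mysqld restart
mysql -e "SHOW VARIABLES LIKE 'innodb_%'" | egrep 'buffer_pool_size|flush_method|io_capacity'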

The Virtualization War: OpenVZ vs. KVM

This is where many developers get burned. OpenVZ is container-based virtualization (sharing the host kernel). It is efficient, but it often suffers from "bean counting" resource limits. If the host kernel is under load, your I/O suffers immediately.
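
If you are on an OpenVZ container today, the kernel exposes those bean counters directly; any non-zero failcnt means you have already been throttled. A quick check, run as root inside the container:

cat /proc/user_beancounters                       # failcnt is the last column
awk 'NR > 2 && $NF > 0' /proc/user_beancounters   # show only the counters that have failed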

KVM (Kernel-based Virtual Machine), which is the standard at CoolVDS, offers full hardware virtualization. You get your own kernel. You get better isolation. When we allocate an SSD slice to a KVM instance, the noise from neighbors is significantly dampened compared to container solutions.

Feature          OpenVZ / Containers    CoolVDS KVM
Kernel           Shared                 Dedicated
Swap             Often Fake/Burst       Real Partition
I/O Isolation    Poor                   High

Norwegian Data Integrity & Latency

Why host in Norway? Aside from the obvious benefit of cheap, green hydropower keeping costs down, there is the latency factor. If your user base is in Scandinavia, routing traffic through Frankfurt or London adds 20-30ms of round-trip time (RTT). Routing via NIX (Norwegian Internet Exchange) in Oslo keeps that RTT under 5ms.
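
You can verify this from any machine outside the datacenter; the hostname below is a placeholder:

ping -c 10 your-node.osl.example.com                        # expect single-digit RTT from Norwegian ISPs
mtr --report --report-cycles 10 your-node.osl.example.com   # per-hop view; shows where the latency is added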

Furthermore, we must respect the Personopplysningsloven (Personal Data Act). Keeping data within national borders simplifies compliance with the Data Inspectorate (Datatilsynet). Reliance on US-based Safe Harbor agreements is coming under increasing scrutiny from legal experts in the EU; hosting locally removes that ambiguity.

The CoolVDS Approach

We don't oversell. We don't put 500 customers on a single SATA array. We built our infrastructure on RAID-10 SSDs using KVM virtualization. While the industry is slowly waking up to PCIe flash and emerging interfaces like NVMe (still bleeding-edge enterprise tech), we are already deploying high-performance solid-state arrays that saturate SATA 6Gbps links.

Stop fighting with iowait. If your server is melting down every time a backup runs, it’s not your code—it’s your host.

Ready to see what 0% iowait looks like? Deploy a high-performance KVM instance on CoolVDS today and get your latency down to where it belongs.