Cloud Storage Myths: Why Your "Cloud" Database is Crawling (And How to Fix It)
Everyone is screaming "Move to the Cloud" right now. Your manager reads about it in Wired, your marketing team loves the flexibility, and Amazon EC2 is the topic of every developer conference. But there is a dirty little secret that most cloud providers won't tell you while they are selling you on "infinite scalability."
Latency.
I spent the last week debugging a Magento 1.4 deployment that was bringing a Quad-Core server to its knees. The CPU wasn't the problem: the load average was high, but CPU usage was low. The culprit? iowait. The server was spending 40% of its life waiting on disk writes. This is the reality of the "Cloud" in 2011: often, it's just a virtual machine attached to a choked Storage Area Network (SAN), fighting for bandwidth with a thousand other noisy neighbors.
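If you want to spot the same pattern on your own box, a quick vmstat run makes it obvious; the wa column is the percentage of time the CPUs sit idle waiting on I/O (the exact column layout varies slightly between distros, so treat this as a sketch):
vmstat 1 5
# High load average + low us/sy + high wa = the box is I/O bound, not CPU bound.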
The SAN Trap vs. Local RAID
Most VPS providers architect their systems around centralized SAN storage. It makes their lives easier; if a host node fails, they can boot your VM elsewhere. But for you, the sysadmin, it introduces network latency into every single disk write. When you are running a high-transaction MySQL database, those milliseconds stack up fast.
If you run iostat -x 1 on a standard cloud instance during peak hours, you might see this nightmare:
avg-cpu:  %user   %nice  %system  %iowait  %steal   %idle
           5.00    0.00     2.50    45.50    0.00   47.00
Device:   rrqm/s  wrqm/s    r/s     w/s    svctm   %util
sda         0.00   15.00   5.00   40.00    20.00   90.00
That %iowait is your database dying. That svctm (service time) of 20ms is unacceptable for a production web server.
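A crude but honest way to feel that latency yourself is a small dd run with oflag=dsync, which forces every block to hit stable storage before the next one is issued. Run it in a scratch directory; the expectations in the comment are rough rules of thumb, not a benchmark of any particular provider:
dd if=/dev/zero of=dd-sync-test bs=4k count=1000 oflag=dsync
# A healthy local array finishes this in a second or two; a congested SAN can take ten times as long.
rm dd-sync-test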
The Solution: Local RAID-10 and The Rise of SSD
The only way to guarantee performance for disk-heavy applications is dedicated local storage or the emerging Solid State Drive (SSD) technology. While Enterprise SSDs (like the Intel X25 series) are still expensive, the IOPS (Input/Output Operations Per Second) they deliver are game-changing—jumping from 180 IOPS on a 15k SAS drive to over 3,000 IOPS on an SSD.
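If you would rather measure IOPS than trust a spec sheet, fio handles random-I/O benchmarking nicely. It is not in the base CentOS 5 repositories, so this assumes you have installed it (from EPEL or source), and the parameters are just a reasonable starting point:
fio --name=randread --rw=randread --bs=4k --size=512m --direct=1 --runtime=60
# Check the iops= figure in the output: a 15k SAS spindle lands in the low hundreds, a decent SSD in the thousands.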
At CoolVDS, we realized early on that network storage is a bottleneck. That is why our Norwegian infrastructure relies on local RAID-10 arrays. We don't force your disk writes to travel over the network before they hit the platter (or the flash memory).
Privacy in 2011: The Patriot Act Factor
Beyond raw speed, there is the issue of where your bits actually live. With the US Patriot Act in full force, Norwegian companies are rightly nervous about hosting sensitive customer data on US-owned clouds (like AWS or Rackspace), even if they have "European" data centers. The legal jurisdiction is murky.
Under the Norwegian Personal Data Act (Personopplysningsloven) and the oversight of Datatilsynet, you have specific obligations regarding customer data. Hosting your data inside Norway, on Norwegian-owned infrastructure, isn't just about lower latency to the NIX (Norwegian Internet Exchange) in Oslo—though ping times of 2ms are nice. It is about legal certainty. Your data remains under Norwegian law, safe from foreign subpoenas.
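The latency half of that equation is at least easy to verify before you sign anything; run these from your office, substituting the test IP your provider gives you:
ping -c 20 <test-ip>
mtr --report <test-ip>
# The ping average is what your users feel; mtr shows where the milliseconds are being lost.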
Sysadmin Toolkit: Optimizing for I/O
Regardless of where you host, you can tune your Linux box to handle I/O better. Here are two changes I make on every CentOS 5 or Debian Lenny server I touch:
1. The noatime Mount Flag
By default, Linux writes to the disk every time a file is merely read, just to update its "access time" (atime). For a web server reading thousands of PHP files, this is useless overhead.
Edit your /etc/fstab:
/dev/sda1 / ext3 defaults,noatime 1 1
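No reboot is needed; you can remount the filesystem in place and confirm the flag took effect:
mount -o remount,noatime /
mount | grep ' / '
# The root entry should now list noatime among its options.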
2. MySQL InnoDB Tuning
If you can tolerate losing 1 second of data in a catastrophic crash (and you have a battery-backed RAID controller), change this setting in your my.cnf. It stops MySQL from flushing to disk after every single transaction.
innodb_flush_log_at_trx_commit = 2
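The variable is dynamic in MySQL 5.0/5.1, so you can flip it live while you test, and keep the my.cnf entry above so it survives restarts:
mysql -e "SET GLOBAL innodb_flush_log_at_trx_commit = 2;"
mysql -e "SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';"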
Pro Tip: If you are moving to the Ext4 file system (available in the brand-new RHEL 6 / CentOS 6), be careful with write barriers. Mounting with barrier=0 improves performance, but on commodity hardware it risks data corruption during power loss. On CoolVDS, our battery-backed hardware RAID provides that safety net, so you can squeeze out the extra speed without the risk.
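For reference, the option lives in /etc/fstab alongside noatime; a line along these lines does the job (only on hardware where a battery-backed controller cache protects in-flight writes):
/dev/sda1 / ext4 defaults,noatime,barrier=0 1 1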
The Verdict
Don't be blinded by the "Cloud" marketing hype. If your application relies on a database, look at the storage architecture first. A generic cloud instance with network storage will throttle your growth.
If you need raw throughput, low latency to Oslo, and strict adherence to Norwegian privacy standards, you need infrastructure built for professionals, not just hobbyists. Don't let slow I/O kill your SEO rankings.
Ready to see the difference local RAID-10 makes? Deploy a high-performance instance on CoolVDS in 55 seconds.