
Cloud Storage in 2011: Why Latency and Shared SANs Are Killing Your Web App


The "Cloud" Lie We Are All Buying

It is 2011. Everywhere you look, marketing teams are slapping the word "Cloud" on everything from shared hosting to glorified FTP servers. But as systems administrators and developers, we need to look past the shiny brochures and look at the raw metrics. The reality of Cloud Storage in 2010 and early 2011 has been a mixed bag of promise and pain.

I have spent the last six months migrating a high-traffic e-commerce platform from a traditional dedicated cluster to a "scalable cloud" solution. The result? Our CPU usage dropped, but our page load times increased. Why? I/O Latency.

Most providers building cloud infrastructure today are relying on massive, centralized SAN (Storage Area Network) arrays. They tell you it is for "redundancy" and "live migration." What they don't tell you is that when fifty other noisy neighbors decide to run backups at 2:00 AM, your database write speeds plummet. If you are serving customers in Oslo or Stavanger, that added latency on top of the network hops is unacceptable.

The Bottleneck: Spinning Rust vs. The New Solid State

Let's get technical. If you run iostat -x 1 on a standard VPS today, you will often see your %util spike while await times creep up to 200ms or more. This is the death knell for a MySQL database.
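For reference, here is the quick check I reach for when a box feels sluggish. The column names below assume a reasonably current sysstat package; older builds may label them slightly differently.

# Extended per-device statistics, refreshed every second (part of the sysstat package)
iostat -x 1
# Watch these columns:
#   await - average time (ms) a request spends waiting plus being serviced
#   %util - device saturation; sustained values near 100% mean you are I/O bound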

In 2011, we are at a tipping point. Standard 7.2k RPM SATA drives in a RAID 5/6 array are the industry standard for cheap storage, but they cannot handle the random I/O patterns of a busy web server. SAS 15k drives are better, but expensive.

The real game-changer is the emerging Solid State Drive (SSD) technology. While still pricey, the difference in IOPS (Input/Output Operations Per Second) is measured in orders of magnitude. A standard hard drive gives you maybe 100-150 IOPS. An enterprise SSD? We are seeing numbers in the thousands. This isn't just "faster"; it changes how we architect applications.
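Don't take anyone's brochure numbers at face value, measure it yourself. Below is a rough fio job for 4K random reads; the size, runtime and job count are just starting points I use, adjust them to your disk and your patience.

# 4K random reads with direct I/O, bypassing the page cache (requires the fio package)
fio --name=randread --ioengine=libaio --direct=1 \
    --rw=randread --bs=4k --size=1G --numjobs=4 \
    --runtime=60 --time_based --group_reporting

On a 7.2k SATA spindle this lands in the low hundreds of IOPS; anything SSD-backed should report thousands. If it doesn't, your provider is hiding a busy SAN behind the marketing.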

Pro Tip: Tuning the I/O Scheduler
If you are lucky enough to be on an SSD-backed instance or a high-performance VPS, the default Linux scheduler (CFQ) might actually slow you down. Switch to 'noop' or 'deadline' to reduce CPU overhead.

echo noop > /sys/block/sda/queue/scheduler

Add this to your /etc/rc.local to make it permanent. I've seen this drop latency by 10-15% on virtualized guests.
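Two ways I know of to make the change survive a reboot, depending on how your distro boots; swap the device name if your disk is not sda.

# Option 1: re-run the echo at boot from /etc/rc.local (keep it above any "exit 0" line)
echo noop > /sys/block/sda/queue/scheduler

# Option 2: set it globally with a kernel parameter on the kernel line in your GRUB config
elevator=noop

# Either way, verify after reboot - the active scheduler is shown in square brackets
cat /sys/block/sda/queue/scheduler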

Virtualization: OpenVZ vs. KVM

Another major factor in 2011 is the virtualization layer. Many budget hosts use OpenVZ. It’s container-based (sharing the host kernel), which is efficient but suffers heavily from "noisy neighbor" syndrome. If another user on the node gets DDoS'd or compiles a massive kernel, your performance tanks.
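Not sure what your current host is actually running under the hood? A couple of informal checks from inside the guest usually settle it; treat them as heuristics, since providers can and do customize their images.

# OpenVZ containers expose the bean counters file; a KVM guest will not have it
ls /proc/user_beancounters

# On a KVM guest the emulated CPU usually gives itself away
grep -i "qemu\|kvm" /proc/cpuinfo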

At CoolVDS, we have standardized on KVM (Kernel-based Virtual Machine). With KVM, you get a dedicated kernel and strict resource isolation. It behaves more like a real server. When we provision a slice for you, the RAM and CPU cycles are reserved. Combined with local RAID 10 storage (striping for speed, mirroring for redundancy), you avoid the network latency of a SAN entirely.

Data Sovereignty: The Norwegian Context

We cannot ignore the legal side. With the implementation of the Personopplysningsloven (Personal Data Act) and the watchful eye of Datatilsynet, where you store your data matters. Hosting on a US-based cloud (relying on Safe Harbor) might be legally convenient for some, but it introduces latency and potential privacy concerns.

For a Norwegian business, keeping data within the borders—or at least within the EEA—is not just about compliance; it's about physics. Pinging a server in Ashburn, Virginia from Oslo takes ~100ms. Pinging a server in Oslo takes ~5ms. For a chatty application or a database query loop, that difference destroys user experience.
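You can put numbers on this in under a minute. The hostnames below are placeholders, substitute whatever test targets your candidate providers publish:

# Round-trip time from your office or colo to each candidate datacenter
# (hostnames are hypothetical examples)
ping -c 10 speedtest.oslo.example.net
ping -c 10 speedtest.ashburn.example.net
# Compare the "rtt min/avg/max/mdev" summary line of each run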

Optimizing Your Stack for 2011

If you are stuck on legacy infrastructure, here are a few immediate fixes you can apply to mitigate slow storage:

  • Filesystem: Ensure you are using ext4. It is significantly faster than ext3 for large file manipulations and fsck times.
  • Mount Options: Edit your /etc/fstab and add the noatime flag to your root partition. This stops the server from writing metadata every time you just read a file (example below).
  • Database: If you use MySQL 5.1 or 5.5, check your innodb_buffer_pool_size. It should be 70-80% of your available RAM so you hit the disk as little as possible (see the sketch after this list).
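For the avoidance of doubt, here is roughly what the last two changes look like in practice. The device name, filesystem and RAM figure are illustrative; size the buffer pool against your own box, and remember the 70-80% rule assumes the VPS is dedicated to MySQL.

# /etc/fstab - add noatime to the root partition (device and options are examples)
/dev/sda1   /   ext4   defaults,noatime   0   1

# /etc/my.cnf (or /etc/mysql/my.cnf on Debian/Ubuntu), in the [mysqld] section
# Example for a VPS with 12 GB of RAM dedicated to the database:
innodb_buffer_pool_size = 9G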

The Verdict

Cloud storage is the future, but in 2011, the implementation varies wildly. Don't settle for oversold OpenVZ containers on slow SATA drives. Your application deserves the response times of local RAID 10 and the stability of KVM.

If you are ready to stop fighting with I/O wait and start shipping code, deploy a test instance on CoolVDS. We are one of the few providers in the Nordics aggressively rolling out SSD caching and pure KVM environments. Speed matters.
