
Cloud Storage in 2010: Why Latency and Spindles Still Rule the Datacenter


Surviving the I/O Bottleneck: Why Your "Cloud" Strategy Needs Local Storage

It is May 2009. Everyone is talking about "The Cloud." Amazon is pushing S3 and EC2, and managers are suddenly asking why we still own hardware. But let’s be honest: for those of us staring at iostat output at 3:00 AM, the cloud abstraction leaks. Badly.

Most virtualization providers hide the ugly truth: Disk I/O is the single biggest performance killer for database-driven applications. You can throw RAM at the problem, but eventually, MySQL has to write to disk. If your "cloud" storage is sitting across a congested SAN (Storage Area Network), your fancy application will crawl.
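If MySQL is the write-heavy culprit, a few InnoDB settings reduce how often it has to touch the disk at all. A minimal my.cnf sketch, assuming InnoDB on MySQL 5.0/5.1; the values here are illustrative, not a recommendation — size the buffer pool to your RAM and benchmark before and after:

```shell
# Append I/O-related InnoDB settings to my.cnf (values illustrative).
# On a dedicated DB host, the buffer pool is often sized to ~70% of RAM.
cat >> /etc/my.cnf <<'EOF'
[mysqld]
innodb_buffer_pool_size = 1G
# Bypass the OS page cache to avoid double buffering:
innodb_flush_method = O_DIRECT
# Flush the log once per second instead of per commit
# (trades strict durability for far fewer fsyncs -- optional):
innodb_flush_log_at_trx_commit = 2
EOF
```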

The Latency Lie: SAN vs. Local Storage

In the rush to abstract storage, many providers are moving customer data to centralized NAS/SAN arrays. They sell you on "infinite scalability." They forget to mention the latency penalty.

I recently audited a Magento installation for a client migrating from a dedicated box to a major US cloud provider. Their page load times jumped from 1.2s to 3.5s. The CPU was idle. The RAM was free. The bottleneck? Wait I/O.

Every read/write operation had to traverse the network layer to reach the storage array. In a high-concurrency environment, milliseconds add up to seconds.
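You can confirm this pattern yourself before blaming the application. A quick diagnostic sketch, assuming the sysstat package is installed:

```shell
# Extended device stats every 5 seconds. High await (ms per request)
# with %util near 100 means the disk, not the CPU, is the bottleneck --
# exactly the signature of a congested SAN path.
iostat -x 5

# Cross-check with vmstat: a large "wa" column is CPU time lost to I/O wait.
vmstat 5
```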

The CoolVDS Architecture: Local RAID-10

At CoolVDS, we reject the centralized SAN model for primary storage. We believe in physics. Data needs to be close to the CPU.

Architect's Note: We deploy RAID-10 SAS (15k RPM) arrays locally on the host node. For premium tiers, we are beginning to roll out Enterprise SSDs (Intel X25-E class). The difference in IOPS is not 2x; it is 100x.
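To see the gap on your own hardware, a rough sanity check (note the caveat: dd measures sequential streaming speed, while the 100x claim above is about random IOPS, where SSDs pull furthest ahead — for random workloads you need a proper benchmark tool):

```shell
# Sequential write test with O_DIRECT to bypass the page cache.
# Writes 1 GB to a scratch path; clean up afterwards.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 oflag=direct
rm -f /tmp/ddtest

# Read-only buffered read timing on the underlying device:
hdparm -t /dev/sda
```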

Tuning Linux for High I/O Throughput

Whether you are on a VPS or a dedicated server, default Linux distributions (CentOS 5, Debian Lenny) are often tuned for compatibility, not speed. If you are running a database on a virtualized file system, you need to reduce overhead.

Here are the configurations I apply to every fresh node before traffic hits.

1. Kill the Access Time Writes

By default, Linux writes to the disk every time a file is read, just to update the access time metadata. For a web server reading thousands of PHP and image files, this is death by a thousand cuts.

Edit your /etc/fstab and add the noatime flag:

/dev/sda1 / ext3 defaults,noatime,nodiratime 1 1
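The flags only take effect at mount time, so remount the filesystem (or reboot) and verify they stuck:

```shell
# Apply the new fstab options without a reboot.
mount -o remount,noatime,nodiratime /

# Confirm the root mount now shows noatime,nodiratime.
mount | grep ' on / '
```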

2. The I/O Scheduler

The default scheduler in the 2.6 kernel is often CFQ (Completely Fair Queuing). It tries to be fair, but database servers don't want fairness; they want throughput. For virtualized guests (Xen or KVM), the deadline or noop scheduler often performs better because the hypervisor handles the physical disk sorting.

Check your current scheduler:

cat /sys/block/sda/queue/scheduler
[cfq] deadline noop

Change it on the fly (benchmark this before making it permanent!):

echo deadline > /sys/block/sda/queue/scheduler
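The echo above does not survive a reboot. A sketch for making it permanent on the CentOS 5 / Debian Lenny era boot chain (adjust the kernel line and device names to match your own system):

```shell
# Option 1: set the default elevator for all devices on the kernel line in
# /boot/grub/grub.conf (CentOS) or /boot/grub/menu.lst (Debian), e.g.:
#   kernel /vmlinuz-2.6.18-128.el5 ro root=/dev/sda1 elevator=deadline

# Option 2: re-apply per device at boot via rc.local.
echo 'echo deadline > /sys/block/sda/queue/scheduler' >> /etc/rc.local
```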

Data Sovereignty: The Norwegian Advantage

Performance isn't just about speed; it's about stability and legal safety. With the recent debates surrounding the US Patriot Act and the scope of data surveillance, hosting critical business data within the EEA (European Economic Area) is becoming a requirement for serious CIOs.

Norway offers a unique stronghold. Under the Personal Data Act (Personopplysningsloven) and the oversight of Datatilsynet, your data has legal protections that US-based clouds cannot guarantee. Furthermore, by hosting in Oslo, you are peering directly at NIX (Norwegian Internet Exchange).

Latency from Oslo (Ping Test)

Destination     Latency
Oslo (Local)    < 2 ms
Stockholm       ~ 8 ms
London          ~ 25 ms
New York        ~ 90 ms

If your customer base is Scandinavian, hosting in Virginia or Dublin introduces latency you cannot optimize away.
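These figures are easy to reproduce from your own desk. A sketch — the hostname is just an example of an Oslo-hosted destination, so substitute your own target:

```shell
# Ten quiet pings, summary only.
ping -c 10 -q vg.no

# mtr shows where along the path the latency accumulates.
mtr --report --report-cycles 10 vg.no
```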

Looking to 2010

As we approach 2010, the divide between "cheap VPS" and "Enterprise Infrastructure" will widen. The market is flooded with oversold OpenVZ containers that crumble under load.

If you care about TCO and uptime, stop sharing your disk IOPS with 500 other noisy neighbors on a slow SAN. Move to a provider that guarantees resources.

Ready to test real metal performance? Spin up a CoolVDS instance with local RAID-10 storage today. We don't cap your I/O, and we don't route your data through a foreign jurisdiction.
