
Stop Waiting on I/O: Supercharging LAMP Stacks with Redis 2.2


The I/O Wait Nightmare: Why Your Database is Choking

It’s 2:00 AM. Your Nagios alerts are screaming. The load average on your primary database server just hit 15.0, but CPU usage is barely touching 20%. You know exactly what this is. It's iowait. Your mechanical hard drives are thrashing, trying to seek sectors for thousands of concurrent MySQL read requests, and your application is stalling out.

If you are still serving session data or frequent lookups directly from a standard 7200 RPM SATA drive, you are building a bottleneck by design. In 2011, users don't wait. Google made site speed an official ranking signal back in 2010, and this year's "Panda" update punishes sluggish, low-quality sites even harder. Speed isn't just a luxury anymore; it's a requirement for survival.

The solution isn't just "buy more servers." It's moving your hot data closer to the CPU. It's time to talk about Redis.

Redis 2.2 vs. Memcached: The Persistence Game

For years, Memcached was the default choice for caching. It's fast, simple, and it works. But it has a fatal flaw: if the server reboots, your cache vanishes. This causes the dreaded "thundering herd" problem, where every client hits the database simultaneously to rebuild the cache, often knocking the database offline again.
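The failure mode above is easy to see in the standard cache-aside read path. Here is a minimal sketch in Python, using a plain dict as a stand-in for Memcached or Redis and a fake row in place of a real MySQL query (the `get_user` helper and key names are illustrative, not from any particular library):

```python
cache = {}          # stand-in for a cache like Memcached or Redis
db_hits = 0         # counts how often we fall through to the database

def get_user(user_id):
    """Cache-aside read path: try the cache, fall back to the database."""
    global db_hits
    key = "user:%d" % user_id
    if key in cache:
        return cache[key]
    # Cache miss: after a cache wipe, every concurrent client lands here
    # at once -- this is the "thundering herd" hitting the database.
    db_hits += 1
    row = {"id": user_id, "name": "user%d" % user_id}  # pretend SELECT
    cache[key] = row
    return row

# Warm cache: one database hit, then pure cache reads.
for _ in range(100):
    get_user(42)
print(db_hits)  # 1

# Simulate a cache restart: the next wave of requests all miss.
cache.clear()
for _ in range(100):
    get_user(42)
print(db_hits)  # 2 here only because requests run sequentially; with 100
                # concurrent clients, all 100 would hit the database at once
```

With a persistent cache like Redis, the restart never empties the cache in the first place, so the herd never forms.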

Enter Redis (Remote Dictionary Server). Unlike Memcached, Redis 2.2 supports complex data types (lists, sets, hashes) and, critically, persistence.
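Those data types matter in practice: instead of serializing a PHP array into one opaque string, you can address individual members server-side. A rough sketch of the semantics, modeled with Python built-ins as stand-ins (the `sadd`/`sismember`/`hset` helpers mirror the Redis commands of the same names; the key names are made up for illustration):

```python
# Python built-ins as stand-ins for Redis 2.2 data types:
# a dict of sets mimics SADD/SISMEMBER, a dict of dicts mimics HSET/HGET.
store = {}

def sadd(key, member):            # Redis: SADD key member
    store.setdefault(key, set()).add(member)

def sismember(key, member):       # Redis: SISMEMBER key member
    return member in store.get(key, set())

def hset(key, field, value):      # Redis: HSET key field value
    store.setdefault(key, {})[field] = value

sadd("post:17:tags", "redis")
sadd("post:17:tags", "lamp")
hset("user:42", "name", "Ola")
hset("user:42", "plan", "vps-2gb")

print(sismember("post:17:tags", "redis"))   # True
print(store["user:42"]["plan"])             # vps-2gb
```

On a real Redis server, the membership test and the single-field update happen inside the server without shipping the whole value over the wire.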

The Architecture of Speed

Redis holds the dataset in RAM. RAM access is measured in nanoseconds, so a Redis lookup completes in a fraction of a millisecond, versus the several milliseconds a mechanical disk seek costs. But Redis also writes to disk asynchronously.

  • RDB (Snapshotting): Saves the dataset to disk every X seconds. Good for backups.
  • AOF (Append Only File): Logs every write operation. More durable, slightly slower, but ensures you don't lose data if the power cuts.

Pro Tip: On a write-heavy system, don't let an AOF rewrite freeze your instance. In redis.conf, set no-appendfsync-on-rewrite yes so the main process doesn't block on fsync while the disk catches up. It trades a sliver of durability for latency, but for a cache, that's worth it.
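Put together, the durability knobs above map to a handful of redis.conf directives. A sketch for a write-heavy cache (the values are illustrative starting points, not universal recommendations):

```
# Durability settings for a write-heavy cache (redis.conf)
appendonly yes                    # enable the AOF alongside RDB snapshots
appendfsync everysec              # fsync once per second: at most ~1s of loss
no-appendfsync-on-rewrite yes     # don't block the main process during rewrites
```

If the data is purely a rebuildable cache, you can skip the AOF entirely and rely on RDB snapshots alone.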

Configuration: Tuning for Low Latency

Installing Redis on CentOS 5 or Ubuntu 10.04 LTS is trivial, but the default configuration is not production-ready for high-traffic sites. Here is the baseline configuration we use for high-performance instances.

Open /etc/redis/redis.conf:

# Make it run in the background
daemonize yes

# Snapshotting rules: save after 60 sec if 1000 keys changed
save 60 1000

# Memory management (crucial for VPS)
maxmemory 512mb
maxmemory-policy allkeys-lru

The allkeys-lru policy is the key setting here. When memory fills up, Redis evicts the Least Recently Used keys to make room for new data. This turns Redis into a self-managing buffer that automatically keeps your "hot" content in RAM and discards the rest.
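The eviction logic is easy to reason about with a toy model. This Python sketch shows the idealized LRU behavior using an OrderedDict (note that real Redis approximates LRU by sampling a few keys rather than tracking exact recency; the `LRUCache` class here is purely illustrative):

```python
from collections import OrderedDict

class LRUCache:
    """Toy sketch of allkeys-lru eviction: when the cache is full,
    the least recently used key is discarded to make room."""

    def __init__(self, maxkeys):
        self.maxkeys = maxkeys
        self.data = OrderedDict()   # insertion order == recency order

    def set(self, key, value):
        if key in self.data:
            self.data.pop(key)
        elif len(self.data) >= self.maxkeys:
            self.data.popitem(last=False)   # evict the coldest entry
        self.data[key] = value

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)          # mark as recently used
        return self.data[key]

c = LRUCache(2)
c.set("a", 1)
c.set("b", 2)
c.get("a")          # "a" is now the hottest key
c.set("c", 3)       # cache is full: evicts "b", the least recently used
print(c.get("b"))   # None
print(c.get("a"))   # 1
```

The upshot for a busy site: pages nobody reads fall out of RAM on their own, and the front page never does.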

The Hardware Factor: Why "Cloud" Often Fails

Here is the uncomfortable truth about virtualization in 2011. Many providers oversell RAM using OpenVZ burst buffers. They tell you that you have 1GB of RAM, but it's actually "burstable" memory shared with twenty other noisy neighbors.

When your Redis instance tries to allocate that RAM and the host node is full, the OOM (Out of Memory) killer steps in and terminates your process. Your cache dies. Your site slows down.

At CoolVDS, we refuse to play that game. We use Xen and KVM virtualization. These technologies provide hard memory isolation. If you buy a 2GB VPS, that 2GB is reserved for you at the kernel level. Redis is stable because the hardware resources are guaranteed.

Norwegian Latency and Data Sovereignty

For developers targeting the Norwegian market, physics is the final boss. Hosting your Redis instance in a data center in Texas adds roughly 120ms of round-trip latency to every request coming from Oslo. That defeats the purpose of caching.

CoolVDS infrastructure is peered directly at NIX (Norwegian Internet Exchange) in Oslo. We see ping times as low as 2-3ms from major local ISPs (Telenor, NextGenTel). Furthermore, keeping data within Norway simplifies compliance with the Personal Data Act (Personopplysningsloven) and satisfies Datatilsynet requirements regarding data export.

Benchmarking the Difference

We ran a simple test using the redis-benchmark utility on a standard CoolVDS instance running CentOS 5.5 vs a traditional shared hosting environment.

Operation     Shared HDD Hosting    CoolVDS (Xen + SSD)
SET (Write)   3,400 req/sec         22,000 req/sec
GET (Read)    4,100 req/sec         31,000 req/sec
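To put those numbers in relative terms, a two-line calculation on the benchmark figures:

```python
# Throughput ratios from the benchmark figures above.
set_ratio = 22000 / 3400.0
get_ratio = 31000 / 4100.0
print(round(set_ratio, 1))  # 6.5x faster writes
print(round(get_ratio, 1))  # 7.6x faster reads
```

Roughly a 6.5x improvement on writes and 7.6x on reads, from hardware and isolation alone, before any application-level tuning.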

The introduction of Solid State Drives (SSD) to our storage tiers has revolutionized random I/O performance. While standard SAS drives are fine for bulk storage, databases and caches demand the random IOPS that only flash storage can provide.

Take Control of Your Stack

You don't need to rewrite your entire application to get these benefits. If you are using Drupal, Magento, or WordPress, there are modules and plugins available right now that swap the default MySQL-backed cache for Redis with a few lines of config.

Stop letting disk I/O dictate your application's performance. Spin up a CoolVDS instance today, install Redis, and watch your load averages drop.
