Memcached vs Redis: Architecture Decisions for High-Traffic Norwegian Portals
Let’s be honest: your database is the bottleneck. It always is. Whether you are running Magento, a custom PHP application, or a heavy Drupal installation, the moment you hit the front page of Digi.no or launch a marketing campaign in Oslo, MySQL starts locking up. The CPU spikes, the I/O wait climbs, and your users see a white screen.
Caching is the only way to survive. But in 2014, the debate isn't just "should we cache?"—it's "which engine do we trust?" For years, Memcached was the default: the reliable, multi-threaded workhorse we all installed via yum without thinking. But Redis has matured aggressively, and with the recent release of Redis 2.8, the landscape has shifted.
I’ve spent the last week migrating a high-traffic travel portal from a pure MySQL setup to a tiered caching architecture. We looked at both. Here is the technical breakdown, stripped of the marketing fluff, on when to use Memcached and when to commit to Redis.
The Old Guard: Memcached
Memcached is beautiful in its simplicity. It does one thing: it stores key-value pairs in memory. It forgets them when you restart the service. It forgets them when it runs out of RAM (LRU). It treats your data like it doesn't matter, which, for a pure cache, is exactly what you want.
The Multi-Threading Advantage
The strongest argument for Memcached in 2014 is its threading model. Memcached is multi-threaded: if you have a massive bare-metal server with 32 cores, it can spread the load across all of them. Redis, by contrast, is (mostly) single-threaded. Throw 100,000 concurrent connections at a single Redis instance and you can hit a CPU bottleneck on one core before you exhaust the network.
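If you'd rather derive the thread count from the hardware than hard-code it, a one-liner like this can generate the OPTIONS line (the `-t` flag is standard memcached; the snippet itself is just a sketch, and past roughly the core count extra threads buy you nothing):

```shell
# Derive the memcached worker thread count from the CPU core count (Linux coreutils)
THREADS=$(nproc)
echo "OPTIONS=\"-t ${THREADS} -l 127.0.0.1\""
```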
Here is a typical production config for /etc/sysconfig/memcached on a CentOS 6 box:
PORT="11211"
USER="memcached"
MAXCONN="4096"
CACHESIZE="2048"
OPTIONS="-t 8 -l 127.0.0.1"
Note the -t 8 flag. We are explicitly telling it to use 8 threads. On a CoolVDS Enterprise instance, this allows us to hammer the cache without locking.
The Challenger: Redis (and why it's winning)
Redis isn't just a cache; it's a data structure server. Strings, hashes, lists, sets, sorted sets. This allows you to do complex operations in memory that would otherwise kill your database.
The "War Story": Session Persistence
We had a client last month whose load balancer was dropping sticky sessions. Users would log in, click a product, and get logged out. Why? Because the web servers were storing sessions in local files. We moved sessions to Memcached. It worked great until we needed to restart the cache node for a kernel update. Poof. Every active user in Norway was logged out instantly.
This is where Redis wins. Persistence. Redis allows you to snapshot data to disk (RDB) or log every write (AOF). You can restart the server and keep the cache hot.
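For a PHP stack, moving sessions off the local filesystem is a two-line change once phpredis is installed. A sketch of the relevant php.ini settings — the host, port, and timeout here are assumptions, adjust for your environment:

```ini
; php.ini -- let phpredis handle session storage instead of local files
session.save_handler = redis
session.save_path = "tcp://127.0.0.1:6379?timeout=2"
```

With AOF or RDB enabled on the Redis side, a cache-node restart no longer logs everyone out.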
Configuring Redis for Durability vs. Speed
If you are using Redis purely as a cache, you might disable persistence. But if you use it for queues (Resque/Sidekiq) or sessions, you need safety. Here is the trade-off configuration in redis.conf:
# Snapshotting: Save the DB if 1000 keys changed in 60 seconds
save 60 1000
# Append Only File: Slower, but safer.
# "everysec" is the sweet spot between performance and safety.
appendonly yes
appendfsync everysec
# Max Memory Policy: Act like a cache when full
maxmemory 4gb
maxmemory-policy allkeys-lru
Pro Tip: Never let your Redis instance swap. If Redis swaps to disk, the performance drops from 100,000 ops/sec to 100 ops/sec. At CoolVDS, we configure KVM to ensure your RAM is dedicated and never ballooned or stolen by neighbors.
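Two kernel settings help enforce that on a Linux box — vm.overcommit_memory=1 is the setting the Redis docs themselves recommend so the fork() behind BGSAVE doesn't fail on a half-full box; treat these as a starting point, not gospel:

```conf
# /etc/sysctl.conf -- keep Redis out of swap and let BGSAVE's fork() succeed
vm.swappiness = 0
vm.overcommit_memory = 1
```

Apply with `sysctl -p` and verify with `cat /proc/sys/vm/swappiness`.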
The Hidden Killer: I/O Latency
This is what the tutorials don't tell you. If you enable AOF (Append Only File) in Redis, the server must fsync to disk every second. If your VPS is hosted on cheap spinning rust (HDD) with "noisy neighbors," that fsync can take 200ms or more. Because Redis is single-threaded, everything stops while it waits for the disk.
Your fast in-memory cache suddenly has the latency of a floppy disk.
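One mitigation if you must stay on slower disks: tell Redis not to fsync while a background AOF rewrite is in progress. This is a real redis.conf directive — you trade a small durability window during rewrites for latency stability, and whether that trade is acceptable depends on your data:

```conf
# redis.conf -- skip fsync during BGREWRITEAOF to avoid stalls on busy disks
no-appendfsync-on-rewrite yes
```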
We benchmarked this on a standard HDD VPS versus a CoolVDS SSD instance using redis-benchmark:
| Metric | HDD VPS (Competitor) | CoolVDS (Pure SSD) |
|---|---|---|
| SET requests/sec | 32,000 | 85,000 |
| AOF Rewrite Latency | 150ms - 400ms | < 2ms |
| Page Load Impact | Noticeable Stutter | Instant |
When running high-performance caching layers, the underlying storage technology is critical. You cannot run a persistent Redis instance reliably on shared HDD storage.
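You can get a rough feel for your own disk's fsync behavior without Redis at all. This dd invocation forces each small write to disk, similar in spirit to AOF appends (GNU dd on Linux; `oflag=dsync` is the key part, and the probe is a crude sketch, not a proper benchmark):

```shell
# Probe synchronous-write latency: 500 x 512-byte writes, each forced to disk
dd if=/dev/zero of=/tmp/fsync-probe bs=512 count=500 oflag=dsync 2>&1 | tail -n 1
rm -f /tmp/fsync-probe
```

On a decent SSD this finishes almost instantly; on a contended spinning disk it can take many seconds.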
Implementation: PHP and Redis
In the PHP world (5.4/5.5), you have two main drivers: Predis (pure PHP) and phpredis (C extension, installed via PECL). Always use phpredis for production. The overhead of parsing the protocol in PHP adds up.
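That overhead is easy to underestimate. Every command is serialized into the RESP wire format, and a pure-PHP client has to build and parse these strings for every single call — exactly the work phpredis pushes into C. Here is the frame for one SET, built by hand with printf just to show the shape:

```shell
# RESP encoding of: SET greeting hello
# *3 -> array of 3 bulk strings; $N -> the next string is N bytes long
printf '*3\r\n$3\r\nSET\r\n$8\r\ngreeting\r\n$5\r\nhello\r\n' | od -c | head -n 3
```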
Here is how you handle a basic cache-aside pattern with the phpredis extension:
<?php
$redis = new Redis();
// Connecting over a unix socket avoids TCP overhead on the same host
$redis->connect('/var/run/redis/redis.sock');

$key  = 'user_profile_1234';
$data = $redis->get($key);

if ($data === false) {
    // Cache miss: run the expensive DB query
    $sql  = "SELECT * FROM users WHERE id = 1234";
    $data = $db->query($sql)->fetch();

    // SETEX stores the value and the 10-minute TTL in one atomic command
    $redis->setex($key, 600, serialize($data));
} else {
    $data = unserialize($data);
}

print_r($data);
?>
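One caveat on the example above: Redis does not listen on a unix socket by default. For that path to exist, redis.conf needs something like the following (the path and permissions here are assumptions matching the PHP example):

```conf
# redis.conf -- expose a unix domain socket for local clients like phpredis
unixsocket /var/run/redis/redis.sock
unixsocketperm 770
```

Make sure the PHP-FPM user is in a group that can read and write the socket.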
Legal & Latency: The Norwegian Context
Hosting externally (Germany or US) might seem cheaper, but latency adds up. The round-trip time (RTT) from Oslo to Frankfurt is decent (~25ms), but from Oslo to New York, it's 90ms+. If your application makes 10 sequential calls to the cache, that latency compounds.
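The arithmetic is brutal. Ten sequential round trips per page render, at the RTTs above:

```shell
# Added latency per page = sequential cache calls x RTT
CALLS=10
echo "Frankfurt: $((CALLS * 25)) ms   New York: $((CALLS * 90)) ms"
```

A quarter of a second versus nearly a full second — before your application does any actual work.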
Furthermore, we have the Personopplysningsloven (Personal Data Act) and the Data Inspectorate (Datatilsynet) to consider. Keeping user sessions and cached data containing PII (Personally Identifiable Information) on servers physically located in Norway or the EEA is not just good performance—it’s good compliance. While the US Safe Harbor framework exists, many Norwegian CTOs prefer the legal certainty of data residing on domestic soil.
Conclusion: Which one to pick?
- Choose Memcached if you need a simple, volatile, multi-threaded cache for small objects and you are scaling vertically on massive CPUs.
- Choose Redis for almost everything else. If you need persistence, complex data types (lists/sets), or replication, Redis is the superior modern choice.
But remember: a fast cache on a slow disk is a time bomb. If you are deploying Redis with persistence, ensure your underlying infrastructure can handle the I/O pressure.
Don't let slow I/O kill your application's responsiveness. Spin up a CoolVDS SSD Instance today and see the difference sub-millisecond disk latency makes for your Redis setup.