Stop the I/O Thrashing: Scaling PHP Sessions with Redis 2.0
If I see one more /tmp partition filled with millions of sess_* files, I might just unplug the server myself. It is 2010. We are building high-traffic applications, yet so many systems administrators are still relying on the default file-based session handler in PHP. It works fine for a blog with ten visitors. But when you hit 500 concurrent users? You are asking your hard drive to seek thousands of times per second just to figure out if a user is logged in.
The result is always the same: iowait spikes, the load average climbs, and your site feels sluggish even though the CPU is idling. I have seen Magento stores melt down not because of the database, but because the file system locking overhead on session files choked the I/O subsystem.
You need to move sessions into RAM. But unlike Memcached, we actually want to keep that data if the daemon restarts. This is why Redis 2.0 is becoming the standard for serious deployments.
The Problem with File-Based Sessions
By default, PHP writes session data to disk. On a standard HDD (even 15k RPM SAS drives), random write performance is physically limited. When you have a cluster of web servers, it gets worse. You either have to use sticky sessions (which breaks load balancing efficiency) or mount a central NFS share for sessions.
Do not use NFS for sessions. Locking over a network file system is a recipe for latency spikes. I've debugged setups where the NFS locking overhead added 200ms to every single page load. That is unacceptable.
Why Not Memcached?
Memcached is fast. We use it extensively for caching database queries. But it is purely volatile. If your Memcached node crashes or you need to restart the service to apply a patch, every single active user on your site gets logged out instantly. For an e-commerce site, that is an abandoned cart disaster.
Enter Redis 2.0
Redis (Remote Dictionary Server) gives us the speed of in-memory storage with the persistence of a database. With the release of Redis 2.0 earlier this year, we gained Virtual Memory support and better replication, making it robust enough for production session handling.
Here is the architecture we are aiming for:
- Speed: Sessions are read from RAM. No disk seek latency.
- Persistence: Redis snapshots data to disk asynchronously (RDB) or logs every write (AOF), so reboots don't kill sessions.
- Structure: Unlike the blob storage of Memcached, Redis understands data structures, allowing us to expire keys automatically.
Configuration Guide
Assuming you are running a standard RHEL/CentOS 5 or Debian Lenny environment, you first need the phpredis extension. Don't rely on pure PHP client libraries; the compiled C extension is significantly faster.
Once compiled and loaded, tell PHP to swap handlers. You can do this in your php.ini or strictly inside your Apache vhost configuration to isolate it per site.
; In /etc/php5/apache2/php.ini
session.save_handler = redis
session.save_path = "tcp://127.0.0.1:6379?weight=1&timeout=2.5"
If you are running a high-availability setup (which you should be), you can define a master/slave setup directly in the path, though I prefer handling that via a TCP proxy or virtual IP.
Pro Tip: In yourredis.conf, pay attention tomaxmemory. If Redis fills up, it will stop accepting writes. Set a hard limit and use an eviction policy likevolatile-lruto ensure old sessions are purged to make room for new ones without crashing the service.
The Hardware Reality
Redis is fast, but it is single-threaded. This means CPU speed matters more than core count. A 3.0GHz dual-core handles Redis better than a 2.0GHz quad-core. Furthermore, while Redis runs in RAM, the persistence mechanism (AOF/RDB) relies heavily on disk write speeds when dumping memory to disk.
This is where the underlying infrastructure bites you. On cheap VPS providers, "disk" is often a shared slice of a SATA array. When your neighbor runs a backup, your Redis persistence lags, and your RAM fills up.
At CoolVDS, we have moved entirely to Enterprise SSD storage for our high-performance tier. In 2010, SSDs are still a premium, but for database and Redis workloads, the IOPS advantage is mathematical, not marketing. We see random write speeds 50x faster than traditional SAS arrays. If you are running Redis with AOF (Append Only File) enabled for maximum data safety, you cannot afford to be on spinning rust.
Data Sovereignty in Norway
Latency isn't the only concern. Under the Norwegian Personopplysningsloven (Personal Data Act) and the EU Data Protection Directive, you are responsible for where your user data lives. Session IDs often link to personal identifiers.
Hosting your Redis instance on a cloud server in the US introduces not just 120ms of latency (killing your snappy feel), but also legal headaches regarding Safe Harbor frameworks. Keeping the data inside a Norwegian datacenter ensures you are compliant with Datatilsynet's guidelines and provides the lowest possible latency to your Nordic user base.
Benchmark: File vs. Redis
| Metric | File-Based (Ext3) | Redis 2.0 (CoolVDS SSD) |
|---|---|---|
| Read Latency | 0.5ms - 50ms (variable) | < 0.1ms (consistent) |
| Concurrent Writes | Blocking (File Locks) | Non-Blocking (Atomic) |
| Max Throughput | ~400 req/sec | ~25,000+ req/sec |
Final Thoughts
Stop treating sessions as an afterthought. They are the glue of your user experience. If you are scaling out to multiple web nodes, or just tired of high I/O wait times, switch to Redis.
And if you need a platform that delivers the raw single-thread CPU performance and SSD I/O required to keep Redis humming, verify it yourself. Spin up a Debian instance on CoolVDS today and run redis-benchmark. The numbers don't lie.