The Silent Killer of High-Traffic Sites
It usually happens on a Tuesday morning. Marketing sends out a newsletter, traffic spikes, and suddenly your load average shoots through the roof. You check top. CPU is fine. Memory is fine. But wa (I/O wait) is hovering at 40%.
Your server isn't processing requests; it's waiting for the hard disk.
If you are running a standard LAMP stack (Linux, Apache, MySQL, PHP), the culprit is often the humblest part of your architecture: PHP sessions. By default, PHP writes session data to files in /var/lib/php/session. On a busy site with 5,000 active users, that is 5,000 small files being opened, locked, read, written, and closed constantly. Worse, the file handler holds an exclusive lock on the session file for the duration of each request, so concurrent requests from the same user queue up behind one another. Even with 15k RPM SAS drives in RAID 10, the mechanical seek times will eventually kill your response time.
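Before blaming the session handler, confirm the diagnosis. A minimal sketch (the path is the stock RHEL/CentOS default; adjust for your build):

```shell
# Count live session files. Thousands here means thousands of
# small-file opens, locks, and writes per minute.
SESSION_DIR="${SESSION_DIR:-/var/lib/php/session}"
COUNT=$(ls "$SESSION_DIR" 2>/dev/null | wc -l)
echo "session files: $COUNT"

# Then watch the disks themselves. Sustained high %util with small
# average request sizes is the many-small-files signature:
#   iostat -x 2
```

If the file count is in the thousands and iostat shows the disks pegged while CPU idles, sessions are a prime suspect.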
Enter Redis: The New Challenger
For years, Memcached has been the go-to for offloading this work. But there is a new project gaining serious traction in the Ruby and Python communities that is finally usable for us PHP sysadmins: Redis.
Unlike Memcached, Redis allows for persistence. If your VDS reboots, you don't necessarily log out every single user on your platform. It keeps the dataset in RAM but snapshots to disk asynchronously.
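The snapshot behaviour is tunable in redis.conf. The thresholds below are the upstream defaults; verify them against the version you actually install:

```ini
# redis.conf -- asynchronous snapshots to disk (RDB)
save 900 1      # snapshot if at least 1 key changed in 900 s
save 300 10     # ...at least 10 keys in 300 s
save 60 10000   # ...at least 10000 keys in 60 s
dbfilename dump.rdb
dir /var/lib/redis   # our convention; upstream default is the working dir
```

Tighter save points mean less session loss on a crash, at the cost of more background disk writes.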
Configuration: moving from Files to TCP
Getting this running on a CentOS 5 box requires compiling from source, as the repos are behind. Once you have the Redis daemon running on port 6379, you need the PHP extension. Don't rely on the default PECL beta unless you've stress-tested it; we prefer compiling the phpredis extension manually for stability.
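For reference, the build boils down to the following sketch. Tarball names and paths are illustrative; grab the current releases from the Redis and phpredis project pages:

```shell
# Build Redis: plain ANSI C, no configure step needed.
tar xzf redis-1.0.tar.gz && cd redis-1.0
make
cp redis-server redis-cli /usr/local/bin/

# Build the phpredis extension against your PHP install.
cd ../phpredis
phpize
./configure
make && make install    # drops redis.so into your extension_dir

# Enable it (CentOS convention for per-extension ini files):
echo 'extension=redis.so' > /etc/php.d/redis.ini
```

Restart Apache afterwards and confirm the extension shows up in php -m before touching your session config.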
Here is the critical change in your php.ini (or your virtual host config):
```ini
; Old slow file based
; session.save_handler = files
; session.save_path = "/var/lib/php/session"

; The new hotness
session.save_handler = redis
session.save_path = "tcp://127.0.0.1:6379"
```
This bypasses the filesystem entirely. No file locks. No disk seeks. Just raw TCP overhead and RAM speed.
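To verify sessions are actually landing in Redis, load any page that calls session_start() and then list the keys. The PHPREDIS_SESSION prefix is what the phpredis builds we have tested use; confirm it against your own version:

```shell
# Each matching key is one live session.
redis-cli keys 'PHPREDIS_SESSION:*'
```

If the list stays empty, check Apache's error log: a typo in session.save_path fails silently until the first session write.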
Pro Tip for High Availability: If you have multiple web heads, point them all at a single CoolVDS instance running Redis over the private network (ETH1). That gives you a shared session store, so any web head can serve any user, and you can drop the sticky-session gymnastics at the load balancer entirely.
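On each web head the only change is pointing save_path at the Redis box's private address (10.0.0.10 below is a made-up example; substitute your own ETH1 address):

```ini
session.save_handler = redis
session.save_path = "tcp://10.0.0.10:6379"
```

Keep Redis bound to the private interface only. It has no authentication worth the name at this stage, so it must never listen on a public IP.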
The "War Story": E-commerce in Oslo
We recently migrated a Magento installation for a client based here in Oslo. They were running on a shared hosting environment (naming no names) where disk I/O was throttled. Checkout took 8 seconds. We moved them to a CoolVDS Business Plan with dedicated RAM and implemented the Redis backend for sessions and cache.
The result? Checkout time dropped to 450ms. Conversions went up 15% overnight.
Why Underlying Hardware Matters (The CoolVDS Difference)
Redis is entirely in-memory. This makes it incredibly fast, but also sensitive to memory ballooning.
Many VPS providers in Norway use OpenVZ virtualization, which allows them to oversell RAM. If the host node runs out of memory, your Redis instance gets killed by the OOM (Out Of Memory) killer. Your users get logged out. Chaos ensues.
At CoolVDS, we prioritize Xen virtualization. This guarantees that the RAM you pay for is actually assigned to your kernel. When you allocate 2GB to Redis, it stays there. Furthermore, our datacenters in Oslo peer directly at NIX (Norwegian Internet Exchange), so traffic from Norwegian users reaches your server without detouring through Stockholm or Amsterdam.
Compliance Note (Datatilsynet)
For our Norwegian clients, storing session data in RAM on servers physically located in Norway helps with compliance regarding the Personopplysningsloven (Personal Data Act). You know exactly where your customer data lives—not floating in some ambiguous cloud in the US.
Final Thoughts
Disk I/O is the bottleneck of the Web 2.0 era. Stop letting spinning platters dictate your application speed. Redis is still young (currently v1.0), but for session handling, it is already production-ready.
Ready to test the difference? Spin up a CoolVDS instance today. We offer a clean Linux environment perfect for compiling custom stacks without the "noisy neighbor" issues you find elsewhere.