The "Serverless" Myth: Why High-Performance Architecture Still Needs Iron
Date: February 24, 2014
Let’s cut through the noise. If I hear one more developer at a hackathon in Oslo talk about "Serverless" deployment because they pushed code to Heroku or Parse, I might just `rm -rf /` my own workstation. There is no such thing as serverless. There is only someone else's computer—and usually, that computer is overworked, noisy, and located too far away from your Norwegian customers.
I get the appeal. You want to deploy code, not manage kernels. You want to break your monolith into the trendy new "microservices" pattern Netflix is blogging about. But here is the hard truth: when you abstract away the OS, you abstract away your ability to optimize I/O. And in 2014, I/O is the bottleneck that kills applications.
At CoolVDS, we are seeing a steady stream of CTOs migrating back from the PaaS clouds. Why? Because they have realized that paying $0.05 per hour for a slice of a shared container with erratic latency is a bad trade compared to owning a dedicated KVM slice with raw PCIe SSD power.
The Architecture: "Sane Man's Serverless"
You don't need a Platform-as-a-Service to get agility. You need a Service-Oriented Architecture (SOA) backed by automation tools like Puppet or Ansible, running on high-performance virtual hardware. This gives you the "push-to-deploy" feel without the "pray-for-uptime" anxiety.
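To make "push-to-deploy" concrete, here is a minimal deploy sketch using Fabric, a Python tool in the same family (Puppet and Ansible solve this declaratively and at larger scale). The host name, directory, and service name below are placeholders, not a prescription:

# fabfile.py -- run `fab deploy` from your workstation (Fabric 1.x)
from fabric.api import env, run, cd, sudo

env.hosts = ['app1.example.no']          # hypothetical app server

def deploy():
    with cd('/srv/myapp'):               # hypothetical app directory
        run('git pull origin master')    # ship the new code
        sudo('service myapp restart')    # hypothetical init script

One command, a known target, and a full OS underneath when something goes sideways.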
Here is the battle-tested pattern we are deploying for high-traffic e-commerce sites across Scandinavia this month.
1. The Edge Router (Nginx 1.4)
Forget the default load balancers provided by US cloud giants. They lack the granular control you need for varying traffic loads. We use Nginx 1.4 running on a minimal Linux distro (CentOS 6 or Ubuntu 12.04 LTS).
The secret isn't just installing Nginx; it's tuning worker_processes and the file descriptor limits. Most default configs cap worker_connections at 1024 per worker. On a CoolVDS instance we have enough RAM to push total capacity past 65,000 concurrent connections; the effective ceiling is worker_processes times worker_connections, bounded by worker_rlimit_nofile.
worker_processes auto;           # one worker per CPU core
worker_rlimit_nofile 100000;     # raise the per-process file descriptor ceiling

events {
    worker_connections 4096;     # per worker; total capacity = workers x connections
    use epoll;                   # the right event model on modern Linux
    multi_accept on;             # accept every pending connection per wakeup
}
Pro Tip: If your upstream application servers (PHP-FPM or Python/uWSGI) are on the same private network (which they are in our Oslo datacenter), enable keepalive connections to upstream. The TCP handshake overhead adds up when you are doing 5,000 requests per second.
upstream backend {
    server 10.0.0.5:8080;
    server 10.0.0.6:8080;
    keepalive 64;    # idle keepalive connections held open per worker
}
# For keepalive to actually take effect, the proxy location block also needs:
#   proxy_http_version 1.1;
#   proxy_set_header Connection "";
2. The Asynchronous Worker Pattern
The biggest mistake I see in PHP and Ruby apps is processing data during the web request. If a user uploads an image, do not resize it while the browser spins! This is where the "Serverless" concept of background workers actually makes sense—but you should run it yourself using Redis 2.8.
Redis 2.8 (released just a few months ago) is rock solid. We use it as a broker. The web server pushes a job ID to a list, and a background worker pops it off.
Producer (PHP):
// Push the job onto the queue; the web request returns immediately
$redis->lpush('image_queue', json_encode(['id' => 123, 'path' => '/tmp/img.jpg']));
Consumer (a Python worker running as a separate process):
import redis
import json

r = redis.Redis(host='localhost', port=6379, db=0)

def process_image(path):
    # stand-in for the real work: resize, thumbnail, upload
    pass

while True:
    # BRPOP blocks until an item is available - zero CPU wasted on polling
    queue, data = r.brpop('image_queue')
    job = json.loads(data)
    process_image(job['path'])
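One caveat before you ship that: BRPOP removes the job the instant the worker receives it, so a crash mid-resize loses the job silently. When that matters, Redis offers BRPOPLPUSH, which atomically parks the item on a processing list until you acknowledge it. A sketch of that variant (the processing list name is my own convention; argument order is redis-py 2.x):

import redis
import json

r = redis.Redis(host='localhost', port=6379, db=0)

while True:
    # Atomically move the job onto a processing list before working on it
    data = r.brpoplpush('image_queue', 'image_queue:processing', timeout=0)
    job = json.loads(data)
    process_image(job['path'])  # the same worker function as above
    # Acknowledge: remove the finished job from the processing list
    r.lrem('image_queue:processing', data)

A janitor process can re-queue anything that lingers on the processing list past a sane deadline.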
Why run this on CoolVDS? Because Redis is entirely in-memory. If your "Cloud" provider overcommits RAM and starts swapping to disk, your Redis performance falls off a cliff. We guarantee dedicated RAM allocation.
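You don't have to take a provider's word for it, either. Redis reports its own memory health: a mem_fragmentation_ratio well below 1.0 means the OS has pushed Redis pages out to swap. A quick check from Python:

import redis

r = redis.Redis(host='localhost', port=6379, db=0)
info = r.info()
# Well below 1.0: resident memory < allocated memory, i.e. Redis is being swapped
print('fragmentation ratio: %.2f' % info['mem_fragmentation_ratio'])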
3. The Persistence Layer: PCIe Flash vs. Spinning Rust
This is the year 2014. If your database is running on 7200 RPM SATA drives, you are negligent. The I/O wait times on spinning disks are unacceptable for modern web apps.
Many providers claim "SSD Caching," which is just a hybrid trick. Real performance comes from Pure SSD or the emerging PCIe Flash storage (often seen in Fusion-io cards). At CoolVDS, our storage backend handles random write operations (IOPS) 50x faster than standard SAS drives.
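Don't take those numbers on faith, ours included; measure. The standard tool is fio, but even a crude probe exposes the gap between flash and spinning rust. A rough sketch timing synced 4 KiB random writes (file name and sizes are arbitrary, and the page cache makes this an approximation; fio with direct=1 is the honest benchmark):

# iops_probe.py -- crude random-write probe; use fio for real numbers
import os, random, time

PATH = 'probe.bin'            # scratch file on the volume under test
SIZE = 256 * 1024 * 1024      # 256 MB working set
BLOCK = 4096                  # 4 KiB writes
WRITES = 2000

with open(PATH, 'wb') as f:
    f.truncate(SIZE)          # preallocate a sparse file

fd = os.open(PATH, os.O_WRONLY)
start = time.time()
for _ in range(WRITES):
    os.lseek(fd, random.randrange(SIZE // BLOCK) * BLOCK, os.SEEK_SET)
    os.write(fd, b'\0' * BLOCK)
    os.fsync(fd)              # force it to the device, not the page cache
elapsed = time.time() - start
os.close(fd)
os.remove(PATH)
print('%.0f synced 4K writes/sec' % (WRITES / elapsed))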
To really utilize this, you must tune your Linux kernel I/O scheduler. On our images, we often switch from cfq to deadline or noop for SSDs, because the drive handles the sorting logic internally.
# Check your scheduler
cat /sys/block/sda/queue/scheduler
[noop] deadline cfq

# Switch at runtime; add elevator=deadline to the kernel boot line to persist
echo deadline > /sys/block/sda/queue/scheduler
The Norwegian Context: Data Sovereignty
We are seeing tighter scrutiny from Datatilsynet (The Norwegian Data Protection Authority). While the Safe Harbor agreement currently allows data transfer to the US, many Norwegian enterprises are rightly nervous about the NSA revelations from last year.
Keeping your architecture local isn't just about latency (though pinging Oslo from Oslo in 2ms is nice); it's about the Personal Data Act (Personopplysningsloven). When you use a massive US-based PaaS, you often don't know exactly where your data sits physically. Is it in Dublin? Is it in Virginia? With CoolVDS, your data sits in a rack you can drive to.
KVM vs. Containers: The Isolation War
There is a lot of buzz about LXC (Linux Containers) right now, and a new tool called Docker is making waves (version 0.8 just dropped). It's exciting tech, but for production today, KVM (Kernel-based Virtual Machine) is the king of isolation.
| Feature | Shared PaaS (Containers) | CoolVDS (KVM) |
|---|---|---|
| Kernel | Shared (Security risk) | Dedicated (Customizable) |
| Neighbors | "Noisy" (CPU Steal) | Isolated Resources |
| Filesystem | Ephemeral (Disappears on reboot) | Persistent Block Storage |
If a neighbor on a shared PaaS gets DDoS'd, your latency spikes. On KVM, their kernel panic is their problem, not yours. We use KVM to ensure that when you buy 4 vCPUs, you get 4 vCPUs, not "burstable credits" that run out when you need them most.
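CPU steal isn't an abstraction; the kernel counts it for you. It is the eighth counter on the cpu line of /proc/stat (the %st column in top). A small sampler:

# steal_check.py -- sample CPU steal over five seconds (Linux only)
import time

def read_counters():
    with open('/proc/stat') as f:
        fields = f.readline().split()  # 'cpu' user nice system idle iowait irq softirq steal ...
    values = [int(v) for v in fields[1:]]
    steal = values[7] if len(values) > 7 else 0  # older kernels may omit the field
    return steal, sum(values)

s1, t1 = read_counters()
time.sleep(5)
s2, t2 = read_counters()
print('steal: %.2f%%' % (100.0 * (s2 - s1) / (t2 - t1)))

On a properly provisioned KVM slice that figure stays at zero. If it climbs whenever your traffic does, your provider sold the same core twice.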
Conclusion
The pattern for 2014 is clear: decouple your services, use asynchronous queues for heavy lifting, but own the infrastructure. Don't rent a black box API and hope it scales. Build on a foundation of transparent, high-speed compute.
If you are ready to stop debugging latency spikes and start shipping faster code, it's time to upgrade your metal.
Deploy a pure-SSD KVM instance on CoolVDS today. Your Nginx config will thank you.