The "Serverless" Mirage: Why Your Stack Still Needs Iron
It has been a few months since AWS declared Lambda generally available, and the hype train has left the station. The promise? Upload code, forget the OS, pay per 100 ms of execution. It sounds utopian. But as someone who has debugged production outages at 3 AM, I read "Serverless" and I see "loss of control."
If you are building for the Nordic market, routing every event through a data center in Ireland or Frankfurt adds tens of milliseconds of round-trip time before your code even runs. Worse, you are tying your application logic to a proprietary vendor API. If they hike the price or deprecate a runtime, you are stuck.
The smarter play for 2015? The "Serverless" architecture pattern, hosted on your own high-performance VPS.
The Architecture: Event-Driven Workers
You don't need a cloud giant to build decoupled systems. The core benefit of serverless is its event-driven nature, not the lack of servers. We can replicate this using Docker (currently v1.6) and a message broker. This gives you the scalability of microservices with the raw I/O performance of local infrastructure.
The Stack
- Queue: Redis (simple, fast) or RabbitMQ (robust).
- Compute: Docker containers running Node.js or Python.
- Infrastructure: KVM-based VPS (like CoolVDS) to prevent "noisy neighbor" CPU stealing.
Step 1: The Message Broker
Latency is the enemy. By hosting your Redis instance in Oslo (or as close to your user base as possible), you cut round-trip times (RTT) drastically compared to a trip to AWS `eu-west-1`.
Tune your redis.conf for the churn of a job queue. The default settings snapshot to disk whenever 10,000 keys change within a minute (save 60 10000), which is wasted I/O for data that is transient by design.
# /etc/redis/redis.conf
# Keep the relaxed save points and drop the default "save 60 10000" line;
# a transient queue does not need aggressive snapshotting
save 900 1
save 300 10
# Never silently evict queued jobs; return an error when maxmemory is hit
maxmemory-policy noeviction
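You can verify the latency claim from your own VPS. Here is a minimal sketch using the node_redis client; the IP below is a placeholder for your broker's private address:
var redis = require('redis');
var client = redis.createClient(6379, '10.0.0.5'); // placeholder: your broker's internal IP
client.on('ready', function () {
  // Time a single PING once the connection is established,
  // so the TCP handshake doesn't skew the number
  var start = process.hrtime();
  client.ping(function (err, reply) {
    if (err) throw err;
    var diff = process.hrtime(start);
    console.log('Redis RTT: ' + (diff[0] * 1e3 + diff[1] / 1e6).toFixed(2) + ' ms');
    client.quit();
  });
});
Run it from the worker host: single-digit milliseconds on a local private network, versus a long detour to `eu-west-1`.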
Step 2: The Worker Container
Instead of a Lambda function, we build a lightweight Docker container. This container does one thing: it listens to the queue. Using Node.js (0.12 or io.js), we can handle thousands of concurrent operations with low overhead.
Here is a basic worker pattern using the kue library:
var kue = require('kue');

// Connect over the private network; never expose Redis on a public interface
var jobs = kue.createQueue({
  redis: {
    host: '10.0.0.5', // internal IP on your private network
    port: 6379
  }
});

// Process one job type per worker; scale out by adding containers
jobs.process('image_resize', function (job, done) {
  var image = job.data.image;
  // Perform resizing logic here
  console.log('Processing ' + image);
  // Pass an Error to done() to flag the job as failed so kue can retry it
  done();
});
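The producing side is just as small. Any process on your network can enqueue work; here is a hypothetical upload handler pushing a job onto the same queue (the image field name is an assumption matching the worker above):
var kue = require('kue');
var jobs = kue.createQueue({
  redis: { host: '10.0.0.5', port: 6379 }
});

// Enqueue a resize job; an idle worker picks it up almost immediately
jobs.create('image_resize', {
  title: 'Resize uploaded avatar', // shows up in the kue dashboard
  image: 'avatar-42.png'
})
.attempts(3) // retry up to 3 times if the worker calls done(err)
.save(function (err) {
  if (err) console.error('Could not enqueue job:', err);
});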
Deploying the worker is trivial with Docker. You don't need a complex orchestration tool yet; a simple startup script works for most deployments under 50 nodes.
docker run -d --name worker-01 --restart=always -v /mnt/data:/data my-worker-image
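One thing worth wiring up before you lean on --restart=always: handle SIGTERM so that docker stop does not kill a worker mid-job. A minimal sketch, assuming a recent kue release that ships the shutdown() helper (check your version):
// docker stop sends SIGTERM, then SIGKILL after a grace period (10s by default)
process.on('SIGTERM', function () {
  // Stop accepting new jobs and give the active one up to 5s to finish
  jobs.shutdown(5000, function (err) {
    if (err) console.error('Shutdown error:', err);
    process.exit(0);
  });
});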
The Hardware Reality: Why KVM Matters
In a "Serverless" public cloud environment, your code runs on shared slices of hardware. You have zero guarantee of consistent CPU performance. This is called the "Noisy Neighbor" effect. If another tenant on that physical host spikes their usage, your function slows down.
This is where CoolVDS differs significantly. We use KVM (Kernel-based Virtual Machine) virtualization. Unlike the OpenVZ containers used by budget hosts, where every tenant shares one kernel, KVM boots your own kernel with dedicated RAM, so a neighbor cannot quietly eat into your allocation.
Pro Tip: When processing queues, Disk I/O is often the hidden bottleneck. Standard SATA SSDs top out around 500 MB/s. CoolVDS offers next-gen NVMe storage on select plans, which is practically mandatory if you are handling high-throughput message logging or database writes simultaneously.
Data Privacy: The Norwegian Context
We are still navigating the fallout of data privacy discussions in Europe. Safe Harbor is valid for now, but the winds are changing. Hosting your data processing pipeline on servers physically located in Norway (or under strict European jurisdiction) simplifies compliance with the recommendations of Datatilsynet, the Norwegian Data Protection Authority. You know exactly where the drive is spinning.
Conclusion
Don't confuse the architecture with the vendor. You can build resilient, event-driven, scalable systems today without handing the keys to a US cloud provider. By combining the flexibility of Docker with the raw power of CoolVDS KVM instances, you get the best of both worlds: modern architecture and bare-metal performance.
Ready to lower your latency? Deploy a KVM instance with CoolVDS in under 55 seconds and take back control of your stack.