The 'Serverless' Mirage: Building Event-Driven Microservices That Actually Work

It has been exactly one month since Amazon announced "Lambda" at re:Invent, and the blogosphere is already drowning in hot takes. The promise? Run code without provisioning servers. The reality? You are renting execution time on someone else's black-box infrastructure while surrendering control over your stack.

I’ve been managing systems from Oslo to Tromsø for over a decade, and if there is one thing I’ve learned, it’s that abstraction always comes with a tax. Sometimes that tax is latency. Sometimes it’s cost. And frequently, here in Norway, it’s legal compliance.

While the concept of "Function as a Service" (FaaS) is intriguing, the Battle-Hardened DevOps approach in 2014 requires pragmatism. We don't need to wait for AWS to mature. We can build scalable, decoupled, "serverless-style" architectures right now using tools we trust: Docker, message queues, and raw KVM performance.

The Pattern: Decoupling Compute from State

The core philosophy behind this new wave of architecture isn't about deleting servers; it's about decoupling. In a traditional LAMP stack, your web server handles the request, processes the image, sends the email, and writes to the DB. If one part locks up, the whole user experience degrades.

To fix this, we break the application into:

  1. The Producer (Web Tier): Accepts the request and offloads it instantly.
  2. The Broker (Message Queue): Holds the state.
  3. The Consumer (Worker Tier): Processes the job asynchronously.

This is how we scale. And you don't need a proprietary cloud function to do it. You need a solid VPS running a queue.
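
To make the pattern concrete, here is a minimal sketch using redis-py and a plain Redis list as the queue. The queue name and broker address are just examples; the Celery setup in Step 2 replaces this hand-rolled plumbing.

# producer.py - runs in the web tier; enqueue and return immediately
import json
import redis

r = redis.StrictRedis(host='10.0.0.5', port=6379, db=0)

def submit_job(data_id):
    # LPUSH is O(1): the HTTP request never waits for the heavy work
    r.lpush('jobs', json.dumps({'data_id': data_id}))

# worker.py - runs in the worker tier; block until a job arrives
import json
import redis

r = redis.StrictRedis(host='10.0.0.5', port=6379, db=0)

while True:
    # BRPOP blocks until a job is available, so idle workers cost almost nothing
    _, raw = r.brpop('jobs')
    job = json.loads(raw)
    print("Processing %s" % job['data_id'])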

Step 1: The Broker (Redis vs. RabbitMQ)

For most high-performance setups I deploy in Norway, I lean towards Redis for speed or RabbitMQ for reliability. Since we are obsessing over latency, let's look at a Redis configuration optimized for a high-throughput environment.

Standard Redis configs are often too conservative. If you are running this on a CoolVDS instance with SSDs, you can push the memory limits. Here is a snippet from a redis.conf I deployed last week for a media processing client:

# /etc/redis/redis.conf

# Snapshotting: Save less frequently to reduce I/O blocking on busy workers
save 900 1
save 300 10

# Maximize connection limits for microservices
maxclients 10000

# TCP Keepalive (Critical for long-lived worker connections)
tcp-keepalive 60

# Memory Policy: Don't crash, just evict old volatile keys
maxmemory-policy volatile-lru

Pro Tip: Never expose your Redis port (6379) to the public internet. Use iptables to restrict access strictly to your worker nodes' internal IPs (a minimal ruleset is sketched below). On CoolVDS, we use the private backend network for this, which avoids metered bandwidth and keeps latency at sub-millisecond levels.
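
A minimal ruleset for that restriction, assuming the workers sit on a 10.0.0.0/24 private subnet (adjust to your own layout):

# Accept Redis traffic only from the private worker subnet, drop everything else
iptables -A INPUT -p tcp --dport 6379 -s 10.0.0.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 6379 -j DROP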

Step 2: The Worker (Dockerizing the Logic)

Docker hit version 1.3 recently, and it has fundamentally changed how I ship code. Instead of wrestling with Python virtualenvs or Ruby gem versions on the host, we containerize the worker.

Here is a practical Python worker pattern using Celery. This mimics "serverless" functions—it sits idle until a job arrives, executes it, and sleeps. But unlike Lambda, you control the timeout limits and the libraries.

# tasks.py
from celery import Celery
import time

# Connect to the Redis broker running on our Data Node (private network)
app = Celery('tasks', broker='redis://10.0.0.5:6379/0')

@app.task
def crunch_data(data_id):
    # Note: plain string formatting, since the worker image below runs Python 2.7
    print("Processing %s..." % data_id)
    # Simulate heavy CPU load
    time.sleep(5)
    return "Done with %s" % data_id
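
Dispatching work from the web tier is then a single call: delay() pushes the job onto the Redis broker and returns immediately. A quick sketch, assuming your web code can import the same tasks module:

# producer side, e.g. inside a Flask or Django view
from tasks import crunch_data

result = crunch_data.delay("video-42")  # returns an AsyncResult instantly
print("Queued job %s" % result.id)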

To run this efficiently, we don't just start a script. We use Docker to ensure environment consistency between your dev laptop and the production server.

# Dockerfile
FROM python:2.7-slim

WORKDIR /app
COPY . /app

RUN pip install celery redis

# Run the worker with concurrency matching your CoolVDS CPU cores
CMD ["celery", "-A", "tasks", "worker", "--loglevel=info", "--concurrency=4"]
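
Building and launching the worker is then two commands (the image name is just an example):

docker build -t celery-worker .
docker run -d --name worker1 --restart=always celery-worker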

The Infrastructure Reality: Why "Cloud" Isn't Enough

Here is the controversy: Public Cloud providers (like AWS or Azure) often sell you "vCPUs" that are heavily throttled. If your worker wakes up to process a heavy video transcode, you might get hit with "CPU Steal" because your noisy neighbor on the host machine is mining Bitcoin.

This is where CoolVDS differs significantly.

Feature         | Typical Shared Hosting / OpenVZ | CoolVDS (KVM)
----------------|---------------------------------|---------------------------------
Isolation       | Shared kernel (insecure)        | Full hardware virtualization
Storage I/O     | Shared, unpredictable latency   | Dedicated SSD/NVMe throughput
Docker support  | Often broken or hacky           | Native support (custom kernels)

When running a message queue architecture, disk I/O latency is the killer. If your queue cannot persist messages to disk fast enough because the hypervisor is choked, your "microservices" grind to a halt. We use KVM to ensure that the resources you pay for are physically reserved for you.
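
If the queue itself must survive a crash, Redis AOF persistence is the usual compromise between durability and I/O load. A sketch of the relevant redis.conf directives, assuming you can tolerate up to one second of lost jobs:

# Append-only file: log every write, fsync once per second
appendonly yes
appendfsync everysec

# Avoid fsync storms while the AOF is being rewritten in the background
no-appendfsync-on-rewrite yes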

The Norwegian Context: Data Sovereignty

We cannot ignore the elephant in the room: privacy. Between the Snowden leaks last year and the ongoing scrutiny of the "Safe Harbor" agreement, relying on US-based cloud giants is becoming a legal minefield for Norwegian companies.

The Norwegian Data Inspectorate (Datatilsynet) is increasingly strict about where personal data (personopplysninger) resides. If you use a US-managed "serverless" platform, you rarely know exactly where that code executes or where the temp files are stored.

By hosting your worker nodes on CoolVDS in our Oslo data center, you guarantee:

  • Low Latency: Direct peering with NIX (Norwegian Internet Exchange) means <2ms ping to most Norwegian ISPs.
  • Compliance: Your data physically remains in Norway, simplifying adherence to the Personal Data Act (Personopplysningsloven).
  • Predictability: No hidden bandwidth bills or API gateway fees.

Optimizing the Kernel for Heavy Workloads

If you are deploying this architecture today, you need to tune the Linux kernel. Default settings are not designed for thousands of concurrent micro-connections. Update your /etc/sysctl.conf with these values:

# Increase system file descriptor limit
fs.file-max = 100000

# Allow more connections to be handled simultaneously
net.core.somaxconn = 4096

# Reuse sockets in TIME_WAIT state for new connections
net.ipv4.tcp_tw_reuse = 1

# Fast recycling (be careful with this one behind NAT, but safe inside our VLAN)
net.ipv4.tcp_tw_recycle = 1

Run sysctl -p to apply. One caveat: fs.file-max is only the system-wide ceiling, so also raise the per-process nofile limit in /etc/security/limits.conf for the user running your workers. These tweaks alone can double the connection throughput of your worker nodes.

Conclusion

Serverless functions are an interesting concept for the future, but in 2014, your business needs reliability, not experiments. A well-architected cluster using Redis, Docker, and Python/Node.js offers you the same modularity as FaaS, without the cold-start penalties and with total cost control.

Don't let your infrastructure be a black box. Build it on iron you can trust.

Ready to deploy your worker cluster? Spin up a high-performance KVM instance on CoolVDS today and experience the stability of dedicated SSD resources.