Decoupling the Monolith: Event-Driven Architectures and the Rise of "Serverless" in 2014

Let’s be honest. The buzz coming out of Las Vegas last month regarding AWS and their new "Lambda" service is interesting, but for those of us managing production workloads in Oslo or Bergen, it feels a bit like science fiction. They are calling it "Serverless." While the idea of uploading code without provisioning a server is seductive, the reality for a pragmatic DevOps engineer in late 2014 is different.

We don't need magic black boxes hosted in US data centers; we need control, performance, and compliance with the Norwegian Personal Data Act (Personopplysningsloven). However, the pattern behind this buzz—decoupling your heavy processing from your web frontend—is absolutely critical. If your Magento store or custom PHP application is still handling image resizing or PDF generation within the main request thread, you are doing it wrong.

In this post, we will tear down the "Serverless" concept into what it really is: Event-Driven Microservices. I will show you how to build a robust, asynchronous worker architecture using tools available today like RabbitMQ and Supervisord, running on high-performance KVM instances right here in Norway.

The Problem: Synchronous Blocking is the Enemy

I recently audited a client's system running on a legacy shared hosting platform. Their checkout process took 8 seconds. Why? Because they were sending transactional emails and generating invoices synchronously, before the HTTP 200 OK response ever reached the browser. Under high load, their Apache workers starved and the site went down.

The solution isn't just "more RAM." It's architecture. You need to offload these tasks to background workers. In the cloud buzzword bingo, this is the precursor to Serverless functions. But you can build it yourself, cheaper and faster, on a CoolVDS instance.

The Stack: RabbitMQ and Supervisord

To implement this pattern, we need three components:

  1. The Producer: Your web app (PHP, Python, Node.js) pushing a "job" to a queue.
  2. The Broker: A message queue (RabbitMQ is the industry standard in 2014).
  3. The Consumer (Worker): A script that runs in the background, picks up jobs, and executes them.
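To see how thin the producer side really is, here is a minimal sketch in Python, assuming a queue named `invoice_generation` and the JSON payload shape consumed by the worker later in this post. The helper names (`make_job`, `publish_job`) are mine, not part of any framework:

```python
import json

def make_job(invoice_id, complexity):
    # The worker expects a JSON body with an 'id' and a
    # 'complexity' hint used to estimate processing time.
    return json.dumps({'id': invoice_id, 'complexity': complexity})

def publish_job(channel, body, queue='invoice_generation'):
    # The producer declares the same durable queue as the worker,
    # so whichever side starts first creates it.
    channel.queue_declare(queue=queue, durable=True)
    channel.basic_publish(exchange='', routing_key=queue, body=body)

# Wiring it up with pika (run this where RabbitMQ is reachable):
#
#   import pika
#   connection = pika.BlockingConnection(
#       pika.ConnectionParameters(host='localhost'))
#   publish_job(connection.channel(), make_job(42, complexity=3))
#   connection.close()
```

In production you would also pass `properties=pika.BasicProperties(delivery_mode=2)` to `basic_publish`, so the message itself is persisted to disk and survives a broker restart, matching the durable queue.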

Step 1: The Message Broker

First, we need a reliable message broker. Redis is great for caching, but for durable queues where message loss is unacceptable, I prefer RabbitMQ. On a Debian 7 (Wheezy) or Ubuntu 14.04 LTS instance, installation is straightforward. Do not run this on OpenVZ; you want the kernel isolation of KVM to ensure memory stability.

# Add the RabbitMQ repository for the latest version
echo 'deb http://www.rabbitmq.com/debian/ testing main' | sudo tee /etc/apt/sources.list.d/rabbitmq.list
wget -O- https://www.rabbitmq.com/rabbitmq-signing-key-public.asc | sudo apt-key add -

# Update and Install
sudo apt-get update
sudo apt-get install rabbitmq-server

# Enable the management plugin (essential for monitoring)
sudo rabbitmq-plugins enable rabbitmq_management
sudo service rabbitmq-server restart

Pro Tip: By default, the `guest` user can only access RabbitMQ via localhost. If you are building a cluster where your web server is on one VPS and your queue is on another (highly recommended for redundancy), create a dedicated vhost and user immediately.
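Setting up that vhost and user is a handful of `rabbitmqctl` commands. The names below are placeholders of mine; substitute your own (and obviously a stronger password):

```shell
# Create an isolated vhost and a dedicated user for the workers
sudo rabbitmqctl add_vhost /production
sudo rabbitmqctl add_user worker_user 'CHANGE_ME'

# Grant configure/write/read permissions on that vhost only
sudo rabbitmqctl set_permissions -p /production worker_user ".*" ".*" ".*"

# Once your own user is confirmed working, remove the default guest account
sudo rabbitmqctl delete_user guest
```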

Step 2: The "Serverless" Function (The Worker)

This is where the magic happens. Instead of a proprietary cloud function, we write a simple Python script using `pika`. This script acts as our permanent worker. It listens for events and processes them. It does one thing and does it well.

import pika
import time
import json

# Connection parameters
connection = pika.BlockingConnection(pika.ConnectionParameters(
        host='localhost'))
channel = connection.channel()

# Declare the queue. 'durable=True' ensures the queue survives a RabbitMQ restart.
channel.queue_declare(queue='invoice_generation', durable=True)

print ' [*] Waiting for invoice jobs. To exit press CTRL+C'

def callback(ch, method, properties, body):
    payload = json.loads(body)
    print " [x] Processing Invoice ID: %r" % payload['id']
    
    # Simulate heavy processing (PDF generation)
    time.sleep(payload['complexity'] * 0.5)
    
    print " [x] Done"
    # Acknowledge the message so RabbitMQ removes it
    ch.basic_ack(delivery_tag=method.delivery_tag)

# Fair dispatch: don't give a worker more than 1 message at a time
channel.basic_qos(prefetch_count=1)
channel.basic_consume(callback,
                      queue='invoice_generation')

channel.start_consuming()

Step 3: Keeping it Alive with Supervisord

A true "Serverless" platform restarts functions automatically. On a VPS, we use `supervisord`. It’s a process control system that ensures your worker scripts are always running. If the script crashes, Supervisord restarts it instantly.

[program:invoice-worker]
command=python /opt/workers/invoice_worker.py
directory=/opt/workers
autostart=true
autorestart=true
stderr_logfile=/var/log/invoice-worker_%(process_num)02d.err.log
stdout_logfile=/var/log/invoice-worker_%(process_num)02d.out.log
user=www-data
numprocs=4
process_name=%(program_name)s_%(process_num)02d

Notice `numprocs=4`. This automatically spawns 4 concurrent workers. This is how you scale. Need more throughput? Increase the number or spin up a new CoolVDS instance and deploy the workers there.
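Assuming you install Supervisor from the distribution packages and save the block above as `/etc/supervisor/conf.d/invoice-worker.conf` (the Debian/Ubuntu convention; adjust the path for your distro), rolling it out looks roughly like this:

```shell
sudo apt-get install supervisor

# Pick up the new program definition without restarting unrelated jobs
sudo supervisorctl reread
sudo supervisorctl update

# Verify that all four worker processes report RUNNING
sudo supervisorctl status
```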

Infrastructure Matters: The "Noisy Neighbor" Problem

You might ask, "Can't I run this on my cheap shared hosting?" No. Long-running processes like these workers are often killed by the OOM (Out of Memory) killer on shared environments or strictly limited by `ulimit` policies.

Furthermore, message queues require low latency. While standard SSDs are becoming common, the I/O wait time on oversubscribed platforms can cause message backlogs. This is where CoolVDS shines. We use KVM virtualization, which means your RAM is reserved, not shared. When your worker needs to crunch a 50MB PDF, the CPU cycles are yours.

Data Sovereignty in the Post-Snowden Era

We cannot ignore the elephant in the room. Since the Snowden leaks last year, relying on US-owned cloud infrastructure (even if they claim the datacenter is in Dublin or Frankfurt) is a legal grey area for Norwegian businesses handling sensitive customer data. The Safe Harbor agreement is under heavy scrutiny.

By building your event-driven architecture on VPS Norway infrastructure, you ensure that the physical disks storing your message queues and databases are located in Oslo. You report to Datatilsynet, not the NSA. For a CTO, that peace of mind is worth more than any "auto-scaling" gimmick.

Performance Tuning for 2015

As we approach 2015, the demand for real-time interaction is growing. If you are deploying this architecture, here are a few final optimizations specific to our environment:

Parameter            Recommended Value   Why?
vm.swappiness        10                  Prevents Linux from swapping out your worker processes to disk too early.
noatime mount flag   Enabled             Reduces disk I/O by not updating file access times. Crucial for high-throughput queues.
Network latency      < 2ms to NIX        Essential for fast API responses to the frontend.
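Applying the first two settings takes a minute. A sketch, assuming a single ext4 root filesystem on /dev/vda1 (check your own /etc/fstab before copying anything blindly):

```shell
# Persist the swappiness setting and apply it immediately
echo 'vm.swappiness = 10' | sudo tee /etc/sysctl.d/60-queue-tuning.conf
sudo sysctl -p /etc/sysctl.d/60-queue-tuning.conf

# For noatime, add it to the options column in /etc/fstab, e.g.:
#   /dev/vda1  /  ext4  defaults,noatime,errors=remount-ro  0  1
# then remount without a reboot:
sudo mount -o remount,noatime /
```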

Technologies like Docker are maturing rapidly (version 1.0 was just released this summer), and they fit perfectly onto this KVM model. You can containerize these workers for easier deployment, but the underlying metal must be solid.
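If you want to experiment with Docker today, a worker image can be as small as the sketch below. This is scaffolding of my own, assuming a Debian Wheezy base image and the `invoice_worker.py` script from Step 2:

```dockerfile
FROM debian:wheezy

# Python 2 plus pip, then the pika client library
RUN apt-get update && apt-get install -y python python-pip && \
    pip install pika

COPY invoice_worker.py /opt/workers/invoice_worker.py
WORKDIR /opt/workers

CMD ["python", "invoice_worker.py"]
```

Supervisord and Docker overlap here; if the worker runs in a container, let Docker's restart policy handle respawning instead.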

Conclusion

"Serverless" is a fascinating concept, but you don't need to wait for the future to build decoupled, resilient systems. By combining RabbitMQ, Python/Node, and Supervisord, you can build a processing engine that rivals any cloud offering in terms of power, while maintaining complete cost control and data privacy.

Don't let your application hang while waiting for a third-party API. Offload it.

Ready to build your worker cluster? Deploy a high-performance, KVM-based instance on CoolVDS today. With our local peering at NIX, your latency to Norwegian users is virtually zero.