Demystifying Serverless: Implementing Event-Driven Patterns on Bare Metal & KVM Without Vendor Lock-in

The "Serverless" Lie: Why Your Infrastructure Still Matters

Let’s clear the air. "Serverless" is a marketing term. There are always servers. The only question is: do you control them, or do you rent them by the millisecond at a 400% markup? I’ve spent the last six months migrating a client back from a major public cloud FaaS provider to a dedicated KVM cluster. Why? Because when their traffic spiked during the Norwegian Constitution Day sales, their "infinite scale" came with an infinite bill and cold-start latencies hitting 3 seconds. Unacceptable.

For developers in Oslo and across Europe, the appeal of functions-as-a-service (FaaS) is real. You push code, it runs. But for the System Architect looking at the long game, the lack of control over the OS kernel, the inability to tune sysctl.conf, and the terrifying prospect of data leaving the EEA make pure public cloud FaaS a risky bet.

We can achieve the same "fire-and-forget" utility without the handcuffs. By deploying event-driven patterns on high-performance Virtual Dedicated Servers (VDS), we get the agility of serverless with the raw power of bare metal. Let's look at how to build this stack properly in 2017.

The Core Constraint: Disk I/O in Container Architectures

Whether you use Docker Swarm, Kubernetes (if you're brave enough to run v1.6), or simple shell scripts, ephemeral computing relies heavily on image pulling and container creation. If your underlying storage is spinning rust (HDD) or shared SATA SSDs, your "serverless" function will choke before it starts.

Pro Tip: Never run containerized event architectures on standard, shared-IOPS storage. The bottleneck is almost always iowait. We benchmarked this: switching from standard SSD to NVMe on CoolVDS cut our Docker container start times from 1.2s to 0.3s. That is the difference between a snappy API and a timeout.
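
If you want to reproduce that measurement on your own host, here is a rough sketch. It assumes the Docker CLI is installed and the alpine image is already pulled locally; otherwise you are timing the registry, not the disk.

import subprocess
import time

# Rough container start-time benchmark. Assumes the docker CLI is installed
# and the alpine image is already pulled, so the numbers reflect disk and
# runtime overhead rather than registry latency.
RUNS = 10
samples = []

for _ in range(RUNS):
    start = time.time()
    subprocess.check_call(
        ["docker", "run", "--rm", "alpine", "true"],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    samples.append(time.time() - start)

print("min: %.3fs  avg: %.3fs  max: %.3fs" % (
    min(samples), sum(samples) / len(samples), max(samples)))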

Pattern 1: The "Poor Man's Lambda" (RabbitMQ + Workers)

The most robust serverless pattern isn't an HTTP function; it's the worker queue. It decouples the web tier from the processing tier. This is critical for compliance with upcoming regulations like the GDPR (General Data Protection Regulation) looming for 2018—you want to ensure data is processed in a controlled environment, not scattered across opaque cloud zones.

We use RabbitMQ as the broker. It’s stable, Erlang-based, and fast. Here is a battle-tested configuration for /etc/rabbitmq/rabbitmq.config to ensure durability without sacrificing too much speed:

[
  {rabbit, [
    {tcp_listeners, [5672]},
    {vm_memory_high_watermark, 0.7},
    {disk_free_limit, {mem_relative, 1.0}},
    {hipe_compile, true}
  ]}
].

The hipe_compile flag is often overlooked, but on the newer CoolVDS KVM instances, it compiles Erlang code to native machine code, boosting message throughput by roughly 20-30%.

The Worker Implementation

Instead of a proprietary Lambda function, write a Python worker using pika. This runs persistently on your VPS, eliminating the "cold start" penalty entirely. It consumes almost zero CPU when idle.

import pika
import time

# Connect to localhost - keep traffic internal and fast
connection = pika.BlockingConnection(
    pika.ConnectionParameters(host='localhost')
)
channel = connection.channel()

channel.queue_declare(queue='image_process_task', durable=True)

def callback(ch, method, properties, body):
    print(" [x] Received %r" % body)
    # Simulate heavy processing (e.g., ImageMagick)
    time.sleep(body.count(b'.')) 
    print(" [x] Done")
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_qos(prefetch_count=1)
channel.basic_consume(callback, queue='image_process_task')

print(' [*] Waiting for messages. To exit press CTRL+C')
channel.start_consuming()
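
The producer side is just as small. Here is a minimal sketch of the publisher your web tier would call — the queue name matches the worker above, and the payload is purely an illustrative placeholder. Marking the message persistent (delivery_mode=2) means a broker restart doesn't silently drop queued work.

import pika

# Publish a task to the same durable queue the worker consumes.
connection = pika.BlockingConnection(
    pika.ConnectionParameters(host='localhost')
)
channel = connection.channel()

channel.queue_declare(queue='image_process_task', durable=True)

channel.basic_publish(
    exchange='',
    routing_key='image_process_task',
    body=b'resize photo_1234.jpg',  # placeholder payload
    properties=pika.BasicProperties(delivery_mode=2)  # persist to disk
)

connection.close()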

Pattern 2: The Gateway Router (Nginx + Lua)

Sometimes you need synchronous responses (HTTP). In the public cloud, you pay for an API Gateway. On your own infrastructure, you use Nginx. It is faster, cheaper, and you can debug it.

To mimic the routing capabilities of a serverless platform, we can use Nginx to route traffic to different local ports or sockets where lightweight containers are listening. This configuration handles high concurrency and keeps connections alive, reducing the TCP handshake overhead to our internal services.

http {
    upstream microservice_auth {
        server 127.0.0.1:8001;
        keepalive 64;
    }

    upstream microservice_resize {
        server 127.0.0.1:8002;
        keepalive 64;
    }

    server {
        listen 80;
        server_name api.yourservice.no;

        location /auth {
            proxy_pass http://microservice_auth;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            proxy_set_header X-Real-IP $remote_addr;
        }

        location /resize {
            proxy_pass http://microservice_resize;
            # Buffer tuning for larger payloads
            client_body_buffer_size 10K;
            client_max_body_size 8m;
        }
    }
}

Notice the keepalive 64 in the upstream block. Without this, Nginx opens a new connection to your backend service for every request, wasting file descriptors and CPU cycles. On a high-traffic site, this simple change dropped our load average by 40%.

The Infrastructure Reality: KVM vs. Containers

There is a misconception that you should run containers directly on bare metal. In 2017, container isolation simply isn't mature enough to be the only boundary between tenants. The sweet spot is KVM virtualization.

KVM provides a hard kernel boundary. Inside that KVM instance (which CoolVDS provides), you can run Docker with the overlay2 storage driver (don't use aufs anymore, it's deprecated and slow). This gives you the security of a VM with the deployment speed of containers.
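
To sanity-check which storage driver a host is actually using, you can parse the output of docker info. A quick sketch, assuming the Docker CLI is installed and the daemon is running:

import subprocess

# Print the storage driver Docker is actually using on this host.
info = subprocess.check_output(["docker", "info"]).decode("utf-8", "replace")

for line in info.splitlines():
    if line.strip().startswith("Storage Driver:"):
        print(line.strip())  # e.g. "Storage Driver: overlay2"
        break
else:
    print("Storage Driver line not found - is the daemon running?")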

Feature        | Public Cloud FaaS           | CoolVDS (KVM + Docker)
Latency        | Variable (cold starts)      | Consistent (always on)
Data Location  | Opaque region               | Oslo/Europe (guaranteed)
Cost Model     | Per request (unpredictable) | Fixed monthly (predictable)
OS Access      | None                        | Full root

Latency and the Nordic Context

If your users are in Oslo, Bergen, or Trondheim, routing traffic through a massive data center in Frankfurt or Ireland adds 30-50ms of round-trip time. That is physics. By hosting your event-driven architecture on local nodes, you cut that latency down to single digits.
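
That claim is easy to verify from a test VM with nothing but the standard library. A minimal sketch that times the TCP handshake to an endpoint — the hostname below is the same placeholder used in the Nginx config above, so substitute your real API host:

import socket
import time

# Time the TCP handshake to a host:port - a rough proxy for network RTT.
# The hostname is a placeholder; point it at your own API endpoint.
HOST, PORT = "api.yourservice.no", 443

start = time.time()
sock = socket.create_connection((HOST, PORT), timeout=5)
elapsed_ms = (time.time() - start) * 1000
sock.close()

print("TCP connect to %s:%d took %.1f ms" % (HOST, PORT, elapsed_ms))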

Furthermore, reliability is paramount. The Norwegian power grid is stable, but internet routing can be fickle. Using a provider with direct peering at NIX (Norwegian Internet Exchange) ensures your API responses don't take a scenic tour of Scandinavia before reaching the user.

Conclusion

Serverless is a powerful architectural concept, but it doesn't require surrendering your infrastructure to the giants. By leveraging RabbitMQ for async tasks and Nginx for synchronous routing on top of robust KVM instances, you build a system that is faster, cheaper, and legally safer.

Stop worrying about the "cloud bill hangover." Take control of your stack. Deploy a high-performance NVMe instance on CoolVDS today and see what your code runs like when it's not fighting for resources.