Serverless Patterns on Bare Metal: Escaping Vendor Lock-in with FaaS and Docker Swarm

Let's get one thing straight: "Serverless" is a marketing term. There are always servers. The only question is whether you control them, or whether you're renting them by the millisecond at a 400% markup while praying your cold starts don't time out your API gateway.

It is April 2017. AWS Lambda has been out for a few years, and Azure Functions is trying to catch up. The hype cycle is peaking. I recently consulted for a fintech startup in Oslo that went "all-in" on public cloud serverless. Their bill was unpredictable, and their latency to the Norwegian banking infrastructure (NICS) was unacceptable because their functions were spinning up in an Irish data center. The physics of light doesn't care about your cloud provider's SLA.

For serious engineering teams, the trend isn't just consuming FaaS (Function-as-a-Service); it's hosting it. By running a FaaS framework on your own high-performance VPS, you gain fixed costs, data sovereignty (crucial with GDPR enforcement looming next year), and sub-millisecond control over your runtime environment. Here is how we build it.

The Architecture: FaaS on Docker Swarm

We don't need the complexity of Kubernetes 1.6 for every project. For a robust, self-healing FaaS cluster, Docker Swarm combined with a framework like OpenFaaS (which is gaining serious traction on GitHub right now) is the pragmatic choice. This setup allows you to deploy functions as Docker containers without managing the underlying boilerplate.

To make this work, you need hardware that doesn't choke on I/O. When you trigger 500 functions simultaneously, you are creating 500 containers. If you are on a budget host using spinning rust (HDD) or shared resources, your system will hang. This is why we deploy on CoolVDS NVMe KVM instances. We need the KVM virtualization to ensure our Docker daemon has full kernel access without the "noisy neighbor" limits of OpenVZ.

1. The Infrastructure Setup

We start with three CoolVDS instances running Ubuntu 16.04 LTS. One manager node, two worker nodes. Private networking is essential here for security.

# On the Manager Node (CoolVDS-01)
$ docker swarm init --advertise-addr 10.0.0.1
Swarm initialized: current node (dxn1...) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-49nj1... 10.0.0.1:2377
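On each worker node, paste the join command the manager printed (the truncated token is whatever your own init emitted), then confirm membership from the manager:

```shell
# On each Worker Node (CoolVDS-02, CoolVDS-03)
$ docker swarm join --token SWMTKN-1-49nj1... 10.0.0.1:2377
This node joined a swarm as a worker.

# Back on the manager: expect one Leader and two Ready workers
$ docker node ls
```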

Once your swarm is active, we deploy the FaaS stack. We are looking for a synchronous invocation pattern where the Gateway routes traffic directly to the function container.

2. Deploying the FaaS Stack

We use a compose file format (v3) to define the stack. This defines the API Gateway, the Prometheus instance for metrics (so we can auto-scale), and the AlertManager.

docker-compose.yml:

version: "3"
services:
  gateway:
    image: functions/gateway:0.7.0
    ports:
      - "8080:8080"
    networks:
      - functions
    deploy:
      placement:
        constraints:
          - 'node.role == manager'

  prometheus:
    image: functions/prometheus:latest
    command: "-config.file=/etc/prometheus/prometheus.yml -storage.local.path=/prometheus -storage.local.memory-chunks=10000"
    ports:
      - "9090:9090"
    networks:
      - functions

networks:
  functions:
    driver: overlay

Deploy this to your CoolVDS cluster:

$ docker stack deploy func --compose-file docker-compose.yml

Pro Tip: Monitor your iowait during deployment. If it spikes above 5%, your storage is the bottleneck. This is common on standard SSDs. We strictly use the NVMe storage tiers at CoolVDS because high random read/write speeds are required when Prometheus is scraping metrics from hundreds of short-lived containers.
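Before writing any functions, it is worth confirming both services converged. The `func_` prefix comes from the stack name used above; the `/system/functions` listing endpoint should return an empty JSON array at this point:

```shell
# Both services should report REPLICAS 1/1
$ docker service ls

# Quick smoke test: the gateway should answer on port 8080
$ curl -s http://10.0.0.1:8080/system/functions
```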

3. Writing a Python Function

Forget uploading zip files to a dashboard. We treat functions as Docker images. Here is a simple handler in Python using the FaaS watchdog pattern.

# handler.py
def handle(req):
    """handle a request to the function
    Args:
        req (str): request body
    """
    return req + " processed by CoolVDS node."

We build this using the CLI and push it to our private registry. The beauty here is that you can include binary dependencies (like imagemagick or ffmpeg) in your Dockerfile, something that is a nightmare on AWS Lambda.
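A minimal Dockerfile for that handler might look like the sketch below. The watchdog release URL and version are assumptions (pin whatever release you actually tested), and `index.py` is a hypothetical shim that reads the request body from STDIN and prints the result of `handle()`:

```dockerfile
FROM python:2.7-alpine

# fwatchdog is a tiny HTTP server that forks the process named in
# fprocess for every request, piping the body via STDIN/STDOUT.
# The version and URL here are illustrative.
ADD https://github.com/alexellis/faas/releases/download/0.6.0/fwatchdog /usr/bin/fwatchdog
RUN chmod +x /usr/bin/fwatchdog

WORKDIR /app
COPY handler.py index.py ./

# Binary dependencies go here, e.g.:
# RUN apk add --no-cache imagemagick ffmpeg

ENV fprocess="python index.py"
CMD ["fwatchdog"]
```

Because the runtime is just a Docker image, the imagemagick/ffmpeg line above is the whole story; there is no layer-size lottery or zip-file packaging step.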

Data Sovereignty and GDPR

We cannot ignore the legal landscape. The General Data Protection Regulation (GDPR) enforcement date is set for May 2018. That is next year. If you are processing personal data of Norwegian citizens, relying on US-based cloud providers adds a layer of compliance complexity (Privacy Shield is shaky).

By hosting your serverless infrastructure on a VPS in Norway, you simplify compliance. You know exactly where the physical drive is located. Datatilsynet (The Norwegian Data Protection Authority) looks favorably on clear data lineage.

Optimizing Nginx for Low Latency

You shouldn't expose the FaaS gateway directly. Put Nginx in front of it. On a CoolVDS instance, we tune the network stack to handle the burst traffic typical of serverless workloads. Default Linux settings are too conservative.

/etc/sysctl.conf adjustments:

# Increase the range of ephemeral ports
net.ipv4.ip_local_port_range = 1024 65535

# Allow reusing sockets in TIME_WAIT state for new connections
net.ipv4.tcp_tw_reuse = 1

# Increase max backlog for incoming connections
net.core.somaxconn = 4096

# Fast Open can reduce latency by one RTT
net.ipv4.tcp_fastopen = 3

Apply these with sysctl -p. These settings are critical when your functions are returning results in 20ms. You don't want the OS blocking connections.
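The Nginx side is a plain reverse proxy in front of the gateway. A minimal sketch (server name, certificate paths, and timeouts are placeholders; tune to your workload):

```nginx
upstream faas_gateway {
    server 127.0.0.1:8080;
    # Reuse upstream connections instead of opening one per request
    keepalive 64;
}

server {
    listen 443 ssl http2;
    server_name faas.example.com;

    ssl_certificate     /etc/nginx/ssl/faas.crt;
    ssl_certificate_key /etc/nginx/ssl/faas.key;

    location / {
        proxy_pass http://faas_gateway;
        # HTTP/1.1 with an empty Connection header is required
        # for upstream keepalive to work
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        # Functions should answer fast; fail fast instead of queueing
        proxy_read_timeout 30s;
    }
}
```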

The Economic Argument

Let's talk TCO. A public cloud function with 512MB RAM running for 100ms millions of times a month gets expensive. But the hidden cost is the NAT Gateway and data egress fees.

With a fixed-cost VPS, you pay for the pipe and the core. If you run a CoolVDS instance with 4 vCPUs and 8GB RAM, you can run thousands of invocations per minute for a flat monthly fee. No surprises. For high-throughput workloads—like image processing or real-time event logging—the ROI on "Self-Hosted Serverless" is typically 3-4 months.
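As a back-of-the-envelope sketch of that TCO claim (every constant below is an illustrative assumption, not a quote; plug in your own numbers):

```python
# Rough monthly cost comparison: public-cloud FaaS vs. a flat-fee VPS.
# All prices are illustrative assumptions.

INVOCATIONS = 10_000_000      # per month
GB_SECOND_PRICE = 0.00001667  # compute price per GB-second
REQUEST_PRICE = 0.20 / 1e6    # per-request fee
EGRESS_PER_GB = 0.09          # data transfer out
NAT_HOURLY = 0.045            # managed NAT gateway, per hour
HOURS = 730                   # hours in a month

duration_s = 0.1              # 100 ms per invocation
memory_gb = 0.5               # 512 MB
response_gb = 50e3 / 1e9      # ~50 KB per response

compute = INVOCATIONS * duration_s * memory_gb * GB_SECOND_PRICE
requests = INVOCATIONS * REQUEST_PRICE
egress = INVOCATIONS * response_gb * EGRESS_PER_GB
nat = NAT_HOURLY * HOURS

faas_total = compute + requests + egress + nat
vps_flat = 40.0               # hypothetical 4 vCPU / 8 GB NVMe instance

print("FaaS: $%.2f  VPS flat: $%.2f" % (faas_total, vps_flat))
```

Note where the money goes: the compute line itself is small, while egress and the NAT gateway dominate, which is exactly the "hidden cost" point above.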

Why KVM Matches Serverless

Container isolation relies on kernel namespaces and cgroups. In a nested environment, or a shared kernel environment (like LXC/OpenVZ), you often hit limits on the number of processes or open files (ulimit). KVM provides a dedicated kernel.

When we benchmarked Docker Swarm on CoolVDS KVM versus a competitor's container-based VPS, the difference was stability. The container-based VPS would kill the Docker daemon under load due to resource contention. The KVM instance remained stable because the resources were dedicated, not oversold.

Next Steps

Serverless is an architecture, not a tariff plan. Take back control of your infrastructure. Start by spinning up a KVM instance and installing Docker.

If you want to test the latency difference yourself, deploy a CoolVDS NVMe instance in Oslo. Run a simple ab (ApacheBench) test against it and against a Frankfurt-based cloud function. The numbers will speak for themselves.
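Something like the following works, with both hostnames obviously placeholders for your own endpoints:

```shell
# 1,000 requests, 50 concurrent, against a function on the Oslo VPS
$ ab -n 1000 -c 50 http://oslo-faas.example.com:8080/function/echo

# Same load against the Frankfurt cloud endpoint
$ ab -n 1000 -c 50 https://xyz.execute-api.eu-central-1.amazonaws.com/prod/echo

# Compare "Time per request" and the percentile table at the bottom
```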