Escaping the Cloud Bill: Building a Private Serverless Platform with OpenFaaS in 2019

The promise of Serverless is seductive. You write code, push it to a black box, and only pay for the milliseconds used. It sounds perfect for the agile team. But ask any CTO who has moved a significant workload to AWS Lambda or Google Cloud Functions about their monthly invoice, and the conversation shifts from "agility" to "cost containment."

Furthermore, we operate in Europe. With the GDPR in full swing since last year, relying blindly on the US-EU Privacy Shield is a strategic risk many Norwegian companies are hesitant to take. Data sovereignty isn't just a buzzword; it's a legal requirement monitored by Datatilsynet.

There is a pragmatic alternative: Private Serverless. By running a Function-as-a-Service (FaaS) framework on your own Virtual Dedicated Servers (VDS), you regain control over costs, latency, and data residency. This guide explores how to architect a private serverless cluster using Docker Swarm and OpenFaaS on CoolVDS infrastructure.

The Architecture: Why Bare Metal Performance Matters in Virtualization

Serverless functions are ephemeral. They spin up, execute, and die. This churn generates massive I/O overhead. Public clouds hide this latency behind abstraction layers, but often you suffer from "cold starts"—the delay between triggering a function and its execution.

When you build this yourself, the underlying hardware is the bottleneck. Standard HDD VPS solutions will choke on container orchestration. To match public cloud performance, you need:

  • KVM Virtualization: To ensure no resource overcommitment (unlike OpenVZ).
  • NVMe Storage: Essential for rapid container image pulling and layer extraction.
  • Low Latency Network: If your users are in Oslo, your server should be too, not in a Frankfurt datacenter.

Pro Tip: On CoolVDS, we use KVM to pass the host's CPU flags straight through to the guest. Runtimes commonly used for serverless functions, such as Go binaries and numerical Python libraries, benefit directly because they can use the AVX instructions native to the processor.
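
You can check which instruction sets the hypervisor actually exposes to your guest with a quick look at the CPU flags:

# List the AVX-family flags visible inside the VM
grep -o 'avx[^ ]*' /proc/cpuinfo | sort -u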

Step 1: The Foundation (Docker Swarm)

While Kubernetes is winning the container war, for a lean, pragmatic serverless setup in 2019, Docker Swarm remains significantly easier to manage and lighter on resources. We will assume you are running a fresh CoolVDS instance with Ubuntu 18.04 LTS.

First, install the latest Docker engine:

curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo usermod -aG docker cooluser
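
Log out and back in so the docker group change takes effect, then confirm the engine is healthy:

docker version
docker run --rm hello-world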

Initialize the Swarm. If you have multiple CoolVDS instances (recommended for high availability), run this on your manager node and use the join token on workers:

docker swarm init --advertise-addr $(hostname -i)
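
To attach more nodes, ask the manager for the join command and run its output on each worker:

docker swarm join-token worker   # prints the docker swarm join command for workers
docker node ls                   # afterwards, every node should report Ready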

Step 2: Deploying OpenFaaS

OpenFaaS (Functions as a Service) is currently the leading open-source option for running serverless anywhere. It wraps any Docker container with a lightweight watchdog process that turns it into a function served through a central gateway.

Clone the project and deploy the stack:

git clone https://github.com/openfaas/faas
cd faas
./deploy_stack.sh

This script executes a docker stack deploy command, spinning up the Gateway, NATS Streaming (for async queues), and Prometheus (for auto-scaling metrics). Within 60 seconds, you have a functional serverless platform.
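
Give the services a minute to converge, then verify the stack; in the upstream repository it is named func. The script also generates basic-auth credentials for the gateway and prints them at the end, so note them down for the CLI login later.

# All replicas should report 1/1 once the images have been pulled
docker stack services func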

Step 3: Configuring the CLI and Your First Function

Install the CLI tool to interact with your new cluster:

curl -sL https://cli.openfaas.com | sudo sh

Now, let's create a Python function. This is where the developer experience mirrors the public cloud, but without the vendor lock-in.

faas-cli new --lang python3 data-processor
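
On the first run the CLI pulls down the official templates, then scaffolds a stack file and a function folder (names reflect the python3 template as of this writing):

ls data-processor.yml data-processor/
# data-processor.yml              -> stack file describing how to build and deploy
# data-processor/handler.py       -> your function code
# data-processor/requirements.txt -> pip dependencies for the function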

The function's logic lives in data-processor/handler.py. Let's edit it to process some JSON data, a common use case for webhooks:

import json

def handle(req):
    try:
        payload = json.loads(req)
        # Simulate business logic
        result = {
            "status": "processed",
            "user_id": payload.get("id"),
            "region": "NO-West"
        }
    except ValueError:
        return "Invalid JSON"
    
    return json.dumps(result)

Build and deploy this to your local stack. If the gateway has basic auth enabled, log in first with faas-cli login using the credentials printed by the deploy script, and on a multi-node Swarm push the image with faas-cli push to a registry the workers can reach. Note that because we are on CoolVDS with NVMe, the build process, which is heavy on disk I/O from pulling base images and extracting layers, is exceptionally fast.

faas-cli build -f data-processor.yml
faas-cli deploy -f data-processor.yml
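
Once deployed, invoke the function through the gateway to confirm the round trip (the address assumes you are on the manager node itself):

echo '{"id": 42}' | faas-cli invoke data-processor
# or over plain HTTP:
curl -d '{"id": 42}' http://127.0.0.1:8080/function/data-processor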

Step 4: Securing the Gateway

By default, the OpenFaaS gateway runs on port 8080. For a production environment, you must put this behind a reverse proxy with SSL. Nginx is the industry standard here.

Here is a hardened nginx.conf snippet specifically for handling the long-polling connections sometimes required by synchronous function calls:

upstream openfaas {
    server 127.0.0.1:8080;
}

server {
    listen 80;
    server_name functions.yourdomain.no;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    server_name functions.yourdomain.no;

    ssl_certificate /etc/letsencrypt/live/functions.yourdomain.no/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/functions.yourdomain.no/privkey.pem;

    location / {
        proxy_pass http://openfaas;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        
        # Critical for long-running functions
        proxy_read_timeout 300s;
        proxy_connect_timeout 300s;
    }
}
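
The certificate paths above assume a Let's Encrypt certificate already exists for the hostname. One way to obtain it is certbot in standalone mode (certbot is available from the distribution repositories or the certbot PPA; make sure nothing is holding port 80 while it runs), then reload Nginx:

sudo apt-get install -y certbot
sudo certbot certonly --standalone -d functions.yourdomain.no
sudo nginx -t && sudo systemctl reload nginx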

The Economic Argument: TCO Analysis

Let's look at the numbers. A high-traffic API on AWS Lambda can easily cost $500+ per month once you exceed the free tier, especially when factoring in API Gateway fees and NAT Gateway charges for VPC access.

Conversely, a CoolVDS instance with 4 vCPUs and 8GB RAM costs a fraction of that. You get predictable billing. You know exactly what your invoice will be at the end of the month. For CFOs and Project Managers, this predictability is invaluable.

Handling State with MinIO

Functions are stateless. But your application isn't. Instead of paying egress fees to S3, you can deploy MinIO on your CoolVDS cluster. It provides an S3-compatible API but keeps the data on your local NVMe storage.

docker service create --name minio \
  --publish 9000:9000 \
  --mount type=bind,source=/mnt/data,target=/data \
  --env MINIO_ACCESS_KEY=cooladmin \
  --env MINIO_SECRET_KEY=supersecret \
  minio/minio server /data
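
To sanity-check the deployment and create a bucket for your functions to write to, the MinIO client (mc) is the quickest route; the alias and bucket names below are examples, and the credentials match the ones passed to the service above:

wget https://dl.min.io/client/mc/release/linux-amd64/mc && chmod +x mc
./mc config host add coolvds http://127.0.0.1:9000 cooladmin supersecret
./mc mb coolvds/processed-data
./mc ls coolvds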

This setup ensures that customer data never leaves your control, simplifying your GDPR compliance strategy significantly.

Conclusion

You do not need to surrender your infrastructure to the hyperscalers to get the benefits of serverless architecture. By combining Docker Swarm, OpenFaaS, and robust CoolVDS hardware, you build a platform that is fast, cost-effective, and legally compliant.

The cloud isn't about who owns the server; it's about how you use it. Take back control of your stack.

Ready to build your private cloud? Deploy a high-performance CoolVDS instance in Oslo today and start your Docker Swarm in under 2 minutes.