Serverless is a Lie: Building Your Own FaaS Platform on KVM for Control and Compliance

Let’s clear the air immediately: "Serverless" is a marketing term, not a technical reality. It is simply code running on someone else's computer, usually capped by arbitrary limits and billed at a premium that would make a corporate lawyer blush.

I recently audited a setup for a client in Oslo who migrated their image processing pipeline to a major public cloud's FaaS offering. They were promised infinite scalability. What they got was a monthly bill exceeding 45,000 NOK because a developer left a recursive loop in a function that triggered millions of invocations. If they had been running this on a fixed-resource VDS, the server would have simply hit 100% CPU load, alerts would have fired, and we would have killed the process. Zero extra cost. Total downtime: 2 minutes. Total financial damage: 0 NOK.

For Norwegian businesses dealing with Datatilsynet (The Norwegian Data Protection Authority) and strict GDPR requirements, sending data to opaque cloud black boxes is often a compliance nightmare waiting to happen. Today, we are going to architect a "Serverless" platform that you actually own. We will use OpenFaaS on top of Docker, deployed on high-performance CoolVDS KVM instances.

The Architecture: Why Self-Hosted FaaS?

In late 2019, the ecosystem for self-hosted FaaS is mature. Tools like OpenFaaS, Kubeless, and Knative have stabilized. However, running these frameworks requires an underlying infrastructure that doesn't steal CPU cycles. This is where the "Noisy Neighbor" effect of cheap shared hosting kills FaaS performance.

FaaS relies heavily on Cold Start speeds—the time it takes to spin up a container to handle a request. If your VDS provider oversells CPU or uses spinning rust (HDD) instead of NVMe, your function might take 3-4 seconds to start. That is unacceptable. We need the raw I/O of NVMe and the isolation of KVM (Kernel-based Virtual Machine) to ensure our micro-containers launch in milliseconds.

The Stack

  • Infrastructure: CoolVDS NVMe Instance (Ubuntu 18.04 LTS).
  • Orchestration: Docker Swarm (Simpler than Kubernetes for small-to-medium clusters, and rock-solid in 2019).
  • FaaS Framework: OpenFaaS.
  • Reverse Proxy: Nginx (for SSL termination and buffering).

Step 1: The Foundation

First, we prepare the OS. We aren't just running `apt-get install`. We need to tune the kernel for high-density container usage. Open `/etc/sysctl.conf` and add these parameters to handle the network traffic generated by hundreds of short-lived function containers.

# /etc/sysctl.conf optimizations for high container density
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1

# Increase connection tracking table size
net.netfilter.nf_conntrack_max = 131072

# Reuse closed sockets faster
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 15

Apply these with `sysctl -p`. If you are on a CoolVDS instance, you'll notice the kernel accepts these parameters immediately because we provide genuine KVM virtualization, not a restricted container environment like OpenVZ.
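
One caveat: the `net.bridge.bridge-nf-call-iptables` key only exists once the `br_netfilter` kernel module is loaded, so load it first and persist it across reboots:

# Load the bridge netfilter module so the bridge-nf-call sysctls exist
sudo modprobe br_netfilter

# Persist the module across reboots
echo "br_netfilter" | sudo tee /etc/modules-load.d/br_netfilter.conf

# Now re-apply the settings
sudo sysctl -p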

Step 2: Deploying the FaaS Gateway

We will use Docker Swarm for this deployment. It is lightweight and built directly into the Docker CLI, so there is nothing extra to install. Initialize the swarm:

docker swarm init --advertise-addr $(hostname -i)
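
A word of caution: `hostname -i` can return 127.0.1.1 or multiple addresses on some Ubuntu installs, so substitute your server's actual IP if the init fails. Then confirm the node came up as a manager:

docker node ls
docker info --format '{{.Swarm.LocalNodeState}}'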

Now, we deploy the OpenFaaS stack. We are going to clone the official repo, but we will modify the timeout settings. By default, many gateways time out too aggressively for heavy data processing jobs.

git clone https://github.com/openfaas/faas
cd faas 
./deploy_stack.sh

However, before you run that deploy script, open the stack file (`docker-compose.yml`). We need to give the gateway service more generous timeouts than the defaults. Here is a snippet of how the configuration should look to handle heavier loads:

version: "3.3"
services:
  gateway:
    image: openfaas/gateway:0.18.17
    networks:
      - functions
    environment:
      - functions_provider_url=http://faas-swarm:8080/
      - read_timeout=60s # Increase this for long-running tasks
      - write_timeout=60s
      - upstream_timeout=65s
    deploy:
      resources:
        limits:
          memory: 200M
        reservations:
          memory: 100M
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 20
        window: 380s

Pro Tip: Never deploy FaaS without monitoring. OpenFaaS ships with Prometheus out of the box. If you see high latency in the `gateway_functions_seconds` metric, your VDS is likely I/O bound. This is where switching from a standard SSD VPS to a CoolVDS NVMe plan usually cuts latency by 40% instantly.
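
To check that latency yourself, the bundled Prometheus (reachable on port 9090 in the default stack) can answer with a query along these lines, a sketch using the standard histogram functions:

# 95th-percentile gateway latency per function over the last 5 minutes
histogram_quantile(0.95,
  sum(rate(gateway_functions_seconds_bucket[5m])) by (le, function_name))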

Step 3: Creating a Python Function

Let's create a simple function that might process user data. We need to ensure this remains compliant with Norwegian privacy standards. By hosting this yourself, you know exactly where the data lives: on your server in the data center, not replicated across three continents.

Install the CLI:

curl -sL https://cli.openfaas.com | sudo sh
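
Confirm the binary landed on your PATH:

faas-cli version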

Scaffold a new function:

faas-cli new --lang python3 privacy-filter

Edit `privacy-filter/handler.py`. This simple function redacts sensitive fields before the payload hits your database.

import json

def handle(req):
    """Redact sensitive fields before the payload reaches storage."""
    try:
        data = json.loads(req)
    except (ValueError, TypeError):
        # Reject anything that is not valid JSON rather than passing it through
        return json.dumps({"error": "invalid JSON payload"})

    # Redact sensitive fields in line with GDPR data-minimisation
    if "social_security_id" in data:
        data["social_security_id"] = "[REDACTED]"

    return json.dumps(data)
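
One gotcha: the gateway timeouts from Step 2 only help if the function's watchdog is allowed to run that long too. Here is a sketch of what the generated `privacy-filter.yml` might look like with matching watchdog timeouts (the provider name and image tag depend on your faas-cli version, so treat this as a template, not gospel):

provider:
  name: faas
  gateway: http://127.0.0.1:8080

functions:
  privacy-filter:
    lang: python3
    handler: ./privacy-filter
    image: privacy-filter:latest
    environment:
      # Classic watchdog timeouts, aligned with the gateway's 60s settings
      read_timeout: 60s
      write_timeout: 60s
      exec_timeout: 60s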

Build and deploy. Note how fast this build process is on a local VDS compared to uploading zips to the cloud.

faas-cli up -f privacy-filter.yml
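
Once it is deployed, smoke-test the function straight from the CLI, and time a request through the gateway to see the cold-start cost for yourself (the sample payload here is obviously made up):

echo '{"name": "Ola Nordmann", "social_security_id": "01019012345"}' | faas-cli invoke privacy-filter

# Rough end-to-end latency measurement through the gateway
curl -s -o /dev/null -w "total: %{time_total}s\n" \
  -d '{"social_security_id": "test"}' \
  http://127.0.0.1:8080/function/privacy-filter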

Performance Tuning: Nginx as the Front Door

Directly exposing the FaaS gateway to the internet is reckless. We need Nginx. Not just for security, but for buffering. When 1,000 requests hit your API in a second, Nginx handles the connections much more efficiently than the Go-based gateway.

Here is a hardened `nginx.conf` block specifically for handling FaaS traffic. Note the `proxy_buffering` directives.

server {
    listen 80;
    server_name faas.yourdomain.no;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        
        # Critical for performance
        proxy_buffering on;
        proxy_buffers 8 16k;
        proxy_buffer_size 32k;
        
        # Keep connections alive to the upstream
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
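
Since half the point of putting Nginx in front is SSL termination, you will also want a 443 block. A minimal sketch, assuming you have already issued certificates with certbot (the paths below are the Let's Encrypt defaults, adjust to your setup):

server {
    listen 443 ssl;
    server_name faas.yourdomain.no;

    # Assumed certbot-issued certificate paths
    ssl_certificate     /etc/letsencrypt/live/faas.yourdomain.no/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/faas.yourdomain.no/privkey.pem;
    ssl_protocols       TLSv1.2 TLSv1.3;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_buffering on;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}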

The Latency Advantage in Norway

Why go through this trouble? Physics. If your customers are in Oslo, Bergen, or Trondheim, and your "Serverless" functions are running in a data center in Frankfurt or Ireland, you are adding 20-40ms of round-trip time (RTT) purely on network distance. By hosting on CoolVDS, which peers directly at NIX (Norwegian Internet Exchange), you are cutting that network latency down to single digits.
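
You can verify the difference yourself with nothing fancier than ping (the second hostname is a stand-in for whatever continental endpoint you are comparing against):

ping -c 10 faas.yourdomain.no
ping -c 10 your-app.eu-central-1.example.com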

Feature           | Public Cloud FaaS              | Self-Hosted on CoolVDS
------------------|--------------------------------|-------------------------------
Cost Model        | Per invocation (unpredictable) | Fixed monthly (predictable)
Data Sovereignty  | Unclear, replicated            | 100% Norway, controlled
Execution Timeout | Usually 5-15 mins max          | Unlimited (it's your server)
Hardware          | Shared, unknown specs          | Dedicated KVM resources, NVMe

Conclusion

Serverless architecture is a powerful pattern, but it shouldn't mean surrendering control of your infrastructure or your budget. By combining the lightweight nature of OpenFaaS with the raw power of KVM, you get the best of both worlds: the developer velocity of FaaS and the stability of a dedicated server.

If you are ready to build a pipeline that processes data at the speed of NVMe without the billing anxiety, it is time to look at the infrastructure layer.

Don't let slow I/O kill your cold starts. Deploy your FaaS cluster on a CoolVDS NVMe instance today and take back control of your stack.