Serverless Patterns Without the Bill Shock: Self-Hosted FaaS on KVM

"Serverless" is the greatest marketing lie of the last decade. There are always servers; the only difference is whether you control them or rent them by the millisecond at a 400% markup while praying your cold starts don't time out the API gateway. If you are building event-driven architectures in 2024, you have likely hit the wall where the convenience of AWS Lambda or Azure Functions is eclipsed by unpredictable billing and the Schrems II legal headache.

In the Nordics, specifically, relying on US-centric hyperscalers for critical data processing is becoming a liability. Datatilsynet (The Norwegian Data Protection Authority) has been clear about data sovereignty. The pragmatic solution? Bring the serverless pattern to your own infrastructure.

This guide breaks down how to implement a resilient, self-hosted FaaS (Function as a Service) architecture using OpenFaaS and K3s on high-performance KVM instances. We gain the developer velocity of serverless without losing the raw I/O performance of the metal.

The Architecture: Why KVM beats Shared Containers

Public cloud functions run in heavily multi-tenant environments. You are fighting for CPU cycles with thousands of other users. This causes "noisy neighbor" latency spikes. For a consistent event-driven system, we need isolation.

We use CoolVDS here as the reference implementation because effective FaaS requires two things: low-latency NVMe storage (for rapid container image pulling) and dedicated CPU time. If your underlying hypervisor steals cycles, your function execution time drifts, and your synchronous APIs hang.

The Stack

  • Infrastructure: CoolVDS KVM Instances (Ubuntu 22.04 LTS)
  • Orchestration: K3s (Lightweight Kubernetes, perfect for edge/VPS)
  • FaaS Framework: OpenFaaS (Standard, container-centric)
  • Ingress: Traefik (bundled with K3s)

Step 1: The Base Layer Optimization

Before installing Kubernetes, we must tune the kernel for container workloads. The default Linux networking stack is too conservative for high-churn pod creation.

SSH into your CoolVDS instance and apply these sysctl params. We are optimizing for high connection counts and rapid TCP reuse, essential when functions scale to zero and back up rapidly.

# /etc/sysctl.d/99-k8s-faas.conf

# Increase max open files
fs.file-max = 2097152

# Optimize ARP cache for dense networks
net.ipv4.neigh.default.gc_thresh1 = 128
net.ipv4.neigh.default.gc_thresh2 = 512
net.ipv4.neigh.default.gc_thresh3 = 1024

# Allow more connections
net.core.somaxconn = 32768
net.ipv4.ip_local_port_range = 1024 65000

# Allow reuse of TIME_WAIT sockets for new outbound connections
# (safe, unlike the removed tcp_tw_recycle; useful for high-throughput FaaS)
net.ipv4.tcp_tw_reuse = 1

Apply them:

sudo sysctl --system
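To confirm the new values are actually live, query a couple of them back (the values shown match the config file above):

```shell
# Read back two of the tuned parameters
sysctl net.core.somaxconn net.ipv4.tcp_tw_reuse
```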

Step 2: Deploying the Control Plane

We avoid full-blown K8s (kubeadm) because it eats too much RAM on smaller nodes. K3s is a CNCF-certified Kubernetes distribution shipped as a single binary that uses roughly half the memory. On a CoolVDS instance in Oslo, the installation takes about 25 seconds.

curl -sfL https://get.k3s.io | sh -

# Verify the node is ready (CoolVDS NVMe usually initializes this instantly)
sudo k3s kubectl get node

Once the node is Ready, we grab the kubeconfig to manage it remotely or simply run commands on the box.
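K3s writes its kubeconfig to /etc/rancher/k3s/k3s.yaml on the node. A minimal sketch for remote management from your workstation, assuming you substitute your instance's public IP for the <server-ip> placeholder:

```shell
# Copy the kubeconfig off the node (replace <server-ip> with your CoolVDS instance address)
scp root@<server-ip>:/etc/rancher/k3s/k3s.yaml ~/.kube/k3s.yaml

# Point the config at the public endpoint instead of the loopback address
sed -i 's/127.0.0.1/<server-ip>/' ~/.kube/k3s.yaml

export KUBECONFIG=~/.kube/k3s.yaml
kubectl get nodes
```

Make sure port 6443 is open to your workstation's IP (and ideally nothing else) before doing this.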

Step 3: Installing OpenFaaS

We use arkade, a CLI tool that simplifies Helm chart installations for OpenFaaS. It handles the nitty-gritty of service accounts and RBAC.

# Install arkade
curl -sLS https://get.arkade.dev | sudo sh

# Install OpenFaaS core services
arkade install openfaas

# Check the rollout status
kubectl rollout status -n openfaas deploy/gateway

Pro Tip: Public cloud FaaS charges for egress. CoolVDS provides generous bandwidth limits. If you are processing images or video (e.g., transcoding with ffmpeg), the cost difference is often 10x in favor of VPS hosting.
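Before deploying anything, authenticate faas-cli against the gateway. A sketch using the basic-auth secret the chart generates and a local port-forward:

```shell
# Forward the OpenFaaS gateway to localhost
kubectl port-forward -n openfaas svc/gateway 8080:8080 &

# Fetch the generated admin password and log in
PASSWORD=$(kubectl get secret -n openfaas basic-auth \
  -o jsonpath="{.data.basic-auth-password}" | base64 --decode)
echo "$PASSWORD" | faas-cli login --username admin --password-stdin
```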

Step 4: Deploying a Python Function

Let's create a function that actually does something useful: a simple image resizer. This is a classic I/O-heavy task that exposes the weakness of slow disk I/O on cheap hosting.

First, install the FaaS CLI:

curl -sL https://cli.openfaas.com | sudo sh

Pull the python template and create the function skeleton:

faas-cli template store pull python3-http
faas-cli new --lang python3-http image-resizer --prefix=registry.yourdomain.com

Edit image-resizer/handler.py:

import io
from PIL import Image

def handle(event, context):
    if event.method != "POST":
        return {"statusCode": 405, "body": "Method not allowed"}

    try:
        # In a real scenario, event.body would be bytes or a URL
        # This simulates the CPU work of resizing
        image_data = io.BytesIO(event.body)
        image = Image.open(image_data)
        
        # Resize to thumbnail
        image.thumbnail((128, 128))
        
        out_buffer = io.BytesIO()
        image.save(out_buffer, format="JPEG")
        
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "image/jpeg"},
            "body": out_buffer.getvalue()
        }
    except Exception as e:
        return {"statusCode": 500, "body": str(e)}

Deploying this to your cluster:

faas-cli up -f image-resizer.yml
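Once the rollout finishes, you can exercise the function through the gateway. A sketch, assuming a local test image photo.jpg and the gateway reachable on 127.0.0.1:8080:

```shell
# POST an image and save the resized thumbnail
curl -X POST --data-binary @photo.jpg \
  http://127.0.0.1:8080/function/image-resizer \
  -o thumbnail.jpg
```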

When you hit this endpoint, the container spins up. On CoolVDS, thanks to the NVMe backing the Docker registry cache, the "cold start" is virtually imperceptible compared to the 200-500ms lag on AWS Lambda.

Comparing Architectures: Cloud vs. VPS

Why move away from fully managed? It comes down to the TCO (Total Cost of Ownership) and Performance stability.

Feature              | Public Cloud FaaS (Lambda/Azure) | CoolVDS + OpenFaaS
Execution time limit | Strict (15 min on Lambda)        | Unlimited
Disk I/O             | Network-attached /tmp (slow)     | Local NVMe (fast)
Data sovereignty     | US CLOUD Act exposure            | Norway/EU jurisdiction
Cost model           | Per request + GB-seconds         | Flat monthly rate

The Latency Advantage in Norway

If your users are in Oslo, Bergen, or Trondheim, routing traffic to Frankfurt or Ireland (common AWS regions) adds 20-40ms of round-trip time. By hosting your FaaS cluster on CoolVDS in Norway, you utilize the NIX (Norwegian Internet Exchange) infrastructure.

Low latency isn't just about speed; it's about the "snappiness" of the UI. When a user uploads a profile picture, they expect an instant crop-and-resize operation. Milliseconds matter.

Conclusion

Serverless is a pattern, not a product. By decoupling your application logic into functions but hosting them on robust, single-tenant KVM VPS infrastructure like CoolVDS, you regain control. You stop worrying about surprise bills at the end of the month and start focusing on code efficiency.

Don't let slow I/O kill your application's performance. Deploy a high-performance K3s cluster on CoolVDS today and experience the difference true NVMe throughput makes.