Serverless Without the Lock-in: Portable FaaS Patterns for Nordic Infrastructure

Beyond the Hype: Building Portable Serverless Architectures in a Post-Schrems II World

Let’s clear the air: "Serverless" is a marketing term. There are always servers. The only question is whether you control them or rent them by the millisecond at a 400% markup, praying your data doesn't accidentally route through a jurisdiction that makes Datatilsynet (the Norwegian Data Protection Authority) nervous.

As a Systems Architect operating out of Oslo, I've watched too many engineering teams fall into the "hyperscaler trap." They start with a few functions on AWS Lambda or Azure Functions. It's cheap at first. Then, traffic scales. Suddenly, the bill arrives, and that "cost-effective" architecture is burning through the budget faster than a heater on a balcony in Tromsø.

Even worse is the latency. If your users are in Norway, but your functions are cold-starting in a Frankfurt data center, you are fighting physics. You are losing.

Today, we are going to look at a pragmatic, battle-tested architecture: Self-Hosted Serverless. We will use standard open-source tools (K3s, OpenFaaS) running on high-performance KVM instances (like CoolVDS) to get the developer experience of serverless with the cost predictability and data sovereignty of a Virtual Private Server.

The Architecture: Why "Bring Your Own Metal"?

As of November 2023, the trend in DevOps is repatriation: moving workloads back from the public cloud to predictable infrastructure. The pattern we use is Containerized FaaS.

Instead of relying on a proprietary runtime, we deploy a lightweight Kubernetes cluster on a VDS. On top of that, we run an open-source FaaS framework. This gives us:

  • Zero Vendor Lock-in: The same code runs on CoolVDS, your laptop, or a bare-metal rack.
  • NVMe Performance: Public cloud functions often suffer from "noisy neighbors" and slow disk I/O. On a dedicated KVM slice with NVMe, disk latency is negligible.
  • Fixed Costs: You pay for the VDS resources, not the invocation count. If a function goes into a recursive loop, your server hits 100% CPU, but your bank account doesn't drain.

The Stack

  • Infrastructure: CoolVDS NVMe Instance (Ubuntu 22.04 LTS).
  • Orchestration: K3s (Lightweight Kubernetes).
  • FaaS Framework: OpenFaaS (Standard Edition).
  • Ingress: Traefik or Nginx.

Step 1: The Foundation (System Tuning)

Before installing Kubernetes, we need to prep the OS. Default Linux settings are conservative; for a high-throughput FaaS environment, we need to raise the file-descriptor limits and tune the network stack. I've seen default configs choke on as few as 50 concurrent function invocations.

Run this on your CoolVDS instance:

# /etc/sysctl.d/99-k8s-networking.conf

# Increase the limit of open file descriptors
fs.file-max = 2097152

# Optimize ARP cache for high density pods
net.ipv4.neigh.default.gc_thresh1 = 4096
net.ipv4.neigh.default.gc_thresh2 = 8192
net.ipv4.neigh.default.gc_thresh3 = 16384

# Enable IP forwarding (Required for K8s CNI)
net.ipv4.ip_forward = 1

# BBR Congestion Control for better throughput
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr

Apply these with sysctl --system. If you skip this, your latency will spike unpredictably under load.
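One caveat: fs.file-max raises the kernel-wide ceiling, but per-process descriptor limits are governed separately. A minimal sketch for /etc/security/limits.conf (the 1048576 value is an assumption; size it to your workload):

# /etc/security/limits.conf -- illustrative per-process limits
*    soft    nofile    1048576
*    hard    nofile    1048576

Note that systemd services ignore limits.conf; for the K3s unit itself you'd set LimitNOFILE in a systemd drop-in override instead.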

Step 2: Deploying the Lightweight Cluster

We don't need the bloat of full Kubernetes. K3s packs a certified Kubernetes distribution into a single binary of under 100MB. It lets us turn a single CoolVDS instance into a working cluster.

curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable traefik" sh -

Note: We disable the default Traefik because we want to control the ingress manually for granular rate limiting later.
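Give the installer a minute, then sanity-check the cluster before moving on. K3s bundles kubectl, and its kubeconfig lives at /etc/rancher/k3s/k3s.yaml:

# The node should report Ready within a minute or so
sudo k3s kubectl get nodes

# Copy the kubeconfig so plain kubectl (and arkade, below) can reach the cluster
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $(id -u):$(id -g) ~/.kube/config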

Pro Tip: On CoolVDS, because we use genuine KVM virtualization, you don't need to worry about the missing kernel modules that often plague container-based VPS providers (LXC/OpenVZ). K3s runs natively, without hacks.
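As for that manual ingress control: with ingress-nginx, granular rate limiting is a one-line annotation. A minimal sketch, assuming ingress-nginx as the controller, a placeholder hostname, and an assumed cap of 50 requests per second:

# rate-limit-ingress.yaml -- illustrative manifest for the OpenFaaS gateway
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: openfaas-gateway
  namespace: openfaas
  annotations:
    # ingress-nginx built-in: max requests per second per client IP
    nginx.ingress.kubernetes.io/limit-rps: "50"
spec:
  ingressClassName: nginx
  rules:
    - host: faas.example.com   # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: gateway  # OpenFaaS gateway service (installed in Step 3)
                port:
                  number: 8080

Apply it with kubectl apply -f once OpenFaaS (Step 3) is up, since it targets the gateway service.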

Step 3: Installing OpenFaaS

OpenFaaS abstracts Docker and Kubernetes away. It gives you a simple gateway to deploy functions. We use arkade, a CLI marketplace for K8s apps, to install it cleanly.

# Install arkade
curl -sLS https://get.arkade.dev | sudo sh

# Install OpenFaaS
arkade install openfaas

Once installed, you’ll get a generated password. Save it. You are now the CIO of your own serverless platform.
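To actually log in, port-forward the gateway and pull the generated secret. arkade prints these steps after the install; they boil down to the standard OpenFaaS commands:

# Forward the OpenFaaS gateway to localhost
kubectl port-forward -n openfaas svc/gateway 8080:8080 &

# Retrieve the generated admin password
PASSWORD=$(kubectl get secret -n openfaas basic-auth \
  -o jsonpath="{.data.basic-auth-password}" | base64 --decode)

# Authenticate the CLI (grab it first with: arkade get faas-cli)
echo -n "$PASSWORD" | faas-cli login --username admin --password-stdin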

Step 4: The War Story – Handling "The Spike"

I recall a project for a Norwegian media outlet during the 2021 elections. They used a public cloud function to process image uploads. When the exit polls hit, traffic surged 4000%. The cloud provider scaled beautifully, but the cold starts (the time it takes to spin up a new container) caused 3-second delays on the frontend. The users thought the site was down.

We moved the workload to a high-frequency compute VDS. Because we controlled the environment, we could keep the function containers "warm" without paying extra per second. We set the readinessProbe and livenessProbe aggressively to ensure containers were always ready to accept TCP connections.

Here is how you configure a function in OpenFaaS to handle high I/O (like image processing) by leveraging the underlying NVMe storage of CoolVDS:

version: 1.0
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080
functions:
  image-resizer:
    lang: python3-http
    handler: ./image-resizer
    image: my-registry/image-resizer:latest
    environment:
      write_debug: "true"
      read_timeout: 10s
      write_timeout: 10s
    # The magic sauce for performance (scale bounds are labels, not annotations, in OpenFaaS):
    labels:
      com.openfaas.scale.min: "2"   # Always keep 2 replicas warm
      com.openfaas.scale.max: "20"
    limits:
      memory: 256Mi
      cpu: 500m
    requests:
      memory: 128Mi
      cpu: 100m
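Save the stack file (say, as image-resizer.yml) and ship it with faas-cli. Since OpenFaaS renders each function as an ordinary Deployment in the openfaas-fn namespace, you can also tighten the probes the war story mentioned. The patch below is an illustrative sketch with assumed timing values; check the probe your OpenFaaS version generates before relying on it:

# Build, push, and deploy in one step
faas-cli up -f image-resizer.yml

# Illustrative: make the readiness probe more aggressive on the generated Deployment
kubectl patch deployment image-resizer -n openfaas-fn --patch \
  '{"spec":{"template":{"spec":{"containers":[{"name":"image-resizer",
    "readinessProbe":{"initialDelaySeconds":1,"periodSeconds":2}}]}}}}'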

Latency Matters: The Norwegian Context

Physics is stubborn. If your data center is in Virginia (us-east-1) and your user is in Bergen, the round-trip time (RTT) is roughly 90-110ms. That is before your server processes a single byte.

Hosting on CoolVDS in a local data center drops that RTT to single digits (often <10ms within Norway). For interactive applications or API-heavy microservices, this latency reduction is more valuable than any code optimization you can write.
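Physics is also easy to measure: curl prints connection timings directly. Run this from a machine in Norway against your current endpoint and against a CoolVDS-hosted one (the URL is a placeholder):

# time_connect is roughly one RTT; time_starttransfer is time to first byte
curl -o /dev/null -s -w "connect: %{time_connect}s  TTFB: %{time_starttransfer}s  total: %{time_total}s\n" \
  https://your-endpoint.example.com/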

Benchmarking I/O

Don't take my word for it. Run a simple fio test on your current hosting environment. If you aren't seeing IOPS in the tens of thousands, your "Serverless" database functions will bottleneck.

fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=1G --readwrite=randrw --rwmixread=75

On our CoolVDS NVMe setups, this typically flies. On standard shared cloud hosting? You might want to grab a coffee while it finishes.

Security & Compliance (GDPR)

Under Schrems II, transferring personal data to US-owned cloud providers is legally complex. By running OpenFaaS on a European provider like CoolVDS, you simplify your compliance posture. You know exactly where the physical disk resides. You control the encryption keys. There is no opaque "managed service" layer that might be logging payload data.
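One concrete step on the key-control front: K3s can encrypt Kubernetes Secrets at rest in its datastore, so credentials in your FaaS stack never hit the NVMe disk in plaintext. A minimal sketch, passing the flag at install time (combined here with the Traefik flag from Step 2):

# Enable at-rest encryption of Secrets in the K3s datastore
curl -sfL https://get.k3s.io | \
  INSTALL_K3S_EXEC="--disable traefik --secrets-encryption" sh -

# Verify
sudo k3s secrets-encrypt status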

Conclusion

Serverless is an architectural pattern, not a billing model. You don't need AWS Lambda to build event-driven systems. You need a container orchestrator and a robust virtualization layer.

By building on top of K3s and CoolVDS, you gain:

  1. Predictable Billing: No surprise invoices.
  2. Data Sovereignty: Keep your data in Norway/Europe.
  3. Raw Performance: Direct access to NVMe I/O and KVM isolation.

Don't let latency or legal gray areas compromise your architecture. Spin up a CoolVDS instance today, install K3s, and own your infrastructure again.