Serverless Sovereignty: Architecting Self-Hosted FaaS to Survive Schrems II

Let's be honest: AWS Lambda and Google Cloud Functions are technically impressive. They are also a trap. You start with free tier credits, and six months later you are staring at a bill that makes your CFO weep, locked into a proprietary ecosystem that makes migration impossible. But in 2021, the bigger problem isn't cost—it's legal survival.

Since the CJEU struck down the Privacy Shield in July 2020 (Schrems II), sending Norwegian user data to US-controlled cloud providers has become a game of Russian roulette with Datatilsynet. If your serverless function processes PII (Personally Identifiable Information) on a server owned by a US entity, you are technically non-compliant. Period.

The solution isn't to abandon the serverless architecture pattern. It's to own the metal it runs on. By deploying a Function-as-a-Service (FaaS) platform on high-performance KVM instances located in Oslo, you keep latency low and legal risks lower. Here is the battle-tested architecture we are using right now.

The Stack: K3s + OpenFaaS on CoolVDS

We don't need the bloat of full upstream Kubernetes for this. K3s is a lightweight, certified Kubernetes distribution that strips out legacy, alpha, and in-tree cloud-provider add-ons. It is perfect for turning a robust VPS into a function runner. On top, we layer OpenFaaS, which provides the event-driven triggers and autoscaling logic.

Why KVM? Containers share the host kernel. If you run FaaS on OpenVZ or LXC containers (which many budget hosts sell as "VPS"), you will hit cgroup limitations and weaker security boundaries. You need hardware virtualization. CoolVDS uses KVM, meaning the kernel is yours. No noisy neighbors stealing CPU cycles when your function needs to cold-start.
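
A 30-second sanity check from inside the guest tells you what you are actually running on:

# Should print "kvm" on a genuine KVM guest; "openvz" or "lxc" means a container sold as a "VPS"
systemd-detect-virt

# Cross-check: KVM guests expose a hypervisor vendor line
lscpu | grep -i hypervisor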

Step 1: The Foundation

Start with a fresh Debian 10 or Ubuntu 20.04 LTS instance. Before touching Docker, we need to tune the kernel for high-throughput networking. Default Linux settings are conservative.

# /etc/sysctl.d/99-k8s-networking.conf

# Enable IP forwarding (mandatory for CNI plugins)
net.ipv4.ip_forward = 1

# Increase the connection tracking table for high concurrent function invocations
net.netfilter.nf_conntrack_max = 131072

# Optimize for low latency
net.ipv4.tcp_fastopen = 3
net.core.somaxconn = 1024

Apply these with sysctl --system. If you skip the connection tracking bump, your FaaS gateway will drop packets during load spikes. I've seen it happen during Black Friday sales.
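
One gotcha: the net.netfilter.* keys only exist once the conntrack module is loaded, so load it first (and persist it across reboots) before applying the settings:

# Load the connection tracking module now and on every boot
modprobe nf_conntrack
echo nf_conntrack > /etc/modules-load.d/nf_conntrack.conf

# Apply the sysctl files and confirm the new limit took effect
sysctl --system
sysctl net.netfilter.nf_conntrack_max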

Step 2: Deploying K3s

Install K3s. We disable Traefik here because we want to configure the Ingress manually later for stricter control.

curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable traefik" sh -

Verify your node is ready. It should take about 30 seconds on a CoolVDS NVMe instance. If it takes longer, check your disk I/O wait times.

k3s kubectl get node -o wide
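
If the node takes a long time to go Ready, one quick way to inspect I/O wait (assuming a Debian/Ubuntu host) is iostat from the sysstat package:

# Install sysstat for iostat/mpstat
sudo apt install -y sysstat

# Extended device stats, one report per second; %iowait and the per-device await columns
# should stay in the low single digits on NVMe
iostat -x 1 5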

Step 3: Installing OpenFaaS

We use arkade, a CLI tool that simplifies installing apps to Kubernetes. It saves you from writing 500 lines of YAML by hand.

# Install arkade
curl -sLS https://dl.get-arkade.dev | sudo sh

# Install OpenFaaS
arkade install openfaas

Once deployed, port-forward the gateway to your local machine (or expose it via a NodePort if you are behind a private firewall) to verify access.

kubectl rollout status -n openfaas deploy/gateway
kubectl port-forward -n openfaas svc/gateway 8080:8080 &
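
To deploy anything you also need faas-cli and the gateway credentials. arkade prints these instructions after the install; the short version looks like this (basic-auth and the admin user are the OpenFaaS defaults):

# Install the CLI (arkade prints where the binary lands; add it to your PATH or move it to /usr/local/bin)
arkade get faas-cli

# Fetch the generated admin password and log in against the forwarded gateway
PASSWORD=$(kubectl get secret -n openfaas basic-auth \
  -o jsonpath="{.data.basic-auth-password}" | base64 --decode)
echo -n "$PASSWORD" | faas-cli login --username admin --password-stdin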

The "Cold Start" optimization

The enemy of serverless is the cold start—the time it takes to spin up a container from zero. In a public cloud, you have zero control over this. On your own infrastructure, you can cheat.

By leveraging the NVMe storage on CoolVDS, image pull times are negligible. However, we can go further by tweaking the faas-netes configuration to keep "warm" replicas.

Pro Tip: Set com.openfaas.scale.min to 1 for critical functions. This defeats the purpose of "scale to zero" cost-savings, but ensures sub-10ms response times. For background workers (image processing, PDF generation), let it scale to zero.

Here is a sample stack.yml for a Python function that handles GDPR deletion requests:

version: 1.0
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080
functions:
  gdpr-handler:
    lang: python3
    handler: ./gdpr-handler
    image: registry.coolvds.internal/gdpr-handler:latest
    labels:
      com.openfaas.scale.min: "1"
      com.openfaas.scale.max: "15"
    environment:
      write_debug: false   # never echo request bodies in logs - they contain PII
      read_timeout: 10s
      write_timeout: 10s
    limits:
      memory: 128Mi
      cpu: 100m
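
From here, faas-cli up builds, pushes, and deploys in one step; the python3 template ships with the default template set. A minimal sequence (the JSON payload below is purely illustrative):

# Pull the official templates (includes python3), then build, push, and deploy
faas-cli template pull
faas-cli up -f stack.yml

# Smoke test via the gateway route
curl -s http://127.0.0.1:8080/function/gdpr-handler \
  -d '{"user_id": "00000000-0000-0000-0000-000000000000"}'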

Compliance & Data Sovereignty

This is where the architecture pays for itself. When you deploy this on a VPS in Norway, you define exactly where data flows. Unlike AWS, where selecting eu-north-1 changes nothing about the provider being a US entity subject to US surveillance law (the core problem Schrems II identified), a self-hosted OpenFaaS instance on CoolVDS is sovereign.

Feature         | Public Cloud FaaS            | Self-Hosted (CoolVDS)
----------------|------------------------------|---------------------------
Data Location   | Opaque (replication happens) | Strict (Oslo only)
Hardware Access | None                         | Full kernel/sysctl control
Latency to NIX  | Variable                     | < 2 ms (direct peering)
Cost at Scale   | Linear / unpredictable       | Flat rate

Monitoring with Prometheus

You cannot improve what you cannot measure. OpenFaaS comes with Prometheus built-in. Use it. Watch the gateway_function_invocation_total metric.
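
A minimal sketch of querying it, assuming the bundled Prometheus runs as the default prometheus service in the openfaas namespace:

# Expose the bundled Prometheus locally
kubectl port-forward -n openfaas svc/prometheus 9090:9090 &

# Per-function invocation rate over the last 5 minutes, broken down by HTTP status code
curl -s 'http://127.0.0.1:9090/api/v1/query' \
  --data-urlencode 'query=rate(gateway_function_invocation_total[5m])'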

If you see latency creeping up, check the CPU steal time on your VPS. This is the noisy neighbor effect I mentioned earlier. We monitor this strictly at CoolVDS to ensure our host nodes are never oversold on CPU threads, but if you are migrating from a budget provider, steal is usually the culprit behind erratic function performance.
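
Checking steal time takes seconds and settles the argument quickly:

# The "st" column is steal time; anything consistently above 1-2% means the host CPU is oversold
vmstat 1 5

# Per-core view (mpstat comes from the sysstat package installed earlier)
mpstat -P ALL 2 3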

Conclusion

Serverless is an architectural pattern, not a product you buy from Amazon. By decoupling the pattern from the provider, you gain performance, predictability, and compliance. The technology stack—K3s and OpenFaaS—is mature enough in 2021 to run production workloads without a dedicated ops team.

Don't let legal uncertainty paralyze your development. Spin up a high-frequency NVMe instance, deploy K3s, and keep your data where it belongs.