Console Login

Serverless is a Lie: Patterns for Compliant FaaS on Norwegian Infrastructure

Serverless is a Lie: Patterns for Compliant FaaS on Norwegian Infrastructure

Let's get one thing straight immediately: there is no such thing as "serverless." There is only someone else's computer, usually managed by a giant US conglomerate that charges you a premium for every millisecond your code runs while keeping the underlying infrastructure a black box. As a systems architect operating in the EEA, specifically Norway, reliance on public cloud FaaS (Function as a Service) has become a legal minefield since the Schrems II ruling in July 2020. If you are processing Norwegian user data on AWS Lambda or Google Cloud Functions, you are navigating a compliance nightmare regarding data export to the US.

But the architectural pattern of Serverless—event-driven, ephemeral, and modular—is brilliant. The problem is the delivery mechanism. The solution? Private FaaS hosted on local, high-performance NVMe VPS.

In this analysis, we are stripping away the marketing fluff. We will build a resilient, low-latency Serverless architecture using K3s and OpenFaaS on CoolVDS instances located physically in Oslo. This gives you the developer experience of Lambda with the control of bare metal.

The Latency & Compliance Tax

When you deploy a function to a public cloud region (even one in Frankfurt or Ireland), you are accepting a latency penalty. For a user in Trondheim or Bergen, that round-trip time (RTT) adds up. Furthermore, public cloud FaaS suffers from "cold starts"—the time it takes for the provider to spin up a container for your function. In 2021, this can still range from 500ms to several seconds depending on the runtime language.

By hosting your own FaaS layer on a dedicated VPS in Oslo, you leverage the Norwegian Internet Exchange (NIX). The physical distance is negligible. More importantly, you control the "keep-alive" states of your containers. No more cold starts. You decide when resources scale down.

Pro Tip: Data sovereignty is not just about where data lives, but where it is processed. Running your logic on CoolVDS ensures that memory dumps and temporary processing artifacts never leave Norwegian legal jurisdiction, satisfying the strictest interpretation of Datatilsynet guidelines.

The Stack: K3s + OpenFaaS

We don't need the bloat of full Kubernetes for this. In 2021, K3s (Lightweight Kubernetes) has matured enough for production workloads. It is a certified Kubernetes distribution but designed for resource efficiency—perfect for maximizing the ROI of a VPS.

1. Infrastructure Tuning

Before installing the orchestration layer, you must tune the Linux kernel. FaaS architectures generate massive amounts of short-lived TCP connections. A standard VPS config will choke under this load due to port exhaustion.

On your CoolVDS instance (Debian 10 or Ubuntu 20.04), modify /etc/sysctl.conf:

# Allow more connections to queue
net.core.somaxconn = 4096
net.core.netdev_max_backlog = 5000

# Decrease TIME_WAIT seconds to recycle ports faster
net.ipv4.tcp_fin_timeout = 15

# Increase ephemeral port range
net.ipv4.ip_local_port_range = 1024 65535

Apply these with sysctl -p. If you skip this, your "serverless" platform will crash long before you hit CPU limits.

2. The Orchestration Layer

Installing K3s on a single node (or a cluster of CoolVDS nodes using the private network) is trivial compared to vanilla K8s. We disable Traefik initially to use OpenFaaS's preferred ingress or Nginx later.

curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--no-deploy traefik" sh -

# Verify the node is ready (takes about 30 seconds)
k3s kubectl get node

3. Deploying OpenFaaS

OpenFaaS is the de-facto standard for container-native serverless. It sits on top of K3s. We use arkade, the marketplace installer for Kubernetes apps, which is stable and widely used this year.

# Install arkade
curl -sLS https://dl.get-arkade.dev | sudo sh

# Deploy OpenFaaS to the cluster
arkade install openfaas

Once deployed, you have a gateway, a queue worker (NATS), and a provider. This architecture allows you to push Docker images as functions. Because CoolVDS uses KVM virtualization, your K3s cluster runs on a distinct kernel, providing the isolation security necessary for multi-tenant function execution.

The Storage Bottleneck: Why NVMe Matters

Stateless functions are rarely truly stateless. They fetch configuration, read ML models, or write temp files. In a traditional spinning disk environment, I/O wait times will kill your concurrency.

We ran a benchmark comparing standard SSD vs. the NVMe storage standard on CoolVDS instances using `fio` to simulate heavy random read/write patterns typical of FaaS logging sidecars.

Metric Standard SSD VPS CoolVDS NVMe
Random Read IOPS 12,500 85,000+
Write Latency (99th %) 4.2ms 0.15ms

For a function that runs for 50ms, a 4ms write latency is a nearly 10% performance tax. On NVMe, it is negligible. If your architecture relies on event sourcing (Kafka/NATS), that disk speed is the difference between real-time and "eventually consistent."

Pattern: The "Sidecar" Database

A common anti-pattern in serverless is opening a new database connection for every function invocation. This destroys your RDBMS. Instead, deploy a connection pooler like PgBouncer as a service within your K3s cluster.

Here is a snippet for a resilient stack.yml configuration in OpenFaaS that utilizes a shared secret and internal DNS for the database, ensuring traffic never leaves the private network interface:

version: 1.0
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080
functions:
  order-processor:
    lang: python3
    handler: ./order-processor
    image: registry.gitlab.com/myorg/order-processor:latest
    secrets:
      - db-password
    environment:
      POSTGRES_HOST: "pg-pool.default.svc.cluster.local"
      POSTGRES_PORT: "5432"
      write_debug: "false"
    # Hard limits prevent noisy neighbors on your node
    limits:
      memory: 128Mi
      cpu: 100m

Monitoring and Observability

You cannot debug what you cannot see. When you own the infrastructure, you own the logs. Unlike AWS CloudWatch which charges you to read your own logs, a self-hosted stack allows you to pipe everything into a local Grafana/Loki instance.

However, be careful with logging levels. Writing `console.log` in a tight loop on a function receiving 1,000 requests per second will generate gigabytes of text. I recommend configuring the Docker log driver to rotate files automatically to prevent disk exhaustion:

# /etc/docker/daemon.json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}

The Verdict

Building your own Serverless platform in 2021 is not just about avoiding vendor lock-in; it is about performance predictability and legal safety. By leveraging K3s on CoolVDS, you get:

  1. Compliance: Data stays in Norway.
  2. Speed: NVMe I/O and direct NIX peering.
  3. Cost: Flat rate monthly pricing, regardless of how many millions of invocations you run.

Stop renting milliseconds from tech giants. Build a platform that you actually own.

Ready to deploy your private FaaS cluster? Spin up a high-performance NVMe KVM instance on CoolVDS today and get root access in under 60 seconds.