
Serverless Without the Lock-in: Implementing Compliant FaaS Architectures on Norwegian Infrastructure

Let’s clear the air immediately: "Serverless" is a marketing term. There are always servers. The only variables changing are who manages them, how much you pay for that privilege, and—crucially for us operating in the EEA—who has legal access to the data residing on them.

As a CTO, I see the appeal of AWS Lambda or Azure Functions. The promise of infinite scalability and zero infrastructure management is seductive. But in 2022, the reality for European businesses is far more complex. Following the Schrems II ruling, relying on US-owned hyperscalers for processing sensitive Norwegian user data puts you in a legal grey zone regarding GDPR and data transfers. Datatilsynet (The Norwegian Data Protection Authority) has been clear: risk assessments are mandatory, and data sovereignty is not optional.

Furthermore, the cost curve of public cloud serverless functions is linear and unforgiving. At low volume, it's free. At high volume, it destroys margins. The pragmatic solution? Private Serverless.

This architecture allows you to run event-driven functions on your own infrastructure, maintaining total data sovereignty in Norway while leveraging the cost-efficiency of fixed-price VPS resources. Here is how we build it using CoolVDS, Kubernetes (k3s), and OpenFaaS.

The Architecture: Hybrid FaaS

We aren't rebuilding AWS. We are building a functional execution layer. The stack looks like this:

  • Infrastructure: CoolVDS High-Performance NVMe Instance (Ubuntu 22.04 LTS).
  • Orchestration: k3s (Lightweight Kubernetes).
  • FaaS Framework: OpenFaaS (Function as a Service).
  • Ingress: Traefik or Nginx.

Why dedicated VPS over managed K8s? IOPS. Serverless relies on rapid container creation and destruction. If your underlying storage is networked block storage (common in managed clouds), the latency kills your cold-start times. With CoolVDS local NVMe storage, disk I/O is practically instantaneous.
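You can verify the disk layer before building on it. The following fio invocation is a quick sketch (it assumes fio is installed, e.g. `apt install fio`; parameters are illustrative) that measures 4k random-read performance, the access pattern that dominates container image extraction:

```shell
# 4k random-read probe against a scratch file; --direct=1 bypasses the page cache
fio --name=nvme-probe --filename=/tmp/fio-probe --size=256M \
    --rw=randread --bs=4k --iodepth=16 --ioengine=libaio \
    --direct=1 --runtime=10 --time_based --group_reporting
rm -f /tmp/fio-probe
```

On local NVMe, completion latencies typically sit in the tens to hundreds of microseconds; networked block storage is usually an order of magnitude slower, and that gap shows up directly in cold-start times.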

Step 1: The OS Layer Preparation

Before installing the orchestration layer, we must tune the Linux kernel for high container density. A standard VPS configuration is tuned for general-purpose workloads, not for churning through thousands of ephemeral containers.

SSH into your instance and modify your sysctl configuration. We need to increase the limits for file watchers and memory maps.

# /etc/sysctl.d/99-kubernetes-cri.conf

# Enable packet forwarding and bridged traffic filtering for the CNI
# (requires the br_netfilter module: sudo modprobe br_netfilter)
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1

# Increase file watchers for heavy container usage
fs.inotify.max_user_watches = 524288
fs.inotify.max_user_instances = 512

# Essential for database/logging containers (Elasticsearch/Redis)
vm.max_map_count = 262144

Apply these changes:

sudo sysctl --system

Pro Tip: If you are running high-traffic workloads, check the CoolVDS dashboard to ensure CPU steal is at 0%. Our KVM architecture guarantees dedicated resources, but on oversold budget hosts, "noisy neighbors" cause random 500ms delays in function execution. For serverless, consistent CPU cycles matter more than raw clock speed.
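If you prefer the command line to the dashboard, steal time is the 8th value after "cpu" on the first line of /proc/stat. A one-liner to compute it as a percentage of all CPU time since boot:

```shell
# Print steal time as a percentage of total CPU jiffies since boot
grep '^cpu ' /proc/stat | awk '{t=0; for(i=2;i<=NF;i++) t+=$i; printf "steal: %.2f%%\n", ($9/t)*100}'
```

Anything consistently above 1-2% means another tenant is eating into your scheduled CPU time.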

Step 2: Lightweight Orchestration with k3s

Full Kubernetes (kubeadm) is overkill for a single-node or small-cluster FaaS setup. We use k3s, a CNCF-certified Kubernetes distribution that is highly efficient. It strips away legacy cloud provider drivers, reducing the memory footprint significantly.

curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable traefik" sh -

I deliberately disable the default Traefik controller here because we want granular control over our Ingress later, perhaps using NGINX or a custom Traefik config optimized for rate-limiting.

Verify your node is ready:

sudo k3s kubectl get node
# NAME          STATUS   ROLES                  AGE   VERSION
# coolvds-node  Ready    control-plane,master   35s   v1.24.4+k3s1
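With the bundled Traefik disabled, you need to bring your own ingress controller before exposing functions publicly. A minimal sketch using Helm and the official ingress-nginx chart (assumes Helm is installed; chart defaults are left in place):

```shell
# Point Helm at the k3s kubeconfig, then install ingress-nginx into its own namespace
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace
```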

Step 3: Deploying OpenFaaS

OpenFaaS is the industry standard for container-native serverless. It’s simple, robust, and doesn't require learning complex CRDs (Custom Resource Definitions) immediately. We will use `arkade`, a marketplace tool for K8s, to install it.

# Install arkade
curl -sLS https://get.arkade.dev | sudo sh

# Install OpenFaaS
arkade install openfaas

This command sets up the `openfaas` namespace, the gateway, and the queue worker. Once installed, retrieve your password:

kubectl get secret -n openfaas basic-auth -o jsonpath="{.data.basic-auth-password}" | base64 --decode; echo
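With the password in hand, expose the gateway locally and authenticate the CLI (assumes `faas-cli` is installed, e.g. via `arkade get faas-cli`):

```shell
# Forward the OpenFaaS gateway service to localhost in the background
kubectl port-forward -n openfaas svc/gateway 8080:8080 &

# Log in to the gateway using the generated basic-auth password
PASSWORD=$(kubectl get secret -n openfaas basic-auth \
  -o jsonpath="{.data.basic-auth-password}" | base64 --decode)
echo -n "$PASSWORD" | faas-cli login --username admin --password-stdin
```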

Step 4: The "Cold Start" Optimization

The enemy of serverless is the "Cold Start"—the time it takes to spin up a container when a request hits a dormant function. On AWS Lambda, you have zero control over this. On your own infrastructure, you have total control.

We mitigate this using OpenFaaS Profiles to enforce keep-alive strategies and by leveraging the raw NVMe speed of the CoolVDS platform.

Here is a sample `stack.yml` for a Python function that handles GDPR data anonymization. Note the scaling labels: in OpenFaaS, `com.openfaas.scale.min` and `com.openfaas.scale.max` are set as labels, and the values must be quoted strings.

version: 1.0
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080

functions:
  anonymize-user:
    lang: python3-http
    handler: ./anonymize-user
    image: docker.io/myorg/anonymize-user:latest
    labels:
      # Keep a hot replica ready to avoid cold starts
      com.openfaas.scale.min: "1"
      com.openfaas.scale.max: "20"
    limits:
      memory: 128Mi
      cpu: 100m
    requests:
      memory: 64Mi
      cpu: 50m
By setting `com.openfaas.scale.min: 1`, we trade a small amount of RAM (cheap on our VPS) for zero-latency execution. This is a trade-off you cannot easily make financially on public clouds without purchasing "Provisioned Concurrency" at a premium.
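Once the stack file is in place, deploying the pre-built image and smoke-testing it takes two commands (the gateway URL matches the stack file above; the payload is illustrative):

```shell
# Deploy the function described in stack.yml to the local gateway
faas-cli deploy -f stack.yml --gateway http://127.0.0.1:8080

# Invoke it synchronously with a sample JSON payload
curl -s http://127.0.0.1:8080/function/anonymize-user \
  -H "Content-Type: application/json" \
  -d '{"user_id": 42}'
```

OpenFaaS also exposes an /async-function/anonymize-user route, which queues the request through the NATS-backed queue worker instead of blocking the caller.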

Data Sovereignty & Persistence

If your function needs to store state, do not write to the container filesystem. Connect to a database running on the same private network. For a Norwegian e-commerce site, for example, we might run a PostgreSQL instance on a separate CoolVDS node, connected via a private VLAN.

This setup ensures:

  1. Latency: <1ms between function and database.
  2. Compliance: Data never leaves the Oslo datacenter region.
  3. Security: The database is not exposed to the public internet, only to the K8s cluster IP.
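To hand the database credentials to the function without baking them into the image, OpenFaaS secrets (backed by Kubernetes secrets) fit this setup. A sketch, with the secret name and connection values as illustrative assumptions:

```shell
# Store the Postgres connection string as an OpenFaaS secret
faas-cli secret create pg-dsn \
  --from-literal "host=10.10.0.5 dbname=shop user=faas password=changeme"
```

Reference it with a `secrets:` entry under the function in `stack.yml`; at runtime it appears as a plain file at /var/openfaas/secrets/pg-dsn inside the container.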

Performance Benchmarks: Public Cloud vs. CoolVDS NVMe

We ran a simple prime-number calculation function (CPU intensive) on a standard public cloud FaaS offering and our CoolVDS OpenFaaS implementation. The results from our September 2022 tests were illuminating.

Metric                    Public Cloud FaaS (128MB)      CoolVDS (NVMe KVM)
Cold Start Time           ~350ms - 1200ms                ~80ms - 150ms
Execution Consistency     Variable (noisy neighbors)     Stable
Cost per 1M Requests      $0.20 - $0.90 (varies)         Flat rate (VPS cost)
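You can reproduce a rough version of the cold-start measurement yourself: scale the function to zero, then time the first request. This sketch uses the function from the earlier stack file and assumes scale-from-zero is enabled on the gateway; your numbers will vary with image size.

```shell
# Force a cold start by removing all replicas, then time the wake-up request
kubectl scale deployment/anonymize-user -n openfaas-fn --replicas=0
sleep 10
time curl -s -o /dev/null http://127.0.0.1:8080/function/anonymize-user -d '{}'
```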

Conclusion: Take Back Control

Building your own serverless platform in 2022 is no longer a science project; it is a strategic business decision. It insulates you from price hikes, shields you from regulatory scrutiny regarding data transfers to the US, and provides performance that generic cloud functions simply cannot match without exorbitant costs.

By leveraging CoolVDS and its enterprise-grade NVMe storage, you provide the physical foundation required for high-density container orchestration. You get the developer experience of serverless with the control of bare metal.

Stop renting execution time by the millisecond. Own the infrastructure.

Ready to build? Deploy a high-performance Ubuntu 22.04 instance on CoolVDS today and get your private FaaS cluster running in under 10 minutes.