Serverless Sovereignty: Implementing Self-Hosted FaaS Patterns on Norwegian Infrastructure
The promise of serverless architecture was absolute abstraction: no servers to manage, infinite scaling, and pay-per-execution billing. But as we settle into 2021, the honeymoon with AWS Lambda and Azure Functions is ending for European CTOs. Bills are unpredictable, cold starts are real, and since the Schrems II ruling last July, sending customer data to US-owned cloud providers has become a GDPR minefield.
I recently consulted for a fintech startup in Oslo. They were bleeding money on API Gateway fees and sweating over Datatilsynet (The Norwegian Data Protection Authority) audits. Their solution wasn't to abandon the serverless pattern, but to repatriate it. By moving from public cloud functions to a self-hosted FaaS (Function as a Service) architecture on bare-metal capable VPS instances, they cut costs by 60% and secured data sovereignty.
The Case for Private Serverless on KVM
Why would a pragmatic CTO want to manage the underlying infrastructure for a "serverless" setup? It sounds counterintuitive. However, "serverless" is an operational model, not just a product. When you control the stack, you control the latency and the jurisdiction.
Running a lightweight Kubernetes distribution (like K3s) with OpenFaaS on high-performance KVM instances allows you to replicate the developer experience of Lambda without the vendor lock-in. Yet, the hardware matters. A function is only as fast as the I/O it waits for. This is where the underlying storage layer becomes critical.
Pro Tip: In a self-hosted FaaS environment, "noisy neighbors" are the enemy of consistency. Container-based virtualization (LXC/OpenVZ) often exposes your functions to CPU steal from other tenants. Always insist on KVM virtualization—standard on CoolVDS—to ensure your kernel resources are isolated.
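You can check for CPU steal yourself straight from /proc/stat; the "steal" column counts jiffies the hypervisor gave your virtual CPU to somebody else. On a properly isolated KVM slice it should sit at or very near zero:

```shell
# Field 9 of the aggregate "cpu" line is steal time (standard on modern kernels)
steal=$(awk '/^cpu /{print $9}' /proc/stat)
# Sum all columns for a total to compare against
total=$(awk '/^cpu /{t=0; for(i=2;i<=NF;i++) t+=$i; print t}' /proc/stat)
echo "steal=${steal} total=${total} jiffies"
```

Run it twice a few minutes apart; a growing steal counter under load is the smoking gun for an oversold host.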
The Architecture: K3s + OpenFaaS on CoolVDS
Let's build a production-ready FaaS stack. We will use K3s (a lightweight Kubernetes certified distro ideal for edge/VPS) and OpenFaaS. We assume you are running a standard Ubuntu 20.04 LTS instance on a CoolVDS plan with NVMe storage. Spin up time is usually under 60 seconds.
1. Kernel Tuning for High Concurrency
Before touching Kubernetes, we must prep the Linux kernel. Serverless workloads generate massive amounts of short-lived network connections. The default Linux networking stack is too conservative.
Edit your /etc/sysctl.conf to widen the ephemeral port range and allow the kernel to reuse sockets stuck in TIME_WAIT:
# /etc/sysctl.conf
net.ipv4.ip_local_port_range = 1024 65000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 15
net.core.somaxconn = 4096
net.ipv4.tcp_max_syn_backlog = 4096
vm.swappiness = 10
Apply the changes with sudo sysctl -p. On a platform with slow HDDs, no vm.swappiness value will save you; on CoolVDS NVMe drives swap is fast enough to act as a safety net, though we still prefer keeping everything in RAM.
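To confirm the values actually took effect, read them back from /proc rather than trusting the config file (these paths are standard on any modern Linux kernel):

```shell
# Live kernel values, independent of what sysctl.conf says
cat /proc/sys/net/core/somaxconn
cat /proc/sys/net/ipv4/ip_local_port_range
```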
2. Deploying the Control Plane
K3s removes the bloat: it ships containerd in place of Docker and strips out the legacy in-tree cloud providers. This is perfect for a single-node or small-cluster VPS environment.
curl -sfL https://get.k3s.io | sh -
# Verify the node is ready (takes about 30 seconds)
sudo k3s kubectl get node
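K3s writes its kubeconfig to /etc/rancher/k3s/k3s.yaml. Exporting that path lets you run plain kubectl (and tools like helm) without the k3s prefix; note the file is root-owned by default, so run as root or adjust permissions:

```shell
# Point standard tooling at the K3s API server
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
kubectl get pods -A
```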
3. Installing OpenFaaS via Arkade
In 2021, the easiest way to manage OpenFaaS is arkade, a marketplace CLI. It handles the Helm charts and service accounts for you.
# Install Arkade
curl -sLS https://dl.get-arkade.dev | sudo sh
# Install OpenFaaS with basic auth enabled
arkade install openfaas
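If you don't have the OpenFaaS CLI on the box yet, arkade can fetch that too — it downloads binaries to ~/.arkade/bin, so move it onto your PATH afterwards:

```shell
arkade get faas-cli
sudo mv "$HOME/.arkade/bin/faas-cli" /usr/local/bin/
```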
Once installed, you will need to extract the password to log in to the gateway:
PASSWORD=$(kubectl get secret -n openfaas basic-auth -o jsonpath="{.data.basic-auth-password}" | base64 --decode; echo)
export OPENFAAS_URL=http://127.0.0.1:8080
echo -n $PASSWORD | faas-cli login --username admin --password-stdin
The Economic & Legal Reality: VPS vs. Cloud
Here is where the architecture decision hits the balance sheet. A typical AWS Lambda setup charges for compute time (GB-seconds) and requests. If you have a chatty API or a background worker processing image uploads, that bill scales linearly. You also have no guarantee where that data is physically processed unless you strictly region-lock, and even then, US CLOUD Act implications remain.
| Feature | Public Cloud FaaS | Self-Hosted (CoolVDS) |
|---|---|---|
| Cost Model | Per request + GB-seconds (Unpredictable) | Flat monthly fee (Predictable) |
| Cold Starts | Variable (100ms - 2s) | Tunable (Keep-warm is free) |
| Data Sovereignty | US Jurisdiction Risks | 100% Norway / GDPR Safe |
| Disk I/O | Network Attached (EFS/S3) | Local NVMe (Extreme speed) |
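To make the "unpredictable" row concrete, here is a back-of-envelope model. The figures are illustrative 2021 Lambda list prices (roughly $0.20 per million requests and $0.0000166667 per GB-second); check current pricing before basing a budget on them:

```python
# Rough Lambda cost model - illustrative 2021 list prices, not a quote.
REQUEST_PRICE_PER_MILLION = 0.20
GB_SECOND_PRICE = 0.0000166667

def lambda_monthly_cost(invocations, avg_ms, mem_gb):
    """Estimate monthly Lambda spend for a given workload shape."""
    gb_seconds = invocations * (avg_ms / 1000.0) * mem_gb
    return (invocations / 1_000_000 * REQUEST_PRICE_PER_MILLION
            + gb_seconds * GB_SECOND_PRICE)

# A chatty API: 50M requests/month, 120 ms average, 512 MB functions.
print(round(lambda_monthly_cost(50_000_000, 120, 0.5), 2))
```

Double the traffic and the bill doubles with it; a flat-rate VPS absorbs the same growth until you actually saturate the hardware.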
Deployment Example: Python Image Processor
Let's deploy a function. Because we are on a VPS, we can utilize the local NVMe storage for temporary file processing much faster than a Lambda function trying to write to S3.
# Create a new function
faas-cli new --lang python3 image-resize
# handler.py snippet
def handle(req):
    """handle a request to the function
    Args:
        req (str): request body
    """
    return "Processed on KVM: " + req
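To actually touch the local disk, here is a minimal sketch that spools the request body to scratch space before responding. It assumes the container's /tmp sits on the node's local NVMe volume, which holds for a default single-node deployment:

```python
import os
import tempfile

def handle(req):
    """Write the request body to local scratch space,
    then report how many bytes were processed."""
    data = req.encode() if isinstance(req, str) else req
    # /tmp inside the container is backed by the node's local disk
    with tempfile.NamedTemporaryFile(dir="/tmp", delete=False) as tmp:
        tmp.write(data)
        path = tmp.name
    size = os.path.getsize(path)
    os.remove(path)
    return f"Processed {size} bytes on KVM"
```

On network-attached storage the same pattern pays a round trip per write; on local NVMe it is effectively free.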
Building this function requires Docker on your local machine or a CI/CD pipeline, pushing to a registry, and then deploying to your CoolVDS instance. With the low latency to NIX (Norwegian Internet Exchange), the round-trip time for local users is negligible.
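With Docker and registry credentials in place, the whole build-push-deploy loop is one command — faas-cli up wraps build, push, and deploy, reading the YAML file that faas-cli new generated:

```shell
# Build the image, push it to your registry, deploy to the gateway
faas-cli up -f image-resize.yml
# Smoke-test the deployed function
echo "hello" | faas-cli invoke image-resize
```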
Why Infrastructure Integrity Matters
You cannot run a reliable FaaS platform on oversold hardware. Period. When a function triggers, it demands immediate CPU cycles. If your host is stealing cycles for another tenant, your "serverless" API starts timing out.
This is why CoolVDS focuses on dedicated resource allocation within our KVM slices. We use NVMe storage not just for marketing, but because high-density container environments like Kubernetes die on slow I/O (specifically regarding etcd latency). If etcd writes take longer than 10ms, your cluster stability degrades.
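You can measure this yourself with the etcd community's standard fio check, which mimics etcd's write-ahead-log pattern of small sequential writes with an fdatasync after each one (fio must be installed; the 10ms target applies to the 99th-percentile fdatasync latency). Single-node K3s actually defaults to SQLite via kine rather than etcd, but the same fsync sensitivity applies:

```shell
# Emulate the etcd WAL: sequential 2300-byte writes, fsync after each
fio --rw=write --ioengine=sync --fdatasync=1 \
    --directory=/var/tmp --size=22m --bs=2300 --name=etcd-io-check
```

Look at the fsync/fdatasync percentiles in the output; if the 99th percentile is over 10ms, your datastore will struggle.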
By keeping the infrastructure in Norway, you satisfy the legal department. By using high-performance VPS, you satisfy the engineering team. And by paying a flat rate, you satisfy the CFO.
Next Steps
The serverless pattern is powerful, but renting it from the hyperscalers is expensive and legally complex. Take control of your stack. Deploy a K3s cluster on a CoolVDS instance today and benchmark the latency yourself. You will find that raw power, locally hosted, beats the "magic" of the cloud every time.