Serverless is Just Someone Else's Computer (And They Have Your Data)
Let’s cut through the marketing fog. "Serverless" is a brilliant operational model, but a terrible financial and legal trap if you blindly sign up for the big US public clouds. As of mid-2020, with the Schrems II ruling effectively dismantling the Privacy Shield, sending Norwegian customer data to US-owned endpoints (like AWS Lambda or Azure Functions) has become a compliance minefield for any CTO listening to their legal counsel.
I recently audited a fintech startup in Oslo. They were bleeding money on AWS Lambda invocations for a simple data transformation task, and their legal team was in a panic about data sovereignty. The solution wasn't to stop using functions; it was to stop renting them by the millisecond at a 400% markup.
If you are serious about performance and compliance, the architecture pattern for 2021 is Self-Hosted FaaS (Function-as-a-Service). You get the developer velocity of serverless with the raw I/O performance of local NVMe storage and the legal safety of hosting on Norwegian soil.
The Architecture: K3s + OpenFaaS on KVM
We don't need the bloat of full Kubernetes for this. We need lightweight orchestration. The stack effectively looks like this:
- Infrastructure: CoolVDS KVM Instance (Ubuntu 20.04 LTS).
- Orchestrator: K3s (Lightweight Kubernetes from Rancher).
- FaaS Framework: OpenFaaS (The standard for containerized functions).
- Ingress: Traefik (Bundled with K3s).
Step 1: Kernel Tuning for High Concurrency
Before installing anything, we need to prep the OS. Serverless workloads are bursty. They spawn thousands of short-lived connections. Default Linux settings will choke under this load. I've seen connection tracking tables overflow during Black Friday traffic, causing the server to simply drop packets.
On your CoolVDS instance, edit /etc/sysctl.conf. We need to bump the file descriptors and connection tracking limits. Don't ignore this.
# /etc/sysctl.conf optimizations for high-load FaaS
fs.file-max = 2097152
net.core.somaxconn = 32768
net.ipv4.tcp_max_syn_backlog = 16384
net.netfilter.nf_conntrack_max = 262144
net.ipv4.tcp_tw_reuse = 1
Apply it:
sudo sysctl -p
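To confirm the new limits actually took effect, read them back. One caveat: `net.netfilter.nf_conntrack_max` only exists once the conntrack kernel module has been loaded (the first NAT rule K3s installs will pull it in), so a fresh box may not show it yet:

```shell
# Read back the tuned values -- they should match /etc/sysctl.conf
sysctl -n fs.file-max
sysctl -n net.core.somaxconn
sysctl -n net.ipv4.tcp_max_syn_backlog

# nf_conntrack_max is only visible after the conntrack module loads,
# hence the fallback message on a freshly provisioned instance
sysctl -n net.netfilter.nf_conntrack_max 2>/dev/null \
  || echo "conntrack module not loaded yet"
```

If the conntrack line falls back, re-run `sysctl -p` after K3s is up and the value will stick.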
Step 2: Deploying K3s
K3s ships as a single binary, usually under 100 MB. It strips out the legacy in-tree cloud providers and storage drivers you don't need on a VPS, and it installs in seconds.
curl -sfL https://get.k3s.io | sh -
Once installed, verify your node is ready. If you are using CoolVDS, the high-speed internal network usually means your node status flips to `Ready` almost instantly due to low I/O wait times.
sudo k3s kubectl get node
# NAME STATUS ROLES AGE VERSION
# coolvds-01 Ready master 25s v1.20.0+k3s2
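K3s writes its kubeconfig to a non-standard path, so plain `kubectl`, `arkade`, and `faas-cli` won't find the cluster by default. A small quality-of-life step before installing anything on top (the path below is the K3s default):

```shell
# Point kubectl (and arkade/faas-cli later) at the k3s cluster.
# /etc/rancher/k3s/k3s.yaml is root-only by default; copy it out
# rather than loosening its permissions.
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown "$(id -u):$(id -g)" ~/.kube/config
export KUBECONFIG=~/.kube/config
kubectl get nodes
```

Add the `export` line to your shell profile so every new session talks to the cluster without `sudo k3s kubectl`.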
Step 3: Installing OpenFaaS
We use `arkade`, a CLI tool that simplifies installing apps to Kubernetes. It manages the Helm charts for us.
# Get arkade
curl -sLS https://dl.get-arkade.dev | sudo sh
# Install OpenFaaS
arkade install openfaas
Pro Tip: When running on a VPS, secure the OpenFaaS gateway immediately. The default install exposes it as a NodePort (31112) on every node IP. During development, reach it via `k3s kubectl port-forward` over an SSH tunnel rather than opening port 8080 (or 31112) to the public internet without a firewall rule.
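In practice, that tunnel-first workflow looks something like this. The `openfaas` namespace and `basic-auth` secret name are the arkade defaults; verify them on your install:

```shell
# Tunnel the gateway to localhost instead of exposing the NodePort
kubectl -n openfaas port-forward svc/gateway 8080:8080 &

# arkade generates an admin password and stores it in a secret
PASSWORD=$(kubectl -n openfaas get secret basic-auth \
  -o jsonpath="{.data.basic-auth-password}" | base64 --decode)

# Log the CLI in over the tunnel
echo -n "$PASSWORD" | faas-cli login --username admin --password-stdin
```

From here, `faas-cli` commands default to the tunneled gateway at `127.0.0.1:8080`.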
The Cold Start Myth & NVMe Reality
The biggest complaint with Lambda is the "cold start"—the latency incurred when a function scales from zero to one. On public cloud, you have no control over this. You are at the mercy of their scheduler and the neighbors on that physical host.
On a KVM VPS with NVMe storage, this dynamic changes. Since you own the resources, you can keep "watchdog" containers warm without paying extra per second. Furthermore, pulling container images from the local cache on an NVMe drive is orders of magnitude faster than fetching from network storage on a congested public cloud region.
Benchmark: Node.js 12 Function
Let's look at a standard `node12` function. Here is the `handler.js`:
'use strict'

module.exports = async (event, context) => {
  const result = {
    status: 'received',
    message: 'Hello from Oslo',
    input: event.body
  }

  return context
    .status(200)
    .succeed(result)
}
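Before pushing anything to the cluster, you can sanity-check the handler logic locally. The mock below is a hypothetical stand-in for the `context` object the node12 template injects; the real template's context offers the same chainable `status()`/`succeed()` surface, but this harness is ours, not OpenFaaS code:

```javascript
'use strict'

// Same handler as above, inlined so this file runs standalone
const handler = async (event, context) => {
  const result = {
    status: 'received',
    message: 'Hello from Oslo',
    input: event.body
  }
  return context.status(200).succeed(result)
}

// Hypothetical mock of the node12 template's context object
class MockContext {
  status (code) {
    this.statusCode = code
    return this
  }

  succeed (value) {
    this.result = value
    return this
  }
}

async function main () {
  const ctx = await handler({ body: '{"name":"nix"}' }, new MockContext())
  console.log(ctx.statusCode, JSON.stringify(ctx.result))
}

main()
```

Run it with `node test-local.js`; if the shape of the response is wrong, you find out in milliseconds instead of after a build-and-deploy cycle.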
And the `stack.yml` configuration:
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080

functions:
  oslo-echo:
    lang: node12
    handler: ./oslo-echo
    image: oslo-echo:latest
    labels:
      com.openfaas.scale.min: "1"   # Keep one warm!
      com.openfaas.scale.max: "15"
By setting `com.openfaas.scale.min: 1`, we eliminate the cold start entirely for the first concurrent user. For subsequent scaling, the containerd runtime on our Ubuntu host spins up new replicas in milliseconds because the IOPS on the underlying storage aren't being throttled.
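Deploying and measuring the warm path is then one command each. `faas-cli up` builds, pushes and deploys in a single step; on a single node you will likely need a local registry (or an `image:` reference your containerd can actually pull), and the URL below assumes the port-forward from earlier:

```shell
# Build, push and deploy the function described in stack.yml
faas-cli up -f stack.yml

# Time a warm invocation end-to-end through the gateway
curl -s -o /dev/null \
  -w "HTTP %{http_code} in %{time_total}s\n" \
  -d '{"ping":"pong"}' \
  http://127.0.0.1:8080/function/oslo-echo
```

Run the `curl` line a few times: the first hit may include scheduling overhead, while subsequent hits show the steady-state latency your warm replica delivers.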
Data Sovereignty: The Norwegian Advantage
Technical architecture does not exist in a vacuum; it exists within a legal framework. Since Datatilsynet (the Norwegian Data Protection Authority) began enforcing stricter interpretations of the GDPR post-Schrems II, the physical location of your server largely determines your liability.
| Feature | US Public Cloud (Lambda/Functions) | CoolVDS (Self-Hosted OpenFaaS) |
|---|---|---|
| Data Residency | Replicated (US Access Possible) | Strictly Norway (Oslo) |
| Cost Model | Per-invocation (Unpredictable) | Fixed Monthly (Predictable) |
| Latency to NIX | Variable (routed via Sweden/UK) | < 2ms |
| Hardware | Shared, Opaque | Dedicated KVM Resources |
Conclusion
Serverless is not about getting rid of servers. It is about abstracting the application logic from the infrastructure. But someone still has to manage that infrastructure. If you hand that responsibility to Amazon or Google, you are also handing them your wallet and your compliance posture.
Building your own FaaS platform on K3s is not just a "hacker" project anymore; it is a viable enterprise pattern for 2021. It gives you the sub-millisecond latency required for high-frequency trading or real-time gaming backends, specifically when hosted in a datacenter directly peering with the Norwegian Internet Exchange (NIX).
Stop paying a premium for "serverless" marketing. Provision a high-frequency NVMe instance, deploy OpenFaaS, and own your stack.
Ready to build? Deploy a KVM instance with CoolVDS in Oslo today and get root access in under 60 seconds.