The "Private FaaS" Pattern: Running Serverless Architectures Without the Public Cloud Tax

The "Private FaaS" Pattern: Running Serverless Architectures Without the Public Cloud Tax

"Serverless" is a misnomer that has cost companies more money than perhaps any other buzzword in the last five years. We were promised infinite scaling and zero management. What we got were cold starts, impossible debugging sessions, and unpredictable billing that makes a CFO weep.

Don't get me wrong. The architecture of serverless—event-driven, ephemeral containers—is brilliant. It allows developers to ship code without worrying about OS patches. But the deployment model (renting functions by the millisecond from US hyperscalers) is flawed for high-throughput workloads, especially here in Europe.

Since the Schrems II ruling last year, the legal landscape in Norway has shifted dramatically. Sending customer data to a US-controlled endpoint (even one hosted in a Frankfurt region) is a compliance minefield. If you handle sensitive Norwegian data, Datatilsynet is watching.

This brings us to an architecture pattern gaining traction in late 2021: Private FaaS (Function-as-a-Service). In practice, this means running a serverless framework such as OpenFaaS or Knative on top of your own infrastructure.

The Architecture: K3s + OpenFaaS on NVMe

The goal is to replicate the developer experience of AWS Lambda but with the cost predictability and data sovereignty of a dedicated VPS. To do this effectively, we need a lightweight orchestration layer. Standard Kubernetes (k8s) is often too heavy for a single node or small cluster. We use K3s.

Why K3s? It strips away the bloat. It runs effectively on a single CoolVDS instance with 4GB RAM, whereas standard K8s would eat half that memory just for the control plane. We pair this with OpenFaaS, which provides the API gateway and function watchdog.

Step 1: The Foundation

Latency kills serverless. If you are serving users in Oslo, your metal needs to be in Oslo. Speed of light is not negotiable. We provision a CoolVDS instance running Ubuntu 20.04 LTS. We specifically choose the NVMe storage tier because FaaS workloads are I/O heavy—constantly pulling images and writing logs.

First, we harden the node and install K3s. Don't use the default setup; we need to optimize for performance:

# Install K3s without Traefik (we will use OpenFaaS gateway)
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable traefik" sh -

# Verify the node is ready (should take < 30 seconds on CoolVDS NVMe)
sudo k3s kubectl get node

Step 2: Deploying the Serverless Framework

In 2021, the easiest way to manage OpenFaaS is via arkade, a marketplace CLI for Kubernetes apps. It handles the Helm charts for us.

# Install arkade
curl -sLS https://get.arkade.dev | sudo sh

# Deploy OpenFaaS to the K3s cluster
arkade install openfaas

# Wait for the rollout (using K3s' bundled kubectl)
sudo k3s kubectl rollout status -n openfaas deploy/gateway

At this point, you have a functional serverless platform running entirely on your own terms. No data leaves the server. No per-invocation billing shock.
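For a first smoke test, the login-and-deploy flow looks roughly like this. This is a sketch: it assumes faas-cli is installed (arkade can fetch it with arkade get faas-cli) and uses the figlet sample from the OpenFaaS function store.

```shell
# Forward the gateway locally (or expose it through your own ingress)
sudo k3s kubectl port-forward -n openfaas svc/gateway 8080:8080 &

# Fetch the generated admin password and log in
PASSWORD=$(sudo k3s kubectl get secret -n openfaas basic-auth \
  -o jsonpath="{.data.basic-auth-password}" | base64 --decode)
echo -n "$PASSWORD" | faas-cli login --username admin --password-stdin

# Deploy a sample function from the store and invoke it
faas-cli store deploy figlet
echo "CoolVDS" | faas-cli invoke figlet
```

If the figlet banner comes back, the gateway, watchdog, and scheduler are all wired up correctly.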

Optimizing for High Load

The default settings for Linux kernels are generally tuned for long-running processes, not thousands of ephemeral containers spawning and dying every minute. If you try to run high-concurrency workloads without tuning, you will hit limits.

We need to adjust sysctl.conf to handle the network stack and file descriptors required by the function containers.

# /etc/sysctl.d/99-serverless.conf

# Increase the range of ephemeral ports
net.ipv4.ip_local_port_range = 1024 65535

# Allow more connections to be handled
net.core.somaxconn = 4096

# Reuse sockets in TIME_WAIT for new outbound connections
# (essential for high HTTP churn; this is tcp_tw_reuse, not the removed tcp_tw_recycle)
net.ipv4.tcp_tw_reuse = 1

# Increase max open files for the container runtime
fs.file-max = 2097152

Apply these with sysctl -p /etc/sysctl.d/99-serverless.conf. On a standard VPS provider, you might face "noisy neighbor" issues where CPU steal time affects your function execution time. This is where the underlying hypervisor matters. At CoolVDS, we enforce strict KVM isolation, meaning your CPU cycles are yours. In a FaaS environment, CPU steal is the enemy of consistent latency.
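You can check for steal on any Linux guest yourself: the "st" column in top, or the ninth field of the aggregate cpu line in /proc/stat, counts ticks the hypervisor took away from your VM. A quick one-liner:

```shell
# Print cumulative CPU steal ticks since boot (field 9 of the "cpu" line in /proc/stat).
# A value that keeps climbing under load indicates a contended hypervisor.
awk '/^cpu /{print "steal ticks:", $9}' /proc/stat
```

On a properly isolated KVM instance this number should stay essentially flat even at peak load.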

Pro Tip: If you are using Node.js 14 functions, set the `exec_timeout` in your OpenFaaS YAML to a lower value than your gateway timeout to catch hanging promises before the orchestrator kills the pod forcibly.
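In a stack.yml that looks roughly like the following (the function name, image, and exact timeout values are illustrative; pick an exec_timeout below whatever upstream timeout your gateway is configured with):

```yaml
# stack.yml (sketch; names and values are placeholders)
version: 1.0
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080
functions:
  checkout-api:
    lang: node14
    handler: ./checkout-api
    image: registry.example.com/checkout-api:latest
    environment:
      read_timeout: "25s"
      write_timeout: "25s"
      exec_timeout: "20s"   # shorter than the gateway timeout, so hung promises fail cleanly
```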

The Economic Argument

Let's do the math. A typical serverless bill for a mid-sized e-commerce API (5 million requests/month, 512MB RAM, 200ms duration) on public cloud can easily run into hundreds of Euros once you add API Gateway fees, NAT Gateway charges (the hidden killer), and data egress.
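To make that concrete, here is a back-of-envelope calculation. Every rate below is an illustrative 2021-era list price, and the 50 KB average response size is an assumption; NAT gateway hourly fees, per-GB data processing, and log ingestion all come on top.

```shell
awk 'BEGIN {
  req = 5000000; dur = 0.2; mem = 0.5        # 5M requests, 200 ms, 512 MB
  gbs      = req * dur * mem                 # GB-seconds of compute
  compute  = gbs * 0.0000166667              # assumed per-GB-second rate
  requests = (req / 1000000) * 0.20          # assumed per-million request fee
  gateway  = (req / 1000000) * 3.50          # assumed REST API gateway fee
  egress   = (req * 50 / 1048576) * 0.09     # assumed 50 KB avg response at $0.09/GB
  printf "compute %.2f + requests %.2f + gateway %.2f + egress %.2f = %.2f/month\n",
         compute, requests, gateway, egress, compute + requests + gateway + egress
}'
```

Note that egress and gateway fees already dwarf the compute itself; add a NAT gateway and provisioned concurrency to tame cold starts, and the bill climbs fast, while a fixed-price instance costs the same whether you serve five million requests or fifty.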

Comparison:

Feature            Public Cloud FaaS             CoolVDS Private FaaS
Compute cost       Per ms (variable)             Fixed monthly
Egress cost        Expensive ($0.09/GB+)         Included / low
Cold starts        Common (unless provisioned)   Zero (hot pods)
Data sovereignty   Unclear (US CLOUD Act)        100% Norway

The Hybrid Approach: "The Mullet Architecture"

You don't have to be a purist. The most robust pattern we see among successful DevOps teams in Oslo is the hybrid approach.

They use a public CDN (like Cloudflare) for the static frontend assets. But the API calls—the logic that touches the database and user data—route directly to a CoolVDS instance running the OpenFaaS stack described above. The database sits on the same private network (or localhost for smaller setups), communicating over a socket file rather than a network port.

This creates a system that feels like a modern JAMstack app but operates with the security profile of a traditional on-premise server. You get the git push deployment workflow via OpenFaaS build hooks, but you own the metal.

Conclusion

Serverless is not about getting rid of servers. It's about getting rid of manual server management. By using tools like K3s and OpenFaaS, you can build a platform that automates the scaling of your application code while retaining the performance of raw NVMe storage and the legal safety of Norwegian hosting.

Stop paying a premium for the privilege of losing control over your infrastructure. Deploy your own Private FaaS cluster today.

Ready to build? Spin up a high-performance KVM instance on CoolVDS in Oslo. Low latency, NVMe speeds, and no hidden costs.