The "Serverless" Lie: Why Your Event-Driven Architecture Needs Iron, Not Just Code
Let's cut through the marketing fluff. "Serverless" is a misnomer that has tricked a generation of developers into thinking infrastructure doesn't matter. It matters more than ever. When you deploy a function to a hyperscaler in Frankfurt, you aren't removing operations; you're just outsourcing them to a black box that charges you a premium for the privilege of variable latency.
I've debugged enough production outages to know that the "No Ops" promise is a myth. I recall a project last year for a fintech startup in Bergen. They went all-in on AWS Lambda. It worked fine until their traffic spiked during a marketing campaign. Their bill didn't just scale linearly; it went exponential due to API Gateway integration costs, and their "warm" functions were still hitting 200ms latency penalties for users in Northern Norway.
For Nordic developers facing strict GDPR requirements and the hard physics of the speed of light, there is a better way. We call it Sovereign Serverless. It's about running FaaS (Function as a Service) patterns on your own infrastructure.
The Architecture: OpenFaaS on K3s
Why run your own? Control. When you control the node, you control the nice values, the I/O schedulers, and exactly where the data lives (crucial for Datatilsynet compliance). We will use K3s (a lightweight Kubernetes) and OpenFaaS. This stack allows you to deploy functions with Docker containers while retaining the event-driven benefits, without the vendor lock-in.
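To make that control concrete, here is the kind of node-level knob a hyperscaler never exposes. A quick sketch; the device name nvme0n1 is an assumption, substitute your own.
# Inspect the active I/O scheduler (device name is an assumption)
cat /sys/block/nvme0n1/queue/scheduler
# "none" usually wins on NVMe: the drive's internal queues beat kernel reordering
echo none | sudo tee /sys/block/nvme0n1/queue/scheduler
# Run housekeeping jobs at the lowest priority so they never starve the FaaS gateway
nice -n 19 tar -czf /tmp/backup.tar.gz /var/lib/rancher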
Step 1: The Foundation
First, you need a machine that doesn't steal CPU cycles. On shared hosting, your neighbor's hacked WordPress site kills your function's cold start time. This is why we use CoolVDS instances with dedicated KVM slicing. The underlying NVMe storage is critical here: pulling container images for a new function needs to happen in milliseconds.
Here is the initialization for a standard K3s cluster on a CoolVDS node running Ubuntu 22.04 LTS:
# Install K3s without the Traefik ingress (we will use Nginx or Arkade)
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable=traefik" sh -
# Verify the node is ready (CoolVDS nodes usually ready in <15s)
sudo k3s kubectl get node
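One follow-up step so kubectl works for your regular user. K3s writes its kubeconfig to a fixed default path:
# Copy the K3s-generated kubeconfig for non-root use
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $(id -u):$(id -g) ~/.kube/config
export KUBECONFIG=~/.kube/config
kubectl get nodes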
Step 2: Deploying the FaaS Framework
We use arkade, a CLI tool that simplifies Kubernetes app installation. It's cleaner than managing raw Helm charts manually for this setup.
# Install Arkade
curl -sLS https://dl.get-arkade.dev | sudo sh
# Deploy OpenFaaS with basic auth enabled
arkade install openfaas \
--load-balancer
# Check the roll-out status
kubectl -n openfaas rollout status deploy/gateway
Once the gateway is up, you have a private serverless endpoint in Oslo. No data leaves the country. Your latency to the NIX (Norwegian Internet Exchange) is practically zero.
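Before deploying anything, grab the generated admin password and authenticate the CLI. A sketch using the standard OpenFaaS basic-auth secret:
# Fetch the auto-generated gateway password
PASSWORD=$(kubectl get secret -n openfaas basic-auth -o jsonpath="{.data.basic-auth-password}" | base64 --decode)
# Install faas-cli and log in to the local gateway
curl -sLS https://cli.openfaas.com | sudo sh
echo -n $PASSWORD | faas-cli login --username admin --password-stdin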
Optimizing for Cold Starts: The NVMe Factor
The biggest pain point in serverless is the "cold start": the time it takes to spin up a container when a request hits an idle function. Hyperscalers mask this with complex caching, but on your own infrastructure, raw disk I/O is your savior.
On a spinning HDD or cheap SSD, pulling a 500MB Python image takes seconds. On CoolVDS NVMe arrays, we regularly clock read speeds in excess of 2000 MB/s. This reduces cold starts to near-instant execution.
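Don't take that number on faith; benchmark your own node. A minimal sequential-read test with fio (the job name, file size, and block size are arbitrary):
# 1 GiB sequential read with direct I/O, bypassing the page cache
fio --name=seqread --rw=read --bs=1M --size=1G \
  --direct=1 --ioengine=libaio --numjobs=1 --group_reporting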
Pro Tip: Tune your containerd configuration to utilize snapshotters. If you are using massive images, enable stargz-snapshotter (lazy pulling). However, on CoolVDS NVMe, standard overlayfs is usually fast enough to beat AWS Lambda cold starts without the complexity.
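If you do want lazy pulling, recent K3s releases bundle a stargz snapshotter behind an experimental flag. A sketch, assuming your K3s version ships it:
# Reinstall K3s with lazy image pulling enabled (experimental)
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable=traefik --snapshotter=stargz" sh -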
Pattern: The "Async Worker" Queue
Don't process heavy data in the HTTP request loop. Even with your own FaaS, timeouts exist. Use NATS (bundled with OpenFaaS) for asynchronous processing. Here is how you define a function in stack.yml that handles heavy processing, suitable for a VPS with dedicated cores:
version: 1.0
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080
functions:
  img-resize:
    lang: python3-http
    handler: ./img-resize
    image: registry.coolvds.com/myorg/img-resize:latest
    labels:
      com.openfaas.scale.min: "0"
      com.openfaas.scale.max: "10"
    annotations:
      topic: "image-upload"
    limits:
      memory: 512Mi
      cpu: 500m
Notice the limits. On a cloud provider, requesting 500m (half a core) costs money. On your CoolVDS node, you already paid for the core. Use it.
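To actually queue work instead of blocking the caller, hit the gateway's async route; OpenFaaS returns 202 Accepted immediately and the queue-worker executes the job in the background. The payload file and callback URL below are illustrative:
# Enqueue the function asynchronously via NATS
curl -i http://127.0.0.1:8080/async-function/img-resize \
  --data-binary @photo.jpg \
  -H "X-Callback-Url: http://127.0.0.1:8888/result"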
Handling Persistence and State
Pure serverless functions should be stateless. But real apps aren't. You need a database. Connecting a Lambda function to an RDS instance often involves complex VPC peering and NAT Gateways. In our self-hosted architecture, it's a local network call.
Install Redis for fast state caching on the same node (or a private LAN node) to keep latency under 1ms:
# Add to /etc/sysctl.conf: tuning for high-throughput Redis on Linux
vm.overcommit_memory = 1
net.core.somaxconn = 65535
# Apply without a reboot
sudo sysctl -p
This sysctl tuning is often locked down on managed PaaS solutions. On a VPS, you are the root user. You define the network stack.
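Once Redis is running, verify the sub-millisecond claim from the node itself; redis-cli has a built-in latency probe:
# Continuous round-trip latency measurement against the local Redis
redis-cli -h 127.0.0.1 --latency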
The Verdict: Renting vs. Owning
There is a time for managed serverless. If you run a function once a month, use Lambda. But if you are processing thousands of events: webhooks, image transformations, or IoT data ingestion, the economics flip immediately.
| Feature | Managed Cloud FaaS | Self-Hosted (CoolVDS) |
|---|---|---|
| Cost Predictability | Low (Pay per req) | High (Flat monthly) |
| Execution Time Limit | 15 min (usually) | Unlimited |
| Data Sovereignty | Complex (US Cloud Act) | Guaranteed (Norway) |
| Cold Start Latency | Variable | Deterministic (Hardware dependent) |
Building a self-hosted serverless platform requires initial effort. You have to configure the K3s cluster and secure the gateway. But once it's running, it is a tank. It handles traffic spikes without bankruptcy risk, and it keeps your customer data firmly within Norwegian jurisdiction.
Don't let latency or legal gray areas dictate your architecture. Take control of your stack.
Ready to build your sovereign FaaS cluster? Deploy a high-performance NVMe instance on CoolVDS today and get root access in under 55 seconds.