The "Serverless" Lie: Why Infrastructure Still Matters
Let’s clear the air. "Serverless" is a misnomer that marketing departments love and operations teams tolerate. There are always servers. The only question is: are you renting them by the millisecond at a 400% markup, or are you controlling the metal they run on? For many CTOs in Oslo and across Scandinavia, the initial allure of AWS Lambda or Azure Functions fades quickly when the bill arrives—or when Datatilsynet (The Norwegian Data Protection Authority) starts asking exactly where that data is being processed.
As of October 2023, the industry has matured. We aren't just dumping scripts into a black box anymore; we are architecting event-driven systems. The most pragmatic pattern emerging isn't total reliance on public cloud FaaS (Function-as-a-Service), but running portable serverless frameworks like OpenFaaS or Knative on top of solid, predictable Infrastructure-as-a-Service (IaaS).
This approach gives you the developer velocity of serverless (git push -> deploy) with the cost predictability and data residency of a VPS located right here in Norway. In this deep dive, we will construct a robust serverless platform using K3s and OpenFaaS, optimized for the high-performance NVMe architecture found in modern VPS environments like CoolVDS.
The Architecture: K3s + OpenFaaS on Bare Metal Speed
Why this stack? Because full Kubernetes is overkill for a small-to-medium cluster, and Docker Swarm is fading into obscurity. K3s is a certified Kubernetes distribution built for IoT and edge computing, yet it is surprisingly potent for production VPS workloads. It strips away the bloat, shipping as a single binary under 100MB.
On top of this, we layer OpenFaaS. It lets you package any process as a Docker container and serve it as a function. The critical component here is the hardware. Serverless patterns live or die by cold starts—the time it takes to spin up a container from zero—and that is an I/O-intensive operation. If your VPS provider is choking your I/O with cheap SATA SSDs or noisy neighbors, your functions will lag.
Pro Tip: Network latency is the silent killer of serverless. If your users are in Oslo and your functions are in Frankfurt, you are adding 20-30ms of round-trip time before the code even executes. Hosting on a local CoolVDS instance in Norway keeps that latency negligible, often under 5ms via NIX (Norwegian Internet Exchange) peering.
Phase 1: The Foundation
First, we prepare the OS. We assume a standard Debian 11 or 12 environment. Before installing orchestration layers, we must tune the kernel for high-throughput networking. "Serverless" implies many short-lived connections, which can exhaust the ephemeral port range.
# /etc/sysctl.conf
# Increase the range of ephemeral ports
net.ipv4.ip_local_port_range = 1024 65535
# Allow reuse of TIME_WAIT sockets for new outbound connections (use with caution, but necessary for high FaaS throughput)
net.ipv4.tcp_tw_reuse = 1
# Maximize the backlog for incoming connections
net.core.somaxconn = 65535
net.ipv4.tcp_max_syn_backlog = 65535
# Increase file descriptors
fs.file-max = 2097152
Apply and verify the changes as shown below. If you skip this step, your Gateway will start dropping connections during burst events, regardless of how much CPU you throw at it.
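A quick way to load the new values and confirm they took effect:
# Load the new values from /etc/sysctl.conf
sudo sysctl -p
# Spot-check a couple of the settings
sysctl net.ipv4.ip_local_port_range net.core.somaxconn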
Phase 2: Deploying the Orchestrator
We deploy K3s. We disable the default Traefik ingress because we want granular control over our ingress controller later.
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable traefik" sh -
Once the node is ready, verify access. This usually takes less than 30 seconds on CoolVDS NVMe instances due to the rapid disk write speeds during installation.
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
kubectl get nodes
# NAME STATUS ROLES AGE VERSION
# coolvds-node Ready control-plane,master 22s v1.27.4+k3s1
Phase 3: The Serverless Framework
Now we install OpenFaaS using arkade, a CLI from the OpenFaaS community that simplifies Helm chart management.
# Install arkade
curl -sLS https://get.arkade.dev | sudo sh
# Deploy OpenFaaS
arkade install openfaas
This installs the core components: the Gateway (router), the Provider (interfaces with K8s), and Prometheus (auto-scaling metrics). The beauty of running this on a VPS is that the data never leaves your controlled environment. For GDPR compliance, this is a massive win. You aren't shipping customer PII to a US-owned cloud region.
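Once the install completes, fetch the generated admin credentials and log in with faas-cli—these are the standard post-install steps that arkade also prints at the end of its output (grab the CLI itself with arkade get faas-cli):
# Wait for the gateway to become ready
kubectl rollout status -n openfaas deploy/gateway
# Expose the gateway locally and authenticate
kubectl port-forward -n openfaas svc/gateway 8080:8080 &
PASSWORD=$(kubectl get secret -n openfaas basic-auth -o jsonpath="{.data.basic-auth-password}" | base64 --decode)
echo -n $PASSWORD | faas-cli login --username admin --password-stdin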
Optimizing the Gateway for Production
The default settings are safe, not fast. For a high-performance production environment, you need to adjust the Gateway's timeouts. If you have a function that processes heavy image data or generates PDFs, the default timeouts will kill the connection prematurely.
Here is an example of an `openfaas` values override file for Helm:
# values-prod.yaml
gateway:
  replicas: 2
  readTimeout: "60s"
  writeTimeout: "60s"
  upstreamTimeout: "55s"

# High availability for the queue worker
queueWorker:
  replicas: 2
  ackWait: "60s"
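Rolling the override into the running install is a standard Helm upgrade (a sketch, assuming the default openfaas namespace that arkade uses):
# Add the official chart repo and apply the production overrides
helm repo add openfaas https://openfaas.github.io/faas-netes/
helm repo update
helm upgrade openfaas openfaas/openfaas \
  --namespace openfaas \
  --reuse-values \
  -f values-prod.yaml
Note that individual functions enforce their own watchdog timeouts as well (the read_timeout, write_timeout, and exec_timeout environment variables), so long-running functions need those raised in stack.yml too.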
The Storage Bottleneck: Why NVMe Matters
In a serverless environment, containers are created and destroyed constantly. Each time a function scales from 0 to 1, the container image must be pulled (if not cached) and extracted to the overlay filesystem. This is a disk I/O operation.
On standard spinning rust (HDD) or even the budget SATA SSDs offered by cheap VPS providers, this creates a bottleneck. You will see a "Function Call" time of 200ms, but a "Total Duration" of 2 seconds. That 1.8-second gap is your disk struggling to extract the image layers onto the overlay filesystem.
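You can measure that gap yourself: scale a deployed function to zero and time the first call against the second (a rough benchmark, assuming scale-from-zero is enabled on the gateway, which is the chart default; image-resize is the function we deploy later in this article):
# Force a cold start by removing all replicas
kubectl scale deployment -n openfaas-fn image-resize --replicas=0
# First call pays the cold-start penalty; the second hits a warm container
time curl -s -o /dev/null http://127.0.0.1:8080/function/image-resize
time curl -s -o /dev/null http://127.0.0.1:8080/function/image-resize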
| Metric | Public Cloud FaaS (Standard) | CoolVDS (NVMe VPS) |
|---|---|---|
| Cold Start Latency | Variable (200ms - 2s) | Consistent (Low I/O Wait) |
| Execution Time Limit | Usually 15 mins max | Unlimited (It's your server) |
| Data Sovereignty | Cloud Provider Region | Strictly Norway (Oslo) |
| Cost Model | Per Request (Unpredictable) | Flat Rate (Predictable TCO) |
Real-World Use Case: Image Processing Pipeline
Let's say you are running an e-commerce site serving the Nordic market. You need to resize uploaded product images. Instead of blocking your main PHP/Node.js web thread, you offload this to a function.
The Python Handler (handler.py):
from PIL import Image
import io

def handle(event, context):
    # The python3-http template passes the raw request body as bytes on event.body
    try:
        img = Image.open(io.BytesIO(event.body))
        # Flatten to RGB so JPEG encoding can't choke on RGBA or palette uploads
        img = img.convert("RGB")
        # Resize in place, preserving aspect ratio
        img.thumbnail((800, 800))
        out = io.BytesIO()
        img.save(out, format="JPEG", quality=85)
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "image/jpeg"},
            "body": out.getvalue(),
        }
    except Exception as e:
        return {"statusCode": 400, "body": str(e)}
The Configuration (stack.yml):
version: 1.0
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080
functions:
  image-resize:
    lang: python3-http
    handler: ./image-resize
    image: registry.coolvds-user.com/image-resize:latest
    environment:
      write_debug: true
    # Resource limits stop one hot function from starving the node
    limits:
      memory: 256Mi
      cpu: 200m
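Build, push, and deploy in one command, then exercise the function with a test upload (file names here are illustrative; point the image field at a registry you control):
# Build the container, push it, and deploy it to the cluster
faas-cli up -f stack.yml
# Send a sample image and save the resized result
curl --data-binary @product-photo.jpg \
  http://127.0.0.1:8080/function/image-resize \
  -o product-photo-800.jpg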
When you deploy this on CoolVDS, you benefit from KVM virtualization. Unlike the container-based virtualization (LXC) some hosts use, KVM gives each guest its own kernel and hardware-enforced isolation of memory and CPU. Your neighbor's crypto-mining script won't steal cycles from your image-resizing function.
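You can verify what you are actually running on from inside the guest:
# Reports "kvm" on a KVM guest, "lxc" or "openvz" on container-based hosts
systemd-detect-virt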
Security & Compliance: The Schrems II Factor
For European companies, sending data to US-controlled cloud providers is legally complex post-Schrems II. By running your own FaaS platform on a Norwegian provider like CoolVDS, you simplify compliance. You know exactly where the physical drive is located. You can encrypt the partition with LUKS. You are in control.
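A minimal sketch of that LUKS setup, assuming an empty secondary volume at /dev/vdb (luksFormat destroys any existing data on the device):
# Encrypt the volume, open it, and put a filesystem on the mapped device
cryptsetup luksFormat /dev/vdb
cryptsetup open /dev/vdb pii-store
mkfs.ext4 /dev/mapper/pii-store
mkdir -p /mnt/pii-store
mount /dev/mapper/pii-store /mnt/pii-store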
Furthermore, standard DDoS protection is often insufficient for API gateways. You need to configure rate limiting at the application level, but having a host that filters volumetric attacks upstream is non-negotiable. Ensure your VPS provider includes standard L3/L4 protection to keep your gateway accessible.
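As one sketch of application-level rate limiting: if you fronted the gateway with ingress-nginx after disabling Traefik, a single annotation caps each client IP (the ingress name below is hypothetical):
# Limit each client IP to 10 requests per second at the ingress layer
kubectl annotate ingress openfaas-gateway -n openfaas \
  nginx.ingress.kubernetes.io/limit-rps="10"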
Conclusion
Serverless is a powerful architectural pattern, but it shouldn't hold your wallet hostage. By leveraging K3s and OpenFaaS on high-performance infrastructure, you reclaim control over costs, latency, and data privacy. You get the developer experience of the cloud with the raw horsepower of bare metal.
Architecture is about making the right trade-offs. If you value low latency to Nordic customers and predictable billing, it's time to own your platform.
Ready to build your private serverless cloud? Deploy a high-frequency NVMe instance on CoolVDS today and see what sub-millisecond I/O does for your cold starts.