Serverless is a Lie (And Other Hard Truths)
Let’s clear the air immediately: Serverless does not mean the absence of servers. It means you have outsourced the headache of managing them to someone else, usually at a premium and with a significant loss of control. I’ve spent the last decade debugging distributed systems, and nothing raises my blood pressure quite like a stack trace disappearing into a vendor’s opaque control plane.
For developers in Norway and the broader EU, the "Serverless Dream" hits two massive brick walls in 2022: Cold Start Latency and Data Sovereignty (Schrems II). If you are building a banking API in Oslo or a healthcare app in Bergen, routing traffic through a US-owned hyperscaler's function-as-a-service (FaaS) layer is a compliance minefield.
This article explores pragmatic architecture patterns that leverage the developer experience of Serverless without sacrificing the performance and legal safety of dedicated resources. We will look at running OpenFaaS on top of K3s, hosted on high-performance CoolVDS instances.
The Pattern: The "Iron-Backed" FaaS
Public cloud FaaS (Function as a Service) is excellent for sporadic, bursty workloads. But for predictable, high-throughput systems, the billing curve is punitive. Furthermore, the "noisy neighbor" effect on public multi-tenant FaaS can lead to unpredictable execution times.
The alternative? Self-hosted Serverless.
By deploying a lightweight Kubernetes distribution (like K3s) on a dedicated KVM slice, you gain the ability to deploy functions with simple CLI commands while retaining root access to the underlying OS. This allows you to tune the kernel, manage firewall rules via nftables, and ensure your data never leaves Norwegian jurisdiction.
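Since you keep root, the firewall point is worth making concrete. Here is a minimal nftables sketch, assuming SSH on port 22 and a web-facing ingress on 80/443 (adjust the ports to your own topology); everything else inbound is dropped:
# Default-drop inbound policy: allow loopback, established flows, SSH, and web ports
nft add table inet filter
nft add chain inet filter input '{ type filter hook input priority 0 ; policy drop ; }'
nft add rule inet filter input iif lo accept
nft add rule inet filter input ct state established,related accept
nft add rule inet filter input tcp dport '{ 22, 80, 443 }' accept
Dump the live ruleset to /etc/nftables.conf and enable the nftables service so the policy survives a reboot.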
Step 1: The Foundation
To pull this off, you need high IOPS. Serverless workloads are I/O intensive: spinning up containers, mounting volumes, and writing logs all happen in milliseconds. Standard SATA SSDs will choke here. This is why we default to NVMe storage at CoolVDS. NVMe's far deeper command queues are what absorb the concurrent container churn of a FaaS platform.
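You can verify that claim yourself before installing anything. This is a quick benchmark sketch using fio (assumed installed via your package manager), pushing 4k random reads at queue depth 64 to approximate concurrent container startup I/O:
# Debian/Ubuntu; adjust the package manager for your distro
apt install -y fio
# 4k random reads, queue depth 64, 4 jobs, 30 seconds, direct I/O
fio --name=faas-io-test --filename=/tmp/fio-test --size=1G \
    --rw=randread --bs=4k --direct=1 --ioengine=libaio \
    --iodepth=64 --numjobs=4 --runtime=30 --time_based --group_reporting
rm -f /tmp/fio-test
On NVMe you should see IOPS an order of magnitude or more above what a SATA SSD reports under the same load.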
Here is a battle-tested initialization for a robust node capable of handling K3s. We disable swap to keep the kubelet happy and tune the kernel for high network throughput.
# Disable swap (Critical for K8s stability)
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab
# Optimize sysctl for high connection counts
# (load br_netfilter first, or the bridge-nf-call setting will not apply)
modprobe br_netfilter
cat <<EOF >> /etc/sysctl.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.tcp_max_syn_backlog = 4096
fs.inotify.max_user_instances = 8192
fs.file-max = 100000
EOF
sysctl -p
Step 2: Lightweight Orchestration with K3s
We don't need the bloat of a full upstream Kubernetes install (via kubeadm) for a single-node or small-cluster FaaS setup. K3s is a single binary under 100MB that strips out the in-tree cloud providers and storage drivers we don't need.
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable traefik" sh -
Pro Tip: We disable the default Traefik ingress controller here because we want to use OpenFaaS’s gateway or a custom Nginx ingress for finer control over SSL termination and rate limiting, specifically tailored to handle DDoS attempts typical in the Nordic region.
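Give the installer a minute, then confirm the node is healthy. K3s writes its kubeconfig to /etc/rancher/k3s/k3s.yaml and bundles its own kubectl:
# Point kubectl at the kubeconfig K3s generated
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
kubectl get nodes -o wide
# Traefik should be absent from the system pods, since we disabled it
kubectl get pods -A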
Implementing OpenFaaS on CoolVDS
OpenFaaS sits on top of Kubernetes. It gives you that sweet "deploy a function" experience, but it runs on hardware you control.
First, install `arkade`, the marketplace installer for Kubernetes apps, which simplifies the deployment significantly.
curl -sLS https://get.arkade.dev | sh
# Install OpenFaaS
arkade install openfaas
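arkade prints similar instructions after the install finishes, but for reference, here is the standard sequence to fetch faas-cli, expose the gateway locally, and authenticate (the namespace and secret names below are the OpenFaaS defaults):
# Grab the CLI and wait for the gateway to come up
arkade get faas-cli
kubectl rollout status -n openfaas deploy/gateway
kubectl port-forward -n openfaas svc/gateway 8080:8080 &

# Retrieve the generated admin password and log in
PASSWORD=$(kubectl get secret -n openfaas basic-auth \
  -o jsonpath="{.data.basic-auth-password}" | base64 --decode)
echo -n "$PASSWORD" | faas-cli login --username admin --password-stdin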
Once running, you effectively have your own AWS Lambda, but running on a CoolVDS instance in Oslo. The latency from a user in Norway to your function is now determined by local peering (NIX), not a roundtrip to a data center in Frankfurt or Ireland.
The "Resize-On-The-Fly" Pattern for Images
A classic serverless use case is image processing. Let's look at a stack.yml configuration for a Python function that resizes images. Note the resource limits—on a dedicated VDS, you can be more generous than public cloud tiers allow.
version: 1.0
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080
functions:
  img-resize:
    lang: python3-http
    handler: ./img-resize
    image: registry.coolvds.com/myuser/img-resize:latest
    environment:
      write_debug: true
      read_timeout: 10s
      write_timeout: 10s
    limits:
      memory: 256Mi
      cpu: 200m
    annotations:
      com.openfaas.health.http.path: "/_/health"
      com.openfaas.health.http.initialDelay: "2s"
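From there, the workflow is three commands. Note that faas-cli new names the generated YAML after the function (img-resize.yml); rename it to stack.yml or pass it explicitly with -f:
# Scaffold the function handler and its YAML definition
faas-cli new img-resize --lang python3-http

# Build the image, push it to your registry, and deploy to the cluster
faas-cli up -f stack.yml

# Invoke it: post an image, receive the resized version back
curl --data-binary @input.jpg http://127.0.0.1:8080/function/img-resize -o thumb.jpg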
Comparison: Public Cloud vs. CoolVDS Hybrid
Why go through the trouble of setting up K3s? It comes down to total cost of ownership (TCO) and performance consistency.
| Feature | Public Cloud FaaS | OpenFaaS on CoolVDS |
|---|---|---|
| Cold Start | Unpredictable (100ms - 2s) | Tunable (keep-warm is free; see below) |
| Execution Time Limit | Often 15 mins max | Unlimited |
| Data Location | Opaque (GDPR Risk) | Strictly Norway (Safe) |
| Cost | Per invocation (Expensive at scale) | Flat monthly rate |
| Storage I/O | Network throttled | Local NVMe |
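"Keep-warm is free" deserves a concrete example. One option, a sketch assuming the faas-netes scaling labels are available in your OpenFaaS version, is to pin a minimum replica count so the first request never pays a cold start:
# Pin one always-on replica for the function via the scale.min label
faas-cli deploy -f stack.yml --label com.openfaas.scale.min=1
On a flat-rate VDS that idle replica costs you nothing extra, which is exactly the point of the table row above.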
Handling State: The Achilles Heel
Pure serverless functions are stateless. But real applications have state. In a public cloud, you are forced to use their managed databases (DynamoDB, Firestore), locking you in further.
With the CoolVDS hybrid approach, you can run a Redis or PostgreSQL instance on the same VDS (or a private networked sibling) for microsecond-latency state access. No API gateway hops. No egress fees.
Here is a snippet to optimize Redis for this "sidecar" pattern within your infrastructure:
# redis.conf optimizations for local FaaS usage
maxmemory 512mb
maxmemory-policy allkeys-lru
# Disable persistence entirely (RDB snapshots and AOF) if used only as a cache, to save I/O
save ""
appendonly no
# Bind only to local interface or VPC IP
bind 127.0.0.1 10.0.0.5
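With Redis bound locally, round-trip latency should sit well under a millisecond. A quick sanity check with the stock redis-cli tooling:
# Confirm the instance answers, then sample round-trip latency
redis-cli -h 127.0.0.1 ping
redis-cli -h 127.0.0.1 --latency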
Compliance is Not Optional
Since the 2020 Schrems II ruling, relying on US-based cloud providers for processing EU citizen data has become legally hazardous. The Norwegian Data Protection Authority (Datatilsynet) has been increasingly vocal about data transfers.
When you deploy OpenFaaS on CoolVDS, you are the data processor. The hardware sits in Oslo. You control the encryption keys. There is no murky "cloud shared responsibility model" regarding where the bits physically reside. For a CTO, this isn't just a technical detail; it's an insurance policy.
Conclusion
Serverless architecture is a powerful pattern, not a product you must buy from a hyperscaler. By decoupling the pattern from the vendor, you gain speed, lower costs, and better compliance.
Don't let latency or legal fears paralyze your next deployment. Spin up a CoolVDS NVMe instance today, install K3s, and build a serverless platform that you actually own.