Serverless is a Pattern, Not a Credit Card Swipe
There is a dangerous misconception circulating in DevOps channels from Oslo to Berlin: that "Serverless" equals AWS Lambda or Azure Functions. It doesn't. Serverless is an architectural pattern in which infrastructure management is abstracted away from the code that runs on it. When you equate the pattern with a specific vendor's product, you accept two massive liabilities: unpredictable billing spikes and the legal quagmire of data sovereignty under Schrems II.
For Norwegian businesses operating under the watchful eye of Datatilsynet, shipping customer data to a US-controlled hyperscaler involves complex Transfer Impact Assessments (TIAs). The pragmatic solution in 2023 is not to abandon the serverless developer experience, but to repatriate the infrastructure. We are seeing a massive shift towards Self-Hosted FaaS (Function as a Service) on top of high-performance VPS architecture.
This guide documents the architecture we used to migrate a high-throughput image processing pipeline from a public cloud to a cluster of CoolVDS NVMe instances, reducing monthly OpEx by 65% while keeping data strictly within European legal jurisdiction.
The Architecture: K3s + OpenFaaS
We don't need the bloat of full Kubernetes for a focused FaaS cluster. K3s (by Rancher) is a CNCF-certified lightweight distribution that runs exceptionally well on CoolVDS virtual dedicated servers. It strips out in-tree cloud provider integrations, legacy storage drivers, and alpha features, shipping as a single binary under 100 MB.
Why CoolVDS? Kubernetes, and specifically its datastore (etcd in HA mode; K3s defaults to SQLite on a single server), is hyper-sensitive to disk latency. If fsync calls stall, etcd leader elections time out and the API server flaps. CoolVDS provides the high-IOPS NVMe storage that K3s demands; spinning rust or network-choked storage volumes will kill your control plane.
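You can verify this before building anything. etcd's benchmarking guidance uses fio to measure fdatasync latency; here is a minimal sketch along those lines (the target directory is an assumption, so point it at the filesystem that will hold /var/lib/rancher):
sudo apt-get install -y fio
# Measure fdatasync latency on the disk that will back the K3s datastore
sudo fio --rw=write --ioengine=sync --fdatasync=1 \
  --directory=/var/lib/rancher --size=22m --bs=2300 --name=fsync-test
# Healthy target: the 99th percentile fdatasync latency should stay under 10ms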
Step 1: The Infrastructure Layer
We deploy three CoolVDS instances (Ubuntu 22.04 LTS): one control plane and two workers. That gives the worker tier redundancy; note that a single server node is still a control-plane single point of failure, so for full HA you would run three server nodes with embedded etcd. If you are serving traffic primarily to Norway, ensure your instances are in a datacenter with direct peering to NIX (Norwegian Internet Exchange) to minimize round-trip time (RTT).
Step 2: Deploying the Control Plane
On the primary node (Node A), we initialize the cluster. Note that we disable the default Traefik ingress controller because we want granular control with Nginx later.
curl -sfL https://get.k3s.io | sh -s - server \
--disable traefik \
--write-kubeconfig-mode 644 \
--node-name coolvds-master-01
Once the control plane is up, extract the token located at /var/lib/rancher/k3s/server/node-token. You will need this to join the workers.
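A quick sanity check from Node A (the paths and the bundled kubectl are K3s defaults):
# Print the join token for the workers
sudo cat /var/lib/rancher/k3s/server/node-token
# The control plane should report Ready before you join workers
sudo k3s kubectl get nodes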
Step 3: Joining Workers
On Node B and Node C, run the following, giving each worker a unique --node-name (for example, coolvds-worker-02 on Node C):
curl -sfL https://get.k3s.io | sh -s - agent \
--server https://[NODE-A-IP]:6443 \
--token [YOUR-TOKEN] \
--node-name coolvds-worker-01
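Back on Node A, confirm all three nodes registered. The worker names below assume you incremented --node-name per worker as noted above:
sudo k3s kubectl get nodes -o wide
# Expect coolvds-master-01, coolvds-worker-01 and coolvds-worker-02, all Ready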
Deploying the FaaS Engine
We use OpenFaaS. It is mature, supports any language via Docker, and has a lower overhead than Knative. In 2023, the standard way to deploy this is via arkade or Helm. We prefer Helm for reproducibility in GitOps pipelines.
Create a dedicated namespace to keep things clean:
kubectl apply -f https://raw.githubusercontent.com/openfaas/faas-netes/master/namespaces.yml
Add the OpenFaaS Helm repo and deploy:
helm repo add openfaas https://openfaas.github.io/faas-netes/
helm repo update \
&& helm upgrade openfaas --install openfaas/openfaas \
--namespace openfaas \
--set functionNamespace=openfaas-fn \
--set generateBasicAuth=true
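With generateBasicAuth=true, the chart writes a random admin password into the basic-auth secret. A sketch of authenticating faas-cli against the gateway (the port-forward is a quick local test, not a production exposure; this assumes KUBECONFIG points at /etc/rancher/k3s/k3s.yaml, readable thanks to the mode 644 set at install):
# Install faas-cli via the official installer
curl -sSL https://cli.openfaas.com | sudo sh
# Retrieve the generated admin password
PASSWORD=$(kubectl -n openfaas get secret basic-auth \
  -o jsonpath="{.data.basic-auth-password}" | base64 --decode)
# Forward the gateway locally and log in
kubectl port-forward -n openfaas svc/gateway 8080:8080 &
echo -n "$PASSWORD" | faas-cli login --username admin --password-stdin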
Pro Tip: By default, OpenFaaS might not set aggressive resource limits. On a VPS environment, you must prevent a rogue function from eating all CPU cycles, which would starve the K3s system processes. Always define limits and requests in your stack YAML.
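A minimal stack.yml sketch showing where those keys live (the function name, image registry, and the figures themselves are placeholders to tune per workload):
version: 1.0
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080
functions:
  image-resizer:
    lang: python3-http
    handler: ./image-resizer
    image: registry.example.com/image-resizer:0.1.0
    requests:
      cpu: 100m       # guaranteed share used by the scheduler
      memory: 128Mi
    limits:
      cpu: 500m       # hard ceiling enforced by the kubelet
      memory: 256Mi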
The "War Story": Optimizing Cold Starts
In a recent project for a Norwegian logistics firm, we faced a 2-second cold start on their Python microservices running on a competitor's "shared" VPS. The CPU steal time was fluctuating between 10% and 30%. This is the silent killer of serverless performance.
We migrated the workload to CoolVDS. Because CoolVDS uses KVM (Kernel-based Virtual Machine) with strict resource isolation, CPU steal dropped to near zero (~0.1%). However, we still needed to tune the underlying OS for high-throughput network calls typical in a microservices environment.
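Steal time is easy to measure before and after a migration; mpstat, from the sysstat package, prints it directly:
sudo apt-get install -y sysstat
# %steal is the share of cycles the hypervisor withheld from this vCPU
mpstat 1 5
# On well-isolated KVM, the %steal column should sit near 0.0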
Here is the sysctl tuning we applied to handle thousands of concurrent function invocations:
# /etc/sysctl.d/99-k8s-networking.conf
# Increase the range of ephemeral ports for high concurrency
net.ipv4.ip_local_port_range = 1024 65535
# Reuse connections in TIME_WAIT state
net.ipv4.tcp_tw_reuse = 1
# Maximize the backlog for incoming connections
net.core.somaxconn = 65535
net.ipv4.tcp_max_syn_backlog = 4096
# Increase file descriptors (crucial for high load)
fs.file-max = 2097152
Apply these changes with sysctl --system (a plain sysctl -p only reads /etc/sysctl.conf, not the drop-in files under /etc/sysctl.d/). Without this tuning, the Nginx ingress fronting the OpenFaaS gateway will hit "Too many open files" errors during traffic spikes.
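A verification pass worth running afterwards. Note that fs.file-max only raises the system-wide ceiling; per-process descriptor limits come from the systemd unit, which the K3s installer already sets high (worth confirming on your host):
sudo sysctl --system
# Spot-check that the new values are live
sysctl net.core.somaxconn fs.file-max
# Per-process limit for the K3s service (the installer's unit file sets this)
systemctl show k3s --property=LimitNOFILE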
Comparison: Managed vs. CoolVDS Self-Hosted
| Feature | Public Cloud FaaS (AWS/Azure) | Self-Hosted on CoolVDS |
|---|---|---|
| Cost Predictability | Low (Pay per request) | High (Flat monthly rate) |
| Data Sovereignty | Complex (US Cloud Act issues) | Guaranteed (Norwegian/EU Law) |
| Execution Time Limit | Strict (usually 15 mins) | Unlimited (It's your server) |
| Cold Start Latency | Variable (Vendor dependent) | Tunable (Keep warm or use Async) |
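The "keep warm" option in the table maps to a single OpenFaaS label: com.openfaas.scale.min pins a replica floor so latency-critical functions never start cold. A sketch, with a placeholder image name and figures to tune per workload:
# Keep at least 2 replicas warm; cap autoscaling at 10
faas-cli deploy --name image-resizer \
  --image registry.example.com/image-resizer:0.1.0 \
  --label com.openfaas.scale.min=2 \
  --label com.openfaas.scale.max=10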
The Verdict
Serverless is powerful. But in 2023, handing over your entire execution layer to a black-box provider is a strategic risk. By layering K3s and OpenFaaS on top of robust, localized infrastructure like CoolVDS, you gain the developer velocity of serverless with the economic and legal stability of bare-metal virtualization.
Your code belongs in a repository, not locked inside a vendor's proprietary dashboard. Your data belongs under your jurisdiction. And your infrastructure should be fast enough to keep up with both.
Ready to build? Don't let IO wait times kill your function performance. Spin up a high-frequency NVMe instance on CoolVDS today and deploy your first function in under 10 minutes.