Serverless Architectures: Taking Back Control from the Cloud Giants
Let's cut through the marketing noise. "Serverless" is the most expensive way to run compute if you have a consistent load. The cloud providers sold us a dream of "No Ops," but what they actually delivered was "Someone Else's Ops" at a 300% markup, coupled with a complete loss of visibility into the underlying infrastructure.
If you are running a startup in Oslo or a dev team in Berlin, the landscape changed dramatically in July 2020. The CJEU's Schrems II ruling invalidated the Privacy Shield. If you are piping user data through AWS Lambda or Google Cloud Functions located in (or controlled by) US jurisdictions, you are now walking a legal tightrope without a net. The Norwegian Datatilsynet isn't known for looking the other way.
But the pattern of Serverless—event-driven, ephemeral containers, scaling to zero—is brilliant. We don't want to lose that developer experience. We just want to lose the vendor lock-in and the data risk. The solution? Run the Serverless pattern on your own metal.
The Architecture: OpenFaaS on K3s
In late 2020, the most robust path to self-hosted serverless is OpenFaaS running on top of K3s (a lightweight Kubernetes distribution). This stack gives you the API gateway, the auto-scaling, and the metrics, but you run it on a high-performance VPS where you control the disk, the network, and the jurisdiction.
Why CoolVDS?
I mention CoolVDS here not because I have to, but because serverless architectures are I/O bound during "cold starts." When a function wakes up, it has to pull a Docker image, extract layers, and start the runtime. On a standard HDD or a throttled cloud instance, this takes seconds. On CoolVDS's local NVMe storage, it takes milliseconds. If your underlying infrastructure has high I/O wait, your "instant" functions will feel sluggish.
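Don't take that on faith; measure it. A quick fio random-read test tells you whether your disk can keep up with image pulls. A minimal sketch, assuming you can install packages; the job parameters are just a reasonable starting point:
# Install fio and run a 30-second 4k random-read test on the local disk
sudo apt install -y fio
fio --name=coldstart-check --filename=/tmp/fio-test --size=256M \
    --ioengine=libaio --rw=randread --bs=4k --iodepth=16 --direct=1 \
    --runtime=30 --time_based --group_reporting
# Remove the test file when done
rm /tmp/fio-test
Local NVMe typically reports an order of magnitude more IOPS here than network-attached or HDD-backed volumes; if the number is low, your cold starts will be too.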
Step 1: The Foundation (K3s)
We need a container orchestrator that doesn't eat half our RAM before we even deploy a function. K3s is perfect for this. Assume you have provisioned a CoolVDS instance with Ubuntu 20.04 LTS.
# rigorous system update first
sudo apt update && sudo apt upgrade -y
# Install K3s (lightweight Kubernetes)
curl -sfL https://get.k3s.io | sh -
# Verify the node is ready (takes about 30 seconds)
sudo k3s kubectl get node
Unlike massive K8s distributions, K3s is a single binary. It reduces the memory footprint, leaving more resources for your actual functions.
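One quality-of-life tweak: K3s writes its kubeconfig to /etc/rancher/k3s/k3s.yaml, so you can use a standard kubectl instead of the sudo k3s kubectl wrapper. A minimal sketch, adjust paths and ownership to your user:
# Copy the K3s kubeconfig so plain kubectl works without sudo
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $(id -u):$(id -g) ~/.kube/config
# Confirm how much RAM is left for your functions
free -m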
Step 2: Deploying OpenFaaS
We will use arkade, a CLI tool that simplifies Kubernetes app installation and has become the standard way to deploy OpenFaaS this year.
# Install arkade
curl -sLS https://dl.get-arkade.dev | sudo sh
# Install OpenFaaS onto the K3s cluster
arkade install openfaas
# Check the deployment status
sudo k3s kubectl -n openfaas get deployments -l "release=openfaas, app=openfaas"
Pro Tip: By default, OpenFaaS creates a LoadBalancer service. On a VPS without a cloud load balancer, use the node port or install an Ingress Controller (like Nginx) to route traffic on port 80/443. For production, always terminate TLS using cert-manager.
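For a quick test before any Ingress is in place, a port-forward to the gateway works fine (the commands below assume the default namespace layout; the chart also exposes a NodePort, 31112 by default):
# Wait for the gateway to become ready, then forward it to localhost
sudo k3s kubectl rollout status -n openfaas deploy/gateway
sudo k3s kubectl port-forward -n openfaas svc/gateway 8080:8080 &
# Tell faas-cli where the gateway lives
export OPENFAAS_URL=http://127.0.0.1:8080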
Step 3: Creating a Function
The beauty of this setup is the faas-cli. It standardizes the build/push/deploy workflow.
# Install the CLI
curl -sL https://cli.openfaas.com | sudo sh
# Log in to your gateway (password is generated during install)
export PASSWORD=$(sudo k3s kubectl get secret -n openfaas basic-auth -o jsonpath="{.data.basic-auth-password}" | base64 --decode; echo)
echo -n $PASSWORD | faas-cli login --username admin --password-stdin
# Create a new python function
faas-cli new --lang python3 data-processor
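The scaffold you get back looks roughly like this (the template store may pull in a few extra files):
data-processor.yml         # stack file: image name, gateway URL, labels
data-processor/
├── handler.py             # your function code
└── requirements.txt       # pip dependencies, baked into the image
template/                  # language templates fetched by faas-cli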
Open data-processor/handler.py. It's standard Python code; no vendor SDK required.
def handle(req):
    """handle a request to the function
    Args:
        req (str): request body
    """
    return "Processed data safely in Norway: " + req
Now, build and deploy. faas-cli up chains build, push, and deploy into one command, so make sure the image field in your stack file points at a registry your cluster can pull from. This is where the NVMe storage on CoolVDS shines: building the container image involves heavy disk writes, and slow disks make this agonizing.
faas-cli up -f data-processor.yml
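Once the deploy completes, confirm the round trip through the gateway, either via the CLI or plain curl (assuming the port-forward from earlier is still running):
# Invoke through faas-cli
echo -n "hello from Oslo" | faas-cli invoke data-processor
# Or hit the gateway's function route directly
curl -d "hello from Oslo" http://127.0.0.1:8080/function/data-processor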
Performance Tuning for Low Latency
If you are migrating from AWS Lambda, you might be used to cold starts being "just how it is." On your own infrastructure, you can tune this.
1. Keep-Alive Configuration
In your function's YAML file, you can prevent the function from scaling down to zero if you have the RAM to spare (and on CoolVDS, RAM is far cheaper than on AWS).
labels:
  com.openfaas.scale.min: "1"
  com.openfaas.scale.max: "20"
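In context, those labels live under the function's entry in the stack file. A sketch of data-processor.yml (the registry prefix is a placeholder; the provider section is omitted):
functions:
  data-processor:
    lang: python3
    handler: ./data-processor
    image: registry.example.com/data-processor:latest
    labels:
      com.openfaas.scale.min: "1"
      com.openfaas.scale.max: "20"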
2. Database Connections
A common mistake in serverless is opening a new database connection for every request. Since we are using a VPS, we can run a connection pooler like PgBouncer locally or as a sidecar. This drastically reduces the overhead on your database.
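A minimal pgbouncer.ini sketch for this setup (the host, pool sizes, and auth file path are assumptions you will need to adapt):
[databases]
; Route the "appdb" alias to the local Postgres instance
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; Transaction pooling reuses server connections across invocations
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 20
Point your functions at port 6432 instead of 5432 and let the pooler absorb the churn.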
For MySQL/MariaDB (common in the hosting world), ensure your `my.cnf` handles the connection churn:
[mysqld]
# Give headroom for bursts of short-lived function connections
max_connections = 500
# Reuse threads between connections instead of spawning new ones
thread_cache_size = 50
# 0 lets InnoDB manage its own concurrency; essential for high-concurrency environments
innodb_thread_concurrency = 0
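After some real traffic, verify that the thread cache is doing its job:
# A low Threads_created count relative to Connections means the cache works
sudo mysql -e "SHOW GLOBAL STATUS LIKE 'Threads_%'; SHOW GLOBAL STATUS LIKE 'Connections';"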
The Economic & Legal Reality
Let's look at the numbers. A 4GB RAM instance on a major public cloud can cost upwards of $40/month just for the compute, plus bandwidth egress fees, plus the premium for the "Serverless" abstraction layer.
| Feature | Public Cloud FaaS | Self-Hosted (CoolVDS) |
|---|---|---|
| Data Sovereignty | Murky (US CLOUD Act applies) | Strict (Data stays in Norway/EU) |
| Cost Scaling | Linear (Pay per invocation) | Flat (Pay for capacity) |
| Cold Start Latency | Variable (Noisy neighbors) | Predictable (Dedicated resources) |
| Execution Time Limit | Often 15 minutes max | Unlimited |
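A back-of-envelope check makes the "consistent load" trap concrete: at Lambda's current pricing (roughly $0.0000166667 per GB-second plus $0.20 per million requests), one function kept warm at 1 GB around the clock consumes about 2.6 million GB-seconds a month, which lands in the $43-44 range before bandwidth or API Gateway fees. A flat-rate VPS with several times that capacity costs less.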
For Norwegian businesses dealing with sensitive data, the choice is becoming less about preference and more about compliance. Using a local VPS provider like CoolVDS ensures that when you say your data is in Oslo, it actually stays in Oslo, not replicated to a bucket in Virginia for "redundancy."
Conclusion
Serverless is a powerful architectural pattern, but it shouldn't cost you your autonomy. By combining the efficiency of K3s and OpenFaaS with the raw power and low latency of CoolVDS NVMe instances, you get the best of both worlds: the developer velocity of serverless and the control of bare metal.
Don't let your infrastructure budget bleed out on per-millisecond billing. Spin up a high-performance NVMe instance on CoolVDS today and deploy your first function in under 5 minutes.