Serverless Without the Lock-in: Architecting High-Performance FaaS on Infrastructure You Control
Let’s clear the air immediately: "Serverless" is a lie. There are always servers. The only question that matters to a CTO or a Lead Architect is: Who controls those servers, and where are they physically located?
If you have been deploying to AWS Lambda or Azure Functions lately, you know the drill. It starts as a developer's dream: git push and done. But then reality hits. Cold starts add 300ms or more to your latency. The monthly bill becomes a complex puzzle of invocation counts and GB-seconds. And worst of all, your data is bouncing around regions that might not sit well with the Datatilsynet (the Norwegian Data Protection Authority) or a strict GDPR interpretation.
In 2019, the smart play isn't necessarily abandoning the serverless pattern; it's abandoning the proprietary platforms that hold it hostage. We are seeing a massive shift towards FaaS (Functions as a Service) on top of Kubernetes, running on high-performance, predictable infrastructure.
The Architecture: Why Roll Your Own FaaS?
The "Battle-Hardened" approach to serverless involves decoupling the developer experience from the execution environment. You want your devs to write functions, but you want your ops team to manage the iron.
Why? IOPS and Latency.
When you run a function in a public cloud, you are at the mercy of their scheduler. You might get a noisy neighbor. You might get legacy hardware. When you deploy a FaaS framework like OpenFaaS on a KVM-based VDS (Virtual Dedicated Server), you control the resource allocation. Specifically, if you choose a provider like CoolVDS, you are running on local NVMe storage. In a function-heavy architecture where containers are spun up and down in seconds, disk I/O is often the bottleneck that nobody talks about.
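Don't take the storage claim on faith: benchmark it before you build on it. A quick fio run (a sketch, assuming fio is installed on the node) tells you whether you are really sitting on local NVMe:

# Random 4k writes with direct I/O, the pattern container churn produces
fio --name=faas-io-test --ioengine=libaio --rw=randwrite --bs=4k \
    --size=1G --numjobs=4 --iodepth=32 --runtime=30 --direct=1 --group_reporting

Local NVMe should report tens of thousands of 4k random-write IOPS; oversubscribed network storage is often an order of magnitude lower.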
The Stack
Here is the reference architecture we are seeing deployed successfully in Oslo-based tech clusters right now:
- Infrastructure: KVM-based VDS (CoolVDS NVMe Instances)
- Orchestration: Kubernetes (v1.13)
- FaaS Framework: OpenFaaS
- Ingress: Nginx or Traefik
Implementation: Deploying OpenFaaS on Kubernetes
Assuming you have a Kubernetes cluster running on your VDS nodes (we recommend kubeadm for a clean, 2019-standard install), deploying the serverless framework is straightforward. We will use Helm, as it simplifies the management of the OpenFaaS components.
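For reference, a minimal single-master bootstrap with kubeadm looks roughly like this. This is a sketch that assumes Flannel as the pod network; swap in your preferred CNI and adjust the CIDR to match your provider's networking.

# On the master node: initialize the control plane
kubeadm init --pod-network-cidr=10.244.0.0/16

# Configure kubectl for your user (kubeadm prints these steps on success)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Install the Flannel CNI plugin
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml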
First, create the namespaces specifically for OpenFaaS to keep things clean:
kubectl apply -f https://raw.githubusercontent.com/openfaas/faas-netes/master/namespaces.yml
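This creates two namespaces: openfaas for the core components and openfaas-fn for the functions themselves. A quick check confirms both exist:

kubectl get namespaces | grep openfaas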
Next, add the OpenFaaS Helm chart repository. Installing through Helm is critical because it lets us tune the configuration. We don't want defaults; we want performance.
helm repo add openfaas https://openfaas.github.io/faas-netes/
helm repo update
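One 2019 caveat: stock Helm 2 needs its server-side component, Tiller, running in the cluster before any chart will install. A minimal (and deliberately permissive) setup looks like the sketch below; tighten the RBAC for production:

# Create a service account for Tiller and grant it cluster-admin
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller \
    --clusterrole=cluster-admin --serviceaccount=kube-system:tiller

# Install Tiller into the cluster using that service account
helm init --service-account tiller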
Now, let's deploy. Here is where the CoolVDS advantage comes in. Because we have high-throughput NVMe, we can be aggressive with our queue worker settings. We want asynchronous functions to process instantly.
helm upgrade openfaas --install openfaas/openfaas \
--namespace openfaas \
--set functionNamespace=openfaas-fn \
--set async=true \
--set queueWorker.replicas=3
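Once the chart settles, verify the core components and expose the gateway locally. This is a sketch for testing; in production you would route through your Nginx or Traefik ingress rather than a port-forward:

# Watch the OpenFaaS deployments come up
kubectl -n openfaas get deployments

# Temporarily expose the gateway on localhost:8080
kubectl -n openfaas port-forward svc/gateway 8080:8080

If you enabled basic auth in the chart, the generated password lives in the basic-auth secret in the openfaas namespace.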
Defining a Function
The beauty of this setup is the stack.yml. It’s vendor-agnostic. You define your function, its language, and its scaling parameters. Here is an example of a Python function designed to process image metadata—a task that is I/O heavy and benefits from local NVMe speeds:
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080

functions:
  image-resizer:
    lang: python3
    handler: ./image-resizer
    image: registry.coolvds-client.no/image-resizer:0.1
    environment:
      write_debug: true
      read_timeout: 10
      write_timeout: 10
    limits:
      memory: 128Mi
    labels:
      com.openfaas.scale.min: 2
      com.openfaas.scale.max: 15
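With the gateway reachable, faas-cli builds, pushes, and deploys straight from that file. The sketch below assumes you are already logged in to the registry named in the image field; the JSON payload is purely illustrative:

# Build the function image, push it, and deploy it via the gateway
faas-cli build -f stack.yml
faas-cli push -f stack.yml
faas-cli deploy -f stack.yml --gateway http://127.0.0.1:8080

# Invoke it once to confirm the round trip
echo '{"object": "test.jpg"}' | faas-cli invoke image-resizer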
Notice the com.openfaas.scale.min: 2 label in the stack file above. This is how we defeat the cold start problem: the function is never scaled below two warm replicas, so there is always a container ready to serve a request instead of one being pulled and booted on demand.
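To verify the effect, time the round trip with curl before and after setting the minimum scale. This is a rough measurement and includes your function's own processing time; run it several times and compare the first (cold) call against the rest:

# Total request time in seconds (payload illustrative, as above)
curl -o /dev/null -s -w '%{time_total}\n' \
    http://127.0.0.1:8080/function/image-resizer -d '{"object": "test.jpg"}'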