Serverless Architecture Patterns: Building a Private FaaS in Norway (Post-Schrems II)
Let’s get one thing straight: "Serverless" does not mean there are no servers. It means you are paying a premium for someone else to manage them, often while locking your logic into a proprietary ecosystem that treats your budget like an all-you-can-eat buffet. For many Norwegian CTOs and Systems Architects, the allure of AWS Lambda or Google Cloud Functions fades rapidly when faced with two realities: the unpredictability of monthly bills at scale, and the legal minefield created by the Schrems II ruling.
If you are processing PII (Personally Identifiable Information) regarding Norwegian citizens, relying on US-owned hyperscalers involves a complex risk assessment that many legal teams are no longer willing to sign off on. The solution isn't to abandon the event-driven, scalable patterns of serverless computing. The solution is to own the platform.
In this analysis, we will deconstruct how to deploy robust Serverless architecture patterns on your own infrastructure within Norway, utilizing CoolVDS high-performance NVMe instances as the foundation. We aren't just saving money; we are reclaiming sovereignty.
The Architecture: Private FaaS on K3s
To replicate the "scale-to-zero" and event-driven capabilities of Lambda without the vendor lock-in, we rely on a stack that has matured significantly by early 2021: Kubernetes (specifically K3s) and OpenFaaS.
Why K3s? Because full upstream Kubernetes is often overkill for a focused FaaS (Function as a Service) cluster. K3s is a CNCF-certified, fully conformant distribution that strips out in-tree cloud provider add-ons and legacy alpha features, and ships containerd instead of Docker, all in a single binary under 100MB. That efficiency matters when we want our compute resources dedicated to function execution, not cluster management overhead.
The Infrastructure Layer: NVMe or Nothing
A common mistake when rolling out self-hosted Kubernetes is underestimating the I/O requirements of etcd, the key-value store that maintains cluster state. (K3s defaults to SQLite on a single node, but any highly-available setup runs embedded etcd.) On standard spinning HDD or even cheap SATA SSD VPS hosting, etcd latency spikes trigger failed leader elections, and your entire "serverless" platform seizes up under load.
At CoolVDS, we utilize KVM virtualization on pure NVMe storage. This isn't marketing fluff; it's a technical requirement for stability. When etcd writes to disk, it calls fsync to ensure durability. If that fsync takes more than a few milliseconds, your cluster degrades.
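Before trusting a node with etcd, verify the disk. A widely used fio check (the 2300-byte block size mimics etcd's write pattern) measures fdatasync latency on the volume that will hold your cluster data; the 99th percentile should stay below 10ms. A minimal sketch, assuming fio is installed:

# Benchmark fdatasync latency with etcd-like 2300-byte writes
mkdir -p /var/lib/etcd-disk-test
fio --rw=write --ioengine=sync --fdatasync=1 \
    --directory=/var/lib/etcd-disk-test --size=22m --bs=2300 \
    --name=etcd-check
# Inspect the fsync/fdatasync percentiles in the output, then clean up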
Here is how we initialize a robust control plane on a CoolVDS instance running Ubuntu 20.04 LTS:
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server \
    --tls-san=faas.your-domain.no \
    --node-taint CriticalAddonsOnly=true:NoExecute \
    --disable traefik" sh -
Notice we disable the default Traefik ingress controller. For a production FaaS platform, we need granular control over timeouts and buffering, which a custom Nginx deployment or a carefully tuned ingress controller provides, specifically to handle long-running functions. Note also the CriticalAddonsOnly taint: it keeps function pods off the control plane, so you need at least one agent node for the workloads themselves, as shown below.
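Joining a worker is a one-liner on each node. A minimal sketch, assuming the hostname matches the --tls-san above; the token path is standard K3s:

# On the server: print the cluster join token
sudo cat /var/lib/rancher/k3s/server/node-token

# On each worker: install K3s in agent mode and join the cluster
curl -sfL https://get.k3s.io | K3S_URL=https://faas.your-domain.no:6443 \
    K3S_TOKEN=<paste-token-here> sh -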
Pattern 1: The Asynchronous Decoupler
One of the most powerful serverless patterns is the Asynchronous Decoupler. Your frontend accepts a request (e.g., "Generate PDF Invoice"), acknowledges it immediately, and offloads the heavy lifting to a background function. In the public cloud, you might chain API Gateway to SQS to Lambda.
On our private stack, we use NATS (bundled with OpenFaaS) to handle the queue. This keeps data strictly inside your CoolVDS environment, likely sitting in a datacenter in Oslo, ensuring low latency and GDPR compliance.
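Getting OpenFaaS (and its bundled NATS queue) onto the cluster is straightforward. One common route is arkade, though the Helm chart works equally well; a sketch, assuming kubectl and faas-cli are already on your workstation:

# Install arkade, then OpenFaaS (the chart bundles NATS and the queue-worker)
curl -sLS https://get.arkade.dev | sudo sh
arkade install openfaas

# Retrieve the generated admin password and authenticate faas-cli
PASSWORD=$(kubectl get secret -n openfaas basic-auth \
    -o jsonpath="{.data.basic-auth-password}" | base64 --decode)
echo -n "$PASSWORD" | faas-cli login --username admin --password-stdin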
Here is a deployment descriptor (stack.yml) for a Python function designed to process these events:
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080

functions:
  invoice-generator:
    lang: python3
    handler: ./invoice-generator
    image: registry.your-company.no/invoice-generator:latest
    labels:
      com.openfaas.scale.min: "1"
      com.openfaas.scale.max: "20"
    annotations:
      # Critical for async processing reliability
      com.openfaas.health.http.initialDelay: "5s"
      topic: "invoice.created"
The topic annotation binds this function to the NATS message bus: when your main application publishes to invoice.created, this function triggers automatically. (Topic-based triggering relies on a connector deployment such as the OpenFaaS NATS connector; the bundled queue-worker handles the async invocation path.) The scale.max parameter ensures that a sudden influx of requests doesn't consume every CPU cycle on your node. CoolVDS KVM isolation already protects neighbors at the hypervisor level, but capping scale is good application citizenship.
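Even without a connector, any deployed function can be invoked asynchronously through the gateway's /async-function route, which enqueues the request on the bundled NATS and returns immediately:

# Deploy the function described in stack.yml
faas-cli deploy -f stack.yml

# Fire-and-forget: the gateway queues the call on NATS and answers 202 Accepted
curl -i -d '{"orderId": "2021-0042"}' \
    http://127.0.0.1:8080/async-function/invoice-generator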
Pattern 2: The Fan-Out/Aggregator
Another common requirement is querying multiple data sources and aggregating the results. In a microservices environment, doing this sequentially means latency grows linearly with the number of sources. The Serverless approach is to "Fan-Out" the requests in parallel.
However, network latency (RTT) becomes your enemy here. If your functions are hosted in Frankfurt (AWS `eu-central-1`) but your database is in a legacy datacenter in Norway, the speed of light penalizes you. By hosting the FaaS platform on CoolVDS servers in Norway, you slash RTT to local Norwegian services (like Vipps APIs or Folkeregisteret lookups) to single-digit milliseconds.
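At the shell level, the pattern is just concurrent invocation followed by a merge. A sketch with hypothetical function names, assuming each function returns a JSON object:

#!/bin/bash
ID='{"customerId": 42}'

# Fan out: fire all three lookups in parallel
curl -s -d "$ID" http://127.0.0.1:8080/function/crm-lookup   > /tmp/crm.json &
curl -s -d "$ID" http://127.0.0.1:8080/function/credit-check > /tmp/credit.json &
curl -s -d "$ID" http://127.0.0.1:8080/function/vipps-status > /tmp/vipps.json &
wait  # total latency = the slowest branch, not the sum

# Aggregate: merge the three JSON objects into a single response
jq -s 'add' /tmp/crm.json /tmp/credit.json /tmp/vipps.json

Inside a real aggregator function you would do the same with asyncio tasks or goroutines; the principle is identical.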
Implementation requires tuning the gateway timeouts to allow for the aggregation to complete. If you are using Nginx as your ingress, the default configuration will kill connections too early.
# Inside your nginx.conf or Ingress annotations
http {
    ...
    # Allow long-running aggregation functions to complete
    proxy_read_timeout 300s;
    proxy_send_timeout 300s;

    # Vital for keep-alive performance between the gateway and functions
    upstream openfaas {
        server 127.0.0.1:8080;
        keepalive 16;
    }

    server {
        ...
        location / {
            proxy_pass http://openfaas;
            # keepalive above only takes effect over HTTP/1.1
            # with the Connection header cleared
            proxy_http_version 1.1;
            proxy_set_header Connection "";
        }
    }
}
Pro Tip: Always set vm.overcommit_memory=1 in /etc/sysctl.conf on your K3s nodes if you are running heavy Redis instances for state management alongside your functions. Redis itself recommends this setting: without it, background saves (fork plus copy-on-write) can fail outright under memory pressure, and the fallout usually ends with the Linux OOM killer indiscriminately taking out pods, control plane included.
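Applying it is a one-liner, persisted across reboots and loaded immediately:

# Persist the setting and apply it without a reboot
echo "vm.overcommit_memory = 1" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p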
Security: The Norway Advantage
The "Shared Responsibility Model" of public cloud often leaves gaping holes in security configuration. When you deploy K3s on CoolVDS, you control the network policies. Since May 2018 (GDPR implementation), the ability to prove exactly where data resides is paramount.
By using a private VPS, you can implement strict iptables rules that drop all non-approved traffic at the edge, for instance an allow-list of Norwegian prefixes, something that costs a fortune in "WAF rules" on hyperscalers.
# Simple but effective allow-listing logic (conceptual)
# K3s runs on containerd, so there is no DOCKER-USER chain to hook into;
# filter on INPUT before traffic reaches the gateway port.
iptables -I INPUT -i eth0 -p tcp --dport 443 ! -s 123.45.67.0/24 -j DROP
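Extending that from one subnet to "all Norwegian prefixes" is what ipset is for. A sketch, where the set name is arbitrary and the prefix is the same placeholder as above; a real deployment would load the published Norwegian ranges from an IP registry:

# Create a set of approved networks and populate it (placeholder prefix)
sudo ipset create allow-no hash:net
sudo ipset add allow-no 123.45.67.0/24

# Drop inbound gateway traffic whose source is not in the set
sudo iptables -I INPUT -i eth0 -p tcp --dport 443 \
    -m set ! --match-set allow-no src -j DROP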
Conclusion: Performance Meets Compliance
Serverless is a developer experience, not strictly a deployment target. You can have the developer velocity of FaaS without the open-ended costs and data sovereignty risks of the US public cloud. By pairing the lightweight architecture of K3s and OpenFaaS with the raw, dedicated NVMe power of CoolVDS, you build a platform that is compliant with Norwegian regulations and incredibly fast.
Don't let latency or legal fears dictate your architecture. Take control of your stack.
Ready to build your private FaaS? Deploy a CoolVDS NVMe instance in Oslo today and experience the difference raw I/O performance makes for Kubernetes.