Serverless Without the Handcuffs: Implementing Private FaaS Patterns in a Post-Schrems II World
Let’s address the elephant in the server room. The term "Serverless" is marketing genius and engineering nonsense. There are always servers. The only question is: do you control them, or do you rent them by the millisecond at a 400% markup?
As a Systems Architect operating out of Northern Europe, 2020 has thrown us a massive curveball. I’m not talking about the pandemic (though that’s changed traffic patterns extensively). I’m talking about the CJEU's Schrems II ruling from July. The Privacy Shield is dead. If you are processing Norwegian customer data on US-owned public clouds (AWS Lambda, Google Cloud Functions), you are now walking a legal tightrope without a net.
Suddenly, the "easy" route of public cloud FaaS isn't just expensive; it's a compliance nightmare for Datatilsynet (The Norwegian Data Protection Authority). The solution isn't to abandon the architectural elegance of event-driven code. The solution is to bring it home.
In this deep dive, we are going to look at the Private FaaS pattern. We will build a platform that gives developers the "git push" experience they love, but runs on high-performance infrastructure legally domiciled in Norway. We are trading vendor lock-in for engineering freedom.
The Architecture: Why Bare Metal/VPS beats Public Cloud FaaS
Public cloud serverless functions suffer from the "Cold Start" problem. If your function hasn't run in a few minutes, the provider spins down the container. The next request waits for the microVM to boot, the runtime to initialize, and your code to load. In high-frequency trading or real-time bidding systems, that 400ms-2s delay is unacceptable.
By running a lightweight orchestration layer on a dedicated KVM slice, we keep our runtime warm. We control the eviction policies. We control the network.
Pro Tip: Network I/O is the silent killer in microservices. On public clouds, you often hit noisy neighbor issues. On CoolVDS, we utilize local NVMe storage which provides the IOPS necessary to pull container images instantly and handle high-throughput logging without the "iowait" spike that kills latency.
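Don't take that on faith; measure it on your own node. The sysstat package gives you a quick read on iowait and device latency (an optional sanity check, not part of the build):
# Install sysstat, which provides iostat (Ubuntu 20.04)
sudo apt-get update && sudo apt-get install -y sysstat
# Extended device stats, one-second intervals, five samples.
# On local NVMe, %iowait should sit near zero and the r_await/w_await
# latency columns should stay in the low single-digit milliseconds.
iostat -x 1 5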
The Stack for Late 2020
We aren't going to build this from scratch. We will use the industry standards that have matured significantly this year:
- Infrastructure: CoolVDS NVMe Instance (Ubuntu 20.04 LTS).
- Orchestrator: K3s (Lightweight Kubernetes). It strips out the bloated cloud-provider plugins we don't need.
- FaaS Framework: OpenFaaS. It’s Kubernetes-native, mature, and easy to secure.
Step 1: The Foundation
First, we need a solid base. I recommend at least 2 vCPUs and 4GB RAM for a production-grade FaaS node. Since we are using KVM, we have full kernel access, which is critical for container networking.
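SSH in and spend thirty seconds confirming both of those claims before you build anything on top of them. A minimal check, assuming a stock Ubuntu 20.04 kernel:
# Should print "kvm" -- full virtualization, not a container slice
systemd-detect-virt
# The kernel modules container networking relies on should load cleanly.
# K3s will load these itself later; this just proves the kernel ships them.
sudo modprobe overlay
sudo modprobe br_netfilter
lsmod | grep -E 'overlay|br_netfilter'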
Next, optimize the sysctl settings for high-concurrency connections. By default, Linux is tuned for long-lived connections, not the bursty nature of FaaS.
# /etc/sysctl.conf
# Increase the range of ephemeral ports
net.ipv4.ip_local_port_range = 1024 65535
# Allow reusing sockets in TIME_WAIT state for new connections
net.ipv4.tcp_tw_reuse = 1
# Increase max open files for high container density
fs.file-max = 2097152
fs.inotify.max_user_watches = 524288
Apply these changes with `sysctl -p`. If you skip this, your FaaS gateway will choke under load, not because of CPU, but because it ran out of file descriptors.
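A verification pass is worth the keystrokes. One nuance: `fs.file-max` is the system-wide ceiling, while each process is still bound by its own nofile limit; K3s and containerd raise theirs via their systemd units, so this mostly matters for anything extra you run outside systemd:
# Reload the settings and confirm they took effect
sudo sysctl -p
sysctl net.ipv4.ip_local_port_range net.ipv4.tcp_tw_reuse fs.file-max
# Per-process open-file limit for the current shell; raise it in
# /etc/security/limits.conf if you host additional services on this node
ulimit -n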
Step 2: Deploying the Orchestrator
We will use K3s. It is a fully compliant Kubernetes distribution packaged in a single binary. It is perfect for a VPS environment because it uses less than 512MB of RAM to run the control plane.
curl -sfL https://get.k3s.io | sh -
# Verify installation
sudo k3s kubectl get nodes
You should see your node status as Ready in under 30 seconds. This speed is why we prefer K3s over `kubeadm` for single-node deployments.
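One detail that trips people up: K3s writes its kubeconfig to /etc/rancher/k3s/k3s.yaml, and the standalone `kubectl` and `helm` commands used below will not find it on their own. Point them at it once:
# Either export the kubeconfig for this session (as root, or adjust permissions)...
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
# ...or copy it to the default location for your user
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $(id -u):$(id -g) ~/.kube/config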
Step 3: Installing OpenFaaS
In 2020, the easiest way to install OpenFaaS is via arkade, a tool built by the OpenFaaS community, or standard Helm 3 charts. We will use Helm 3 for better lifecycle management.
# Install Helm 3 if you haven't already
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
# Add the OpenFaaS chart repo
helm repo add openfaas https://openfaas.github.io/faas-netes/
helm repo update
# Create a namespace for OpenFaaS functions
kubectl apply -f https://raw.githubusercontent.com/openfaas/faas-netes/master/namespaces.yml
# Generate a random password
export PASSWORD=$(head -c 12 /dev/urandom | shasum | cut -d' ' -f1)
kubectl -n openfaas create secret generic basic-auth \
--from-literal=basic-auth-user=admin \
--from-literal=basic-auth-password="$PASSWORD"
# Deploy OpenFaaS
helm upgrade openfaas --install openfaas/openfaas \
--namespace openfaas \
--set basic_auth=true \
--set functionNamespace=openfaas-fn
This setup deploys the Gateway, the Queue Worker (NATS), and Prometheus for auto-scaling metrics.
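Before logging in, confirm the gateway is actually up and reachable. The chart exposes it on a NodePort (31112 by default); since the next step talks to 127.0.0.1:8080, a background port-forward keeps everything on loopback without touching the firewall:
# Wait for the gateway deployment to finish rolling out
kubectl rollout status -n openfaas deploy/gateway
# Forward the gateway service to localhost:8080 in the background
kubectl port-forward -n openfaas svc/gateway 8080:8080 &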
Step 4: The Developer Experience
Now you have a platform. How do you use it? You use the `faas-cli`. This is where the magic happens. You don't need to write Dockerfiles if you don't want to. OpenFaaS templates handle the scaffolding.
# Install CLI
curl -sL https://cli.openfaas.com | sudo sh
# Login to your new private cloud
export OPENFAAS_URL=http://127.0.0.1:8080
echo -n $PASSWORD | faas-cli login --username admin --password-stdin
# Pull the python3-http template from the store, then scaffold a new Python function
faas-cli template store pull python3-http
faas-cli new --lang python3-http order-processor
This generates a directory structure. Edit order-processor/handler.py:
def handle(event, context):
    """Handle an HTTP request to the function.

    Args:
        event: the incoming request (body, headers, method, path, query)
        context: runtime information
    """
    return {"statusCode": 200, "body": "Order processed in Norway. Compliance check: PASSED."}
Deploying it is a single command:
faas-cli up -f order-processor.yml
Under the hood, this builds the Docker image, pushes it to your registry, and instructs Kubernetes to deploy the pod. Note that K3s runs containerd rather than Docker, so the cluster cannot see your host's local image cache; you need a registry, even if it is a small one running on the same box. Because you are on CoolVDS with local NVMe, the image build and extraction are incredibly fast.
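Here is a minimal sketch of that self-hosted registry route. It assumes Docker is installed on the host for image builds, and the registry address (127.0.0.1:5000) and image tag are illustrative values, not requirements:
# Docker is needed for faas-cli build (sudo apt-get install -y docker.io)
# Run a small local registry on the host
sudo docker run -d --restart=always -p 5000:5000 --name registry registry:2
# Allow K3s's containerd to pull from it over plain HTTP
sudo tee /etc/rancher/k3s/registries.yaml > /dev/null <<'EOF'
mirrors:
  "127.0.0.1:5000":
    endpoint:
      - "http://127.0.0.1:5000"
EOF
sudo systemctl restart k3s   # re-run the earlier port-forward if it drops
# Set image: 127.0.0.1:5000/order-processor:latest in order-processor.yml,
# rebuild and redeploy, then invoke the function to verify the round trip
faas-cli up -f order-processor.yml
curl -s http://127.0.0.1:8080/function/order-processor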
Comparison: Public Cloud vs. CoolVDS Private FaaS
| Feature | Public Cloud FaaS (AWS/GCP) | Private FaaS (CoolVDS) |
|---|---|---|
| Data Sovereignty | Uncertain (US CLOUD Act / Schrems II issues) | 100% Norway/EEA Controlled |
| Billing | Per invocation (Hard to predict) | Flat Monthly Fee (Predictable) |
| Cold Starts | Variable (100ms - 2s) | Near Zero (Tunable keep-alive; sketched below) |
| Execution Time Limit | Usually 15 minutes max | Unlimited |
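That "tunable keep-alive" cell is not hand-waving. OpenFaaS lets you pin a minimum replica count per function with a label, so a latency-critical function always has at least one warm pod while Prometheus-driven auto-scaling absorbs the bursts. A minimal sketch of the stack file, reusing the hypothetical gateway and image values from earlier:
cat > order-processor.yml <<'EOF'
version: 1.0
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080
functions:
  order-processor:
    lang: python3-http
    handler: ./order-processor
    image: 127.0.0.1:5000/order-processor:latest
    labels:
      com.openfaas.scale.min: "1"   # never scale below one warm replica
      com.openfaas.scale.max: "5"   # cap auto-scaling
EOF
faas-cli up -f order-processor.yml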
The Economic Argument
The