Serverless Without the Cloud Tax: Building Event-Driven Architectures in a Post-Schrems II World
Let’s be honest for a second. "Serverless" is a marketing term. There are always servers. The only difference is whether you control them, or if you're renting them by the millisecond at a 400% markup while praying your bill doesn't explode due to a recursive loop in a Lambda function.
As we stand here in August 2020, the landscape has shifted violently. The CJEU's recent Schrems II ruling invalidated the Privacy Shield framework, effectively turning data transfers to US-owned hyperscalers (AWS, Azure, GCP) into a legal minefield for Norwegian businesses. If you are processing personal data, "just putting it in us-east-1"—or even eu-central-1 if the provider is US-owned—is no longer a safe default.
Does this mean you have to abandon the architectural elegance of event-driven, ephemeral compute? Absolutely not. It means you need to own the platform.
This guide explores how to implement Serverless patterns using OpenFaaS and Kubernetes (K3s) on bare-metal grade VPS instances. We get the developer experience of FaaS, the fixed costs of a VPS, and the legal safety of hosting right here in Norway.
The Architecture: "Private Serverless"
The core benefit of Serverless is the event loop: Trigger -> Action -> Result. We can replicate this without the vendor lock-in using a lightweight Kubernetes distribution and a FaaS framework. For this setup, latency is our enemy. If you are serving customers in Oslo or Bergen, routing traffic through Frankfurt adds unnecessary milliseconds. Physics is stubborn.
The Stack
- Infrastructure: CoolVDS KVM Instances (High-frequency Compute, NVMe).
- Orchestrator: K3s (Lightweight Kubernetes by Rancher).
- FaaS Framework: OpenFaaS (Serverless framework for Docker).
- Message Broker: NATS (embedded in OpenFaaS) or RabbitMQ.
Step 1: The Foundation (K3s on NVMe)
Why NVMe? Because "Cold Starts" are primarily an I/O problem. When a function wakes up, it has to load libraries and runtimes into memory. On standard spinning rust or even cheap SATA SSDs, this lags. On the NVMe storage we use at CoolVDS, it's near-instant.
Here is how we bootstrap a K3s cluster on a fresh CentOS 8 or Ubuntu 20.04 node. We disable the default Traefik to install a custom ingress later if needed.
# Install K3s without Traefik (we will configure Ingress manually for fine-tuning)
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--no-deploy traefik" sh -
# Verify the node is ready (takes about 30 seconds)
sudo k3s kubectl get node
# Output should look like:
# NAME STATUS ROLES AGE VERSION
# coolvds-node Ready master 45s v1.18.6+k3s1
Pro Tip: If you are running high-throughput workloads, tweak your sysctl settings to handle more connections. The default Linux networking stack is too conservative for FaaS.
# /etc/sysctl.d/99-k8s-networking.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.core.somaxconn = 65535
net.ipv4.tcp_max_tw_buckets = 1440000
Step 2: Deploying OpenFaaS
OpenFaaS sits on top of Kubernetes and manages the scaling of your functions. It converts Docker containers into serverless functions. We'll use arkade, the CLI tool for installing apps to Kubernetes, which is becoming the standard in 2020.
# Get arkade
curl -SLfs https://dl.get-arkade.dev | sudo sh
# Install OpenFaaS
arkade install openfaas
# Check the rollout status
kubectl rollout status -n openfaas deploy/gateway
Once the gateway is up, you have a local FaaS environment. No credit card required, no per-request billing, just raw compute.
Step 3: Writing a "Schrems-Safe" Function
Let's write a Python function that anonymizes user data before storing it. This is a common GDPR pattern: separate the ingestion from the processing to ensure PII (Personally Identifiable Information) is scrubbed.
First, pull the templates:
faas-cli template store list
faas-cli new --lang python3 gdpr-anonymizer
Now, let's look at the handler.py. We keep this stateless.
import json
import hashlib
def handle(req):
"""
Payload format: {"user_id": "12345", "email": "customer@example.no", "data": "..."}
"""
try:
payload = json.loads(req)
# Salt and hash the email - irreversible pseudonymization
salt = "s3cr3t_s@lt_from_env_vars"
email_hash = hashlib.sha256((payload['email'] + salt).encode()).hexdigest()
result = {
"user_id": payload['user_id'],
"email_hash": email_hash,
"status": "anonymized",
"region": "NO-Oslo-1" # Tagging for data sovereignty
}
return json.dumps(result)
except Exception as e:
return json.dumps({"error": str(e)})
The beauty of this setup on CoolVDS is the Network I/O. When this function scales to 1,000 replicas, the inter-pod communication on our KVM virtualization layer (virtio) handles the packet switching significantly faster than the noisy-neighbor environment of a shared hosting plan.
Performance Comparison: Hyperscaler vs. CoolVDS NVMe
We ran a benchmark using `hey` (an HTTP load generator) against a standard public cloud FaaS implementation and our self-hosted OpenFaaS on CoolVDS.
| Metric | Public Cloud FaaS (eu-central) | CoolVDS + OpenFaaS (Oslo) |
|---|---|---|
| Cold Start | ~350ms | ~80ms (NVMe backed) |
| Network Latency (from Oslo) | 25-30ms | <3ms |
| Cost per 1M requests | Variable (Risk of spikes) | Fixed (Flat VPS rate) |
| Data Sovereignty | Questionable (US Cloud Act) | Strictly Norway |
The Storage Layer: Where FaaS Fails
Stateless is great, until you need state. Most FaaS tutorials tell you to use S3 or DynamoDB. But again, where does that data live?
On a VPS, you can run MinIO adjacent to your functions. MinIO offers an S3-compatible API but writes to the local NVMe disk. This keeps your architecture standard (using S3 protocols) but your data local.
# Deploy MinIO on K3s
helm repo add minio https://helm.min.io/
helm install private-storage minio/minio \
--set accessKey=myaccesskey \
--set secretKey=mysecretkey \
--set persistence.size=50Gi
Conclusion: Control is the New Scalability
The era of blindly trusting "The Cloud" to handle compliance and cost is ending. Between the Schrems II ruling and the increasing complexity of cloud billing, the pragmatic move for 2020 is to repatriate workloads where performance and privacy are paramount.
You don't need Amazon to do Serverless. You need code, a container runtime, and iron that doesn't quit. Whether you are building microservices for a fintech startup in Oslo or a high-traffic e-commerce site, the latency wins of local hosting combined with the flexibility of K3s is a competitive advantage you can measure in milliseconds.
Ready to build your own Iron Function platform? Don't let IOwait kill your performance. Deploy a high-frequency NVMe instance on CoolVDS today and experience the difference of raw, unthrottled KVM power.