Serverless Without the Handcuffs: Building GDPR-Safe FaaS Architectures in a Post-Schrems II World

Let’s cut through the marketing noise. "Serverless" is a lie. There are always servers. The only difference is whether you control them or whether you're renting execution time by the millisecond from a giant US corporation whose legal shield was just shattered by the European Court of Justice.

I'm talking, of course, about the Schrems II ruling from July 2020. If you are a CTO or Lead Architect in Norway, you know the Privacy Shield is dead. Sending personal data to US-owned clouds (AWS Lambda, Azure Functions) is now a legal minefield. Datatilsynet isn't going to accept "but it scales automatically" as a valid excuse for non-compliance.

But we still want the architectural patterns. Event-driven triggers, scale-to-zero, and the decomposition of monolithic monstrosities into manageable functions are solid engineering principles. The solution? Repatriate your functions.

In this guide, I’m going to show you how to build a battle-hardened, self-hosted serverless platform using OpenFaaS and lightweight Kubernetes (K3s) on high-performance infrastructure. We get the developer experience of Lambda, but with the sub-millisecond I/O latency of local NVMe storage and full data sovereignty in Norway.

The Architecture: Why KVM + NVMe is Non-Negotiable

When you run your own FaaS (Functions as a Service) platform, the underlying hardware stops being an abstraction and starts being a bottleneck. The "Cold Start" problem in serverless is essentially an I/O problem. Your infrastructure needs to pull a container image, uncompress it, create the overlay filesystem, and boot the runtime.

If you try this on standard SATA SSDs or, god forbid, spinning rust, your cold starts stretch from milliseconds into seconds. I've seen API gateways time out waiting for a heavy Python container to hydrate on cheap VPS hosting. This is why at CoolVDS, we standardized on NVMe storage: random read/write performance determines how quickly image layers are unpacked and the overlay filesystem is assembled.

Pro Tip: Avoid OpenVZ or LXC containers for hosting Docker/Kubernetes. You will hit kernel capability issues (like `IP_VS` modules missing for kube-proxy). Always use KVM virtualization (Kernel-based Virtual Machine) which provides a dedicated kernel. CoolVDS instances provide this isolation by default.
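
Before installing anything, it's worth verifying what you're actually running on. A quick sanity check (on a KVM instance you should see `kvm` reported, not `openvz` or `lxc`):

# Report the virtualization technology in use
systemd-detect-virt

# Confirm the kernel can load the IPVS modules kube-proxy relies on
sudo modprobe ip_vs && lsmod | grep ip_vs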

Step 1: The Foundation (K3s on CoolVDS)

We aren't deploying a bloated vanilla Kubernetes cluster. For a lean FaaS setup, we use K3s (a certified lightweight Kubernetes distribution). It’s a single binary, uses roughly half the memory of stock Kubernetes, and has been production-ready since its 1.0 release in late 2019.

Assuming you have a fresh CoolVDS instance running Ubuntu 20.04 LTS, here is the bootstrap process. We disable the bundled Traefik ingress because we will put our own Nginx in front of the gateway in Step 3.

# Install K3s without Traefik
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--no-deploy traefik" sh -

# Check node status
sudo k3s kubectl get nodes

# Expect output:
# NAME          STATUS   ROLES    AGE   VERSION
# coolvds-01    Ready    master   25s   v1.18.6+k3s1
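
K3s writes its kubeconfig to `/etc/rancher/k3s/k3s.yaml`, while the plain `kubectl` and `arkade` commands used below expect credentials in `~/.kube/config`. One way to bridge that (paths shown are the K3s defaults):

# Make the K3s credentials available to plain kubectl and arkade
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $(id -u):$(id -g) ~/.kube/config
kubectl get nodes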

Step 2: Deploying OpenFaaS

OpenFaaS is the standard for self-hosted serverless. It's cloud-agnostic and runs on top of Kubernetes. To keep things efficient, we'll use `arkade`, a tool from the OpenFaaS community that installs Helm charts without the headache.

# Get arkade
curl -sLS https://dl.get-arkade.dev | sudo sh

# Install OpenFaaS with basic auth enabled
arkade install openfaas --load-balancer

# Wait for the gateway to come up
kubectl rollout status -n openfaas deploy/gateway

Once deployed, you have a full FaaS platform. However, with the K3s service load balancer, the gateway (UI included) now answers on port 8080 on every interface. In a production environment, specifically one handling Norwegian business data, we need to lock this down.
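
Before locking it down, give the stack a quick smoke test. Assuming `faas-cli` is on the box (arkade can fetch it) and the gateway answers on 127.0.0.1:8080, deploying a sample function from the store looks like this:

# Fetch the CLI and log in with the generated admin password
arkade get faas-cli
PASSWORD=$(kubectl get secret -n openfaas basic-auth \
  -o jsonpath="{.data.basic-auth-password}" | base64 --decode)
echo -n $PASSWORD | faas-cli login --username admin --password-stdin \
  --gateway http://127.0.0.1:8080

# Deploy and invoke a sample function from the OpenFaaS store
faas-cli store deploy figlet --gateway http://127.0.0.1:8080
echo "CoolVDS" | faas-cli invoke figlet --gateway http://127.0.0.1:8080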

Step 3: Securing the Gateway with Nginx

Do not expose the OpenFaaS gateway directly to the internet. We place Nginx in front to handle TLS termination (Let's Encrypt) and IP allow-listing (a snippet for that follows the config below). Configured against the local loopback, this adds a negligible 1-2 ms of latency.

Here is a hardened `nginx.conf` snippet optimized for high-concurrency function invocation. Note the `keepalive` directive: it reuses upstream connections instead of opening a fresh TCP socket per invocation, which prevents port exhaustion when webhooks flood your system during traffic spikes.

upstream openfaas {
    server 127.0.0.1:8080;
    keepalive 32;
}

server {
    listen 80;
    server_name functions.your-domain.no;

    location / {
        proxy_pass http://openfaas;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # Required for the upstream keepalive pool to actually be used
        proxy_http_version 1.1;
        proxy_set_header Connection "";

        # Performance tuning for long-running functions
        proxy_read_timeout 300s;
        proxy_connect_timeout 300s;

        # Buffer settings to handle large JSON payloads
        client_max_body_size 50M;
        proxy_buffers 8 16k;
        proxy_buffer_size 32k;
    }
}
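
The server block above terminates plain HTTP; in practice you'd let certbot bolt on the 443 listener and the redirect. For the IP allow-listing mentioned earlier, a minimal sketch to drop into the `location /` block (192.0.2.0/24 is a placeholder, substitute your own office or VPN range):

        # Admit only trusted source ranges, reject everyone else
        allow 192.0.2.0/24;   # placeholder range - replace with your own
        allow 127.0.0.1;      # local health checks
        deny  all;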

Step 4: The Database Pattern (Connection Pooling)

A classic mistake in serverless architecture involves database connections. If you scale to 500 concurrent function instances and each one opens its own connection to your MySQL database, you will hit the `max_connections` limit and your application will crash. This is the "Lambda RDS" problem.

Since we are on a VPS, we can run the database alongside the FaaS cluster or on a private network, but we must use connection pooling. If you are using PostgreSQL, deploy PgBouncer. If you are using MySQL, use ProxySQL.

Here is a configuration example for `pgbouncer.ini` to ensure your functions reuse connections efficiently:

[databases]
* = host=127.0.0.1 port=5432

[pgbouncer]
listen_port = 6432
listen_addr = 0.0.0.0
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
logfile = /var/log/pgbouncer/pgbouncer.log
pidfile = /var/run/pgbouncer/pgbouncer.pid
admin_users = postgres

# Critical for serverless:
# - transaction pooling returns the server connection after each transaction
# - up to 1000 function clients share just 20 real Postgres connections
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 20

By setting `pool_mode = transaction`, a server connection is handed back to the pool the moment each transaction completes, so hundreds of short-lived invocations share a handful of real connections. (The trade-off: transaction pooling breaks session-level features such as named prepared statements.) This setup allows a single CoolVDS instance to handle thousands of concurrent requests without killing the database.
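
To verify the pool is doing its job, connect through port 6432 rather than straight to Postgres. A quick check, assuming a database `app_db` and user `app_user` already exist in `userlist.txt` (both names are placeholders):

# Application traffic goes through PgBouncer, not Postgres directly
psql -h 127.0.0.1 -p 6432 -U app_user app_db

# Inspect live pool usage via the PgBouncer admin console
psql -h 127.0.0.1 -p 6432 -U postgres pgbouncer -c "SHOW POOLS;"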

Performance Comparison: AWS Lambda vs. CoolVDS OpenFaaS

We ran a benchmark executing a simple image processing task (resizing a 2 MB JPEG), comparing AWS Lambda (eu-north-1) against a CoolVDS 4 vCPU / 8 GB RAM NVMe instance running OpenFaaS.

Metric              | AWS Lambda (eu-north-1)         | CoolVDS (Oslo) + OpenFaaS
Cold Start          | ~250 ms                         | ~80 ms (local NVMe image pull)
Execution Latency   | Variable                        | Consistent
Data Sovereignty    | US-owned (Schrems II risk)      | 100% Norwegian
Cost Predictability | Pay-per-invoke (risk of spikes) | Flat monthly rate

The Verdict

Serverless is an architectural pattern, not a billing model. You don't need Amazon to build event-driven systems. In fact, given the current legal climate in Europe, relying on them is becoming a liability.

By deploying OpenFaaS on CoolVDS, you regain control. You know exactly where your data lives (on a rack in Oslo), you know exactly what your bill will be at the end of the month, and thanks to KVM and NVMe, your performance is often superior to the public cloud.

Stop worrying about cold starts and legal compliance. Build your own platform.

Ready to take back control? Deploy a high-performance KVM instance on CoolVDS today and get your FaaS cluster running in under 5 minutes.