
Beyond AWS Lambda: Building GDPR-Compliant Serverless Architectures on European VPS

Serverless Without the Cloud Tax: A Pragmatic Approach for 2022

The promise of serverless computing was supposed to be simple: focus on code, forget the infrastructure. But for any CTO operating in the European Economic Area (EEA) in 2022, the reality is far more complex. Between the unpredictable billing spikes of AWS Lambda and the legal minefield created by the Schrems II ruling, pushing sensitive customer data to US-owned hyperscalers is no longer the default "safe" choice.

If you are processing data for Norwegian users, Datatilsynet (The Norwegian Data Protection Authority) is watching. The solution isn't to abandon the event-driven serverless pattern—it's to repatriate it. By deploying a lightweight Function-as-a-Service (FaaS) framework on high-performance KVM instances, you gain three things: fixed costs, sub-millisecond I/O latency, and data sovereignty.

The Architecture: faasd over Kubernetes

While Kubernetes is the industry darling, running a full K8s cluster for a medium-sized workload is often overkill. It consumes resources just to manage itself. For a lean, high-throughput setup on a robust VPS, I recommend faasd. It's a provider for OpenFaaS that strips away Kubernetes and uses containerd directly.

This architecture requires underlying hardware that doesn't steal CPU cycles. When a function triggers, you need instant execution. This is where the "noisy neighbor" effect of budget shared hosting destroys performance. You need dedicated resources. At CoolVDS, we utilize KVM virtualization on NVMe storage specifically to eliminate the cold-start latency that plagues containerized workloads.

Step 1: The Foundation

First, secure a VPS running Ubuntu 20.04 LTS (or 22.04 if you've updated recently). Ensure you have at least 2 vCPUs and 4GB RAM for production workloads. Disk speed matters here: containerd pulls and unpacks every image layer to local storage, so slow I/O shows up directly in deploy times and cold starts.
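
Before installing anything, sanity-check the disk. A quick, if crude, sequential write test with direct I/O (bypassing the page cache); an NVMe-backed instance should report several hundred MB/s or more:

dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 oflag=direct
rm /tmp/ddtest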

Let's install the tooling. The command below installs faas-cli, the client you'll use to build and deploy functions:

curl -sfL https://cli.openfaas.com | sudo sh

Next, we clone the faasd repo and run the installation script. faasd isn't just a Docker wrapper: the script sets up the systemd services and the CNI plugins that handle function networking.

git clone https://github.com/openfaas/faasd --depth=1
cd faasd
sudo ./hack/install.sh

Once installed, verify the services are active. If your disk I/O is slow, you will see timeouts here. On CoolVDS NVMe instances, this returns instantly.

sudo systemctl status faasd faasd-provider
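
The installer also generates a basic-auth password for the gateway. Authenticate faas-cli with it before deploying anything:

sudo cat /var/lib/faasd/secrets/basic-auth-password | \
  faas-cli login --username admin --password-stdin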

Step 2: Defining the Function Stack

The beauty of this pattern is the declarative nature of your infrastructure. You define your functions in a YAML file. Here is a production-ready stack.yml configuration for an image processing service, a common use case for Norwegian media companies.

version: 1.0
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080

functions:
  image-resizer:
    lang: python3-http-debian
    handler: ./image-resizer
    image: registry.coolvds-client.no/image-resizer:latest
    labels:
      com.openfaas.scale.zero: true
    environment:
      write_debug: true
      read_timeout: 20s
      write_timeout: 20s
    secrets:
      - s3-access-key
      - s3-secret-key
    limits:
      memory: 256Mi
      cpu: 100m # 10% of a vCPU core
    requests:
      memory: 128Mi
      cpu: 50m

Notice the limits and requests. In a multi-tenant cloud, these are billing caps. On your own VPS, these are Quality of Service (QoS) guarantees. You ensure one runaway function doesn't crash your entire node.
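
With stack.yml in place, deployment is a short sequence: pull the python3-http-debian template used by the handler in Step 4, create the two secrets the function mounts, then build, push, and deploy in one command. The secret file names below are placeholders for wherever you keep your credentials:

faas-cli template store pull python3-http-debian
faas-cli secret create s3-access-key --from-file=./s3-access-key.txt
faas-cli secret create s3-secret-key --from-file=./s3-secret-key.txt
faas-cli up -f stack.yml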

Step 3: Handling the "Cold Start" on Bare Metal

Cold starts occur when a container needs to spin up from zero to handle a request. In AWS Lambda, you have zero control over this (outside of paying for Provisioned Concurrency). On your own infrastructure, you can tune the kernel.

Pro Tip: Adjust your `sysctl.conf` to optimize for rapid container creation. Increasing the connection backlog is critical for burst traffic.

Add the following to /etc/sysctl.conf:

net.core.somaxconn = 4096

Then apply it:

sudo sysctl -p
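
somaxconn covers the listen backlog, but a couple of adjacent settings are worth reviewing for bursty traffic. Treat these as starting points to benchmark against your own workload, not universal defaults:

# Widen the ephemeral port range for outbound connections from functions
net.ipv4.ip_local_port_range = 1024 65535
# Let the kernel queue more incoming packets before dropping under bursts
net.core.netdev_max_backlog = 4096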

Step 4: The Handler Logic

Your code needs to be idempotent. Here is a Python handler that connects to a Redis instance on the same host (for state) and processes data, using the event/context signature of the python3-http template pulled in Step 2. Keeping Redis on the same machine, reached over faasd's local bridge, provides significantly lower latency than a managed Redis instance across a VPC peering link. One caveat: inside a function container, the loopback interface belongs to the container itself, so the handler reaches the host through the CNI bridge gateway; the redis_host environment variable here is a convention of this example, not an OpenFaaS built-in.

import os
import json

import redis

# NOTE: the redis package must be listed in the function's requirements.txt.
# Connection pooling is essential even in serverless: faasd reuses the
# process across invocations, so the pool survives between requests.
# Inside a function container, 127.0.0.1 is the container's own loopback;
# a Redis bound on the host is reached via the CNI bridge gateway
# (10.62.0.1 with faasd's default network config).
REDIS_HOST = os.getenv("redis_host", "10.62.0.1")
pool = redis.ConnectionPool(host=REDIS_HOST, port=6379, db=0)

def handle(event, context):
    r = redis.Redis(connection_pool=pool)

    try:
        payload = json.loads(event.body)
        user_id = payload.get("user_id")

        if not user_id:
            return {"statusCode": 400, "body": "Missing user_id"}

        # Atomic increment for rate limiting logic
        current_usage = r.incr(f"usage:{user_id}")

        return {
            "statusCode": 200,
            "body": json.dumps({
                "status": "processed",
                "usage_count": current_usage,
                "node": os.getenv("HOSTNAME")
            })
        }

    except Exception as e:
        return {"statusCode": 500, "body": str(e)}
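
To smoke-test the function once it's deployed, invoke it through the gateway. Every OpenFaaS function is routed under /function/<name>:

curl -s http://127.0.0.1:8080/function/image-resizer \
  -d '{"user_id": "12345"}'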

Step 5: Securing the Gateway with Nginx

Never expose the OpenFaaS gateway (port 8080) directly to the internet. We need a reverse proxy to handle TLS termination and basic DDoS protection. If you are hosting in Norway, you want to ensure your TLS handshake happens as close to the user as possible.

Install Nginx:

sudo apt install nginx -y

Here is a hardened nginx.conf snippet that limits request rates to prevent abuse: a rudimentary but effective layer of DDoS protection at the application level.

http {
    limit_req_zone $binary_remote_addr zone=faas_limit:10m rate=10r/s;

    server {
        listen 80;
        server_name functions.your-domain.no;

        location / {
            limit_req zone=faas_limit burst=20 nodelay;
            
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            
            # Buffer tuning for large payloads
            client_max_body_size 50M;
            proxy_buffers 4 256k;
            proxy_buffer_size 128k;
        }
    }
}
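
The snippet above only listens on port 80. For production, terminate TLS with a free Let's Encrypt certificate; certbot's Nginx plugin rewrites the server block for you:

sudo apt install certbot python3-certbot-nginx -y
sudo certbot --nginx -d functions.your-domain.no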

The Latency Advantage

Why go through this trouble? Physics. If your users are in Oslo and your serverless functions are in us-east-1 (Virginia), you are fighting the speed of light (~90ms RTT). If you deploy on CoolVDS in a European datacenter, your latency to the Norwegian Internet Exchange (NIX) can be as low as 2-5ms.
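
You can verify the claim from any machine in Norway, using the placeholder domain from the Nginx config above; round-trip time to a well-peered Oslo datacenter should sit in the single digits:

ping -c 10 functions.your-domain.no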

Furthermore, standard VPS providers often oversell their CPU cores. If your neighbor on the physical host starts mining crypto, your function execution time doubles. We architect our KVM clusters to prevent CPU steal, ensuring that the execution times you measure in development match production reality.
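
CPU steal is easy to check yourself. The st column in vmstat reports the percentage of time the hypervisor withheld the CPU from your VM; on a properly provisioned host it stays at 0:

vmstat 1 5    # watch the "st" (steal) column in the output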

Conclusion

Serverless is an architectural pattern, not a billing model. By decoupling the pattern from the public cloud, you regain control over your data and your wallet. You satisfy GDPR requirements by keeping data on European soil, and you ensure consistent performance by running on dedicated NVMe resources.

Infrastructure is the bedrock of modern applications. Don't build your house on rented land that you can't control.

Ready to own your infrastructure? Deploy a high-performance NVMe VPS on CoolVDS today and start building your sovereign serverless stack.