Serverless Without the Lock-in: Implementing Private FaaS for GDPR Compliance
It is March 2018. If you are a CTO or Lead Architect in Norway right now, you have two screaming voices in your head. One is the development team begging to refactor everything into AWS Lambda or Azure Functions because it’s the "future." The other is your legal counsel reminding you that the General Data Protection Regulation (GDPR) enforcement date—May 25th—is barely two months away.
The intersection of these two realities is a dangerous place. While the "Serverless" paradigm promises infinite scalability and pay-per-execution billing, it often hides a darker truth: aggressive vendor lock-in and murky data sovereignty.
When you deploy a function to a public cloud, do you know exactly which physical disk your customer's data touched? Can you guarantee that a specific sub-processor in a US jurisdiction didn't index it? For many Norwegian businesses answering to Datatilsynet, "probably" isn't a legally defensible answer.
There is a pragmatic alternative. You can have the developer experience of FaaS (Functions as a Service) without handing the keys to a hyperscaler. By building a Private Serverless Architecture using Docker Swarm and OpenFaaS on dedicated, high-performance VPS infrastructure, you gain total control over your data residency while keeping the DevOps agility intact.
The Architecture: Hybrid FaaS on Bare-Metal Performance
"Serverless" is a misnomer. There are always servers. The question is who manages them and how noisy the neighbors are. In a public cloud environment, your functions suffer from "cold starts"—the latency incurred when the provider spins up a container to handle a request. This can take anywhere from 100ms to 2 seconds.
In our private architecture, we eliminate this unpredictability. We will use CoolVDS instances as our compute nodes. Why? Because FaaS is I/O intensive. When a function triggers, it often needs to pull binaries, write temp files, or query a database immediately. Traditional spinning HDD VPS hosting cannot handle the random I/O spikes of a FaaS cluster. You need NVMe storage to keep queue times near zero.
The Stack
- Orchestrator: Docker Swarm (it is 2018; Kubernetes 1.10 is out, but Swarm remains far simpler for a team of fewer than 50 engineers to run without a dedicated Ops team).
- FaaS Framework: OpenFaaS (Serverless functions made simple).
- Infrastructure: 3x CoolVDS NVMe instances (1 Manager, 2 Workers) running Ubuntu 16.04 LTS.
- Gateway: Nginx (acting as the ingress controller).
Step 1: The Swarm Initialization
First, we need to cluster our instances. Latency between nodes is critical here. If you are serving Norwegian customers, ensure your CoolVDS instances are located in the Oslo datacenter to minimize the hop to NIX (Norwegian Internet Exchange).
On your primary node (Manager), initialize the Swarm:
# On the Manager Node (10.0.0.1)
root@coolvds-mgr:~# docker swarm init --advertise-addr 10.0.0.1
Swarm initialized: current node (dxn1...) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-49nj1cmql0... 10.0.0.1:2377
On your worker nodes, simply paste the token command. This creates a private mesh network where your functions can communicate securely, completely isolated from the public internet except for the API Gateway.
Step 2: Deploying OpenFaaS
Alex Ellis's OpenFaaS has matured significantly this year. It allows us to define serverless functions as Docker containers. This is the key to avoiding lock-in. If you write a Lambda, you are stuck on AWS. If you write a Docker container, you can run it on CoolVDS, on-premise, or any cloud.
We will clone the project and deploy the stack using the built-in deploy script, which uses docker stack deploy under the hood.
# Install the CLI first
root@coolvds-mgr:~# curl -sL https://cli.openfaas.com | sudo sh
# Clone the OpenFaaS repository
root@coolvds-mgr:~# git clone https://github.com/openfaas/faas && cd faas
# Deploy the OpenFaaS stack onto the Swarm
root@coolvds-mgr:~/faas# ./deploy_stack.sh
This script spins up the core components: the Gateway (router), the Prometheus instance (metrics), and the AlertManager (auto-scaling).
Pro Tip: By default, OpenFaaS might not set resource limits. In a production environment, you must edit the docker-compose.yml file to define memory reservations. Without this, a memory leak in one function could crash your entire CoolVDS node.
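As a sketch, the Compose file v3 deploy.resources keys look like this — the numbers are illustrative placeholders, not a recommendation; size them to your actual functions:

```yaml
# Excerpt from docker-compose.yml (Compose file v3 syntax).
services:
  gateway:
    # ...existing gateway configuration...
    deploy:
      resources:
        limits:
          cpus: "0.5"      # hard ceiling: throttle beyond half a core
          memory: 256M     # hard ceiling: OOM-kill the task, not the node
        reservations:
          memory: 128M     # scheduler only places the task where this fits
```

The important distinction: limits protect the node from a runaway service, while reservations tell the Swarm scheduler how much headroom a task needs before placement.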
Step 3: Creating a GDPR-Safe Function
Let's create a function that processes user data. Because we are running this on our own servers in Norway, we can ensure the data never leaves the jurisdiction. We will use Python 3.
# Scaffold a new function
$ faas-cli new --lang python3 gdpr-processor
# Implementation in ./gdpr-processor/handler.py
def handle(req):
    # This logic runs on YOUR server, not an opaque cloud container
    return "Processed data securely in Oslo: " + req
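If the function actually touches personal data, you will usually want to pseudonymize direct identifiers before they reach a log line or a downstream system. Here is a minimal sketch; the hard-coded salt and function names are illustrative only — in production, inject the key from a Docker secret:

```python
import hashlib
import hmac

# Illustrative only: in a real deployment, load this from a Docker
# secret or vault — never bake it into the function image.
PSEUDONYM_SALT = b"rotate-me-regularly"

def pseudonymize(identifier):
    """Replace a direct identifier (e.g. an email) with a keyed hash.

    HMAC-SHA256 keeps the mapping one-way for anyone without the salt,
    which supports GDPR-style pseudonymization (Art. 4(5)).
    """
    digest = hmac.new(PSEUDONYM_SALT, identifier.encode("utf-8"),
                      hashlib.sha256)
    return digest.hexdigest()

def handle(req):
    # Process and log the pseudonym, never the raw identifier
    return "Processed record for subject " + pseudonymize(req.strip())
```

The same email address always maps to the same pseudonym, so you can still correlate records across invocations without storing the identifier itself.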
Now, build and deploy. This is where the NVMe storage on CoolVDS shines. Building Docker images involves heavy read/write operations. On standard SSDs, this build might take 45 seconds. On NVMe, we typically see it complete in under 15 seconds.
$ faas-cli build -f gdpr-processor.yml
$ faas-cli deploy -f gdpr-processor.yml --gateway http://127.0.0.1:8080
Performance Tuning: The "Cold Start" Myth
In a private setup, you control the "keep-alive" settings. Unlike AWS, which kills your container after minutes of inactivity, you can configure your OpenFaaS stack to keep a warm pool of replicas.
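Depending on your OpenFaaS version, the warm pool can be pinned with scaling labels in the function's stack file. A hedged sketch — verify the exact label names against the docs for the gateway release you deploy:

```yaml
provider:
  name: faas
  gateway: http://127.0.0.1:8080

functions:
  gdpr-processor:
    lang: python3
    handler: ./gdpr-processor
    image: gdpr-processor:latest
    labels:
      # Keep replicas warm: never scale below 2, cap bursts at 10
      com.openfaas.scale.min: "2"
      com.openfaas.scale.max: "10"
```

With a minimum of 2 replicas, there is always a running container to absorb the first request — the cold start simply never happens.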
However, scaling requires metrics. OpenFaaS watches the gateway_function_invocation_total metric in Prometheus. If the invocation rate spikes, AlertManager fires and the gateway scales up the replicas. To make this responsive, you need to tune the alert rules in Prometheus.
# prometheus/alert.rules
ALERT ServiceHighTraffic
IF rate(gateway_function_invocation_total[10s]) > 5
FOR 5s
LABELS { severity = "major", service = "gateway" }
ANNOTATIONS { ... }
A 10-second rate window is aggressive, but necessary for bursty workloads. Ensure your underlying VPS has dedicated CPU cores (low steal time) so the monitoring overhead doesn't take cycles from the actual functions.
The Economic & Legal Argument
Why go through this trouble? Two reasons: TCO and Compliance.
- TCO (Total Cost of Ownership): Public cloud FaaS is cheap at low volume, but expensive at scale. If your application triggers 50 million executions a month, the bill adds up fast. With CoolVDS, you pay a flat monthly fee for the infrastructure; as long as your nodes have the headroom, 50 million executions cost the same as 1 million.
- Compliance: When the auditor asks, "Where is the data?", you don't point to a cloud region map. You point to a specific rack in a specific datacenter in Norway.
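A back-of-the-envelope comparison makes the TCO point concrete. The per-invocation prices and the flat node fee below are hypothetical placeholders, not a quote — substitute your provider's real 2018 rates:

```python
def public_faas_cost(invocations, avg_duration_s, mem_gb,
                     price_per_million=0.20, price_per_gb_s=0.0000166667):
    """Rough public-cloud FaaS bill: request charge + compute (GB-seconds).

    Default prices are illustrative placeholders modeled on typical
    2018 pay-per-use FaaS pricing -- plug in your provider's real rates.
    """
    request_cost = invocations / 1_000_000 * price_per_million
    compute_cost = invocations * avg_duration_s * mem_gb * price_per_gb_s
    return request_cost + compute_cost

def private_faas_cost(flat_monthly_fee, nodes=3):
    """Flat infrastructure bill: independent of invocation count."""
    return flat_monthly_fee * nodes

# 50M invocations/month at 1s average and 1GB per function,
# versus a hypothetical flat fee per NVMe node
public = public_faas_cost(50_000_000, 1.0, 1.0)
private = private_faas_cost(40)
```

The public-cloud bill grows linearly with invocations; the private cluster's bill is a horizontal line. Somewhere between those two curves is your break-even point — find it for your own traffic profile before committing either way.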
Building your own serverless platform is not for everyone. It requires managing the Docker Swarm and monitoring the underlying OS. But for organizations that need to guarantee low latency to Norwegian users and strict adherence to the upcoming privacy laws, it is the only architecture that makes sense.
Don't let the cloud giants dictate your architecture or your compliance strategy. Take control of your stack.
Ready to build your private FaaS cluster? Deploy a high-performance, NVMe-backed instance on CoolVDS today and get your Docker Swarm running in under 2 minutes.