Serverless Without the Lock-in: Self-Hosted FaaS Patterns for the GDPR Era

It is April 2018. We are barely a month away from the GDPR enforcement date (May 25th), and the panic in European tech circles is palpable. While developers are obsessing over AWS Lambda and the promise of "zero ops," systems architects and CTOs in Oslo and Bergen are watching two very different metrics: data sovereignty and cold-start latency.

The serverless hype train suggests that managing your own servers is dead. They want you to believe that uploading a zip file to a black box in us-east-1 is the future. But when your Norwegian e-commerce site takes 3 seconds to render because a function went cold, or when Datatilsynet (the Norwegian Data Protection Authority) asks exactly which physical drive your customer's data touched, "serverless" becomes a liability.

This isn't an argument against the architecture; event-driven functions are brilliant. It is an argument against the implementation. The solution isn't to abandon serverless; it's to bring it home using high-performance VPS instances and open-source FaaS (Function as a Service) frameworks.

The Architecture: The "Sidecar" FaaS Pattern

In a standard monolithic migration, you don't rewrite everything at once. You strangle the monolith. The most robust pattern we are seeing in 2018 is the Sidecar FaaS approach.

Your main application (perhaps a PHP 7.1/Laravel app or a Java Spring service) remains on your primary KVM instance. You then deploy a lightweight FaaS cluster (using OpenFaaS or Kubeless) on adjacent VPS nodes to handle asynchronous, bursty tasks like image resizing, PDF generation, or webhook processing.

Why split it? Isolation and Resource Guarding.

If you run image processing on your main web server, a sudden influx of uploads can starve your Nginx processes of CPU cycles. By offloading this to a FaaS cluster on a separate CoolVDS instance, you protect the core application's latency.
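
Handing work to the sidecar is nothing more than an HTTP call to the OpenFaaS gateway. A minimal sketch (the gateway IP and function name here are illustrative):

# Offload a resize job to the FaaS node over the private network.
# The async route queues the request via NATS and returns HTTP 202
# immediately, so the web tier never blocks on image processing.
curl -X POST http://10.0.0.6:8080/async-function/resize-image \
     --data-binary @upload.jpg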

Deploying OpenFaaS on Docker Swarm (2018 Standard)

While Kubernetes (now at v1.10) is gaining traction, Docker Swarm remains the pragmatist's choice for teams smaller than 20 people. It is simple, declarative, and robust. Here is how we deploy a FaaS framework on a fresh CoolVDS instance running Ubuntu 16.04 LTS.

First, we initialize the swarm and deploy the stack. Note: you need Docker CE 17.12 or newer (18.03 is current as of this writing) for this to work smoothly.

# Initialize Swarm on the manager node
docker swarm init --advertise-addr $(hostname -i)

# Clone the OpenFaaS stack
git clone https://github.com/openfaas/faas
cd faas

# Deploy the stack
./deploy_stack.sh
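
Before pushing functions, confirm the stack has converged. Assuming the default func_ prefix used by the 2018 stack file, you should see the gateway, queue worker, NATS, and Prometheus services each at their desired replica count:

# List Swarm services and their replica counts
docker service ls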

The script spins up the Gateway, NATS (for async queueing), and Prometheus (for metrics). The beauty of running this on CoolVDS is the storage backend. OpenFaaS relies heavily on Docker image layers: when a function is invoked, the container must spin up instantly, and on standard spinning rust (HDD) or even cheap SATA SSDs, the I/O wait can introduce 500ms+ of latency.

CoolVDS instances use NVMe storage. This reduces the disk read time for container layers to negligible levels, making your "cold starts" feel warm.
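
You can measure this yourself. A crude but honest proxy (the image tag is just an example) is to time a full start/stop cycle of a trivial container once its layers are on local disk:

# Pull once so the layers are local, then time a container round-trip
docker pull alpine:3.7
time docker run --rm alpine:3.7 true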

Configuring the Function Stack

Here is a sample stack.yml for a Node.js 8 function that processes GDPR deletion requests. Notice the environment variables—we keep everything local.

provider:
  name: faas
  gateway: http://127.0.0.1:8080

functions:
  gdpr-cleanup:
    lang: node8
    handler: ./gdpr-cleanup
    image: registry.coolvds-internal.no/gdpr-cleanup:latest
    environment:
      mysql_host: "10.0.0.5" # Internal Private Network IP
      write_debug: true
    secrets:
      - mysql_password
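
Deploying is a short loop with faas-cli. The secret must exist in the swarm before the function starts, and the registry is the internal one assumed in the stack.yml above:

# Create the Docker secret referenced in stack.yml (on a manager node)
echo -n 'changeme' | docker secret create mysql_password -

# Build the image, push it to the internal registry, and deploy
faas-cli build -f stack.yml
faas-cli push -f stack.yml
faas-cli deploy -f stack.yml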

The "Warm Pool" Optimization

A major issue with public cloud functions is that you have zero control over when they kill your container. On your own VPS, you control the lifecycle. We can optimize the "watchdog" process in OpenFaaS to keep functions alive longer.

Inside your function's Dockerfile, or by passing environment variables at deploy time, you can tune the watchdog's fprocess timeouts.
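
As a sketch, the classic watchdog reads its limits from environment variables, which faas-cli can override at deploy time (values in seconds; the numbers here are illustrative, not recommendations):

# Widen the watchdog's execution window for long-running cleanup jobs
faas-cli deploy -f stack.yml \
  --env read_timeout=60 \
  --env write_timeout=60 \
  --env exec_timeout=300

Keep in mind that the gateway enforces its own upstream timeouts, so long-running functions need headroom there as well.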

The real performance gain, however, comes from kernel tuning on the underlying host. A high-throughput FaaS node must handle thousands of short-lived TCP connections, and the default Linux settings are too conservative for that workload.

Edit your /etc/sysctl.conf on the CoolVDS host:

# Allow reuse of sockets in TIME_WAIT state for new connections
net.ipv4.tcp_tw_reuse = 1

# Increase the local port range to avoid exhaustion during bursts
net.ipv4.ip_local_port_range = 1024 65000

# Increase the maximum number of open files (vital for Docker/container heavy loads)
fs.file-max = 100000

# Swappiness 1 to prefer RAM over swap (NVMe is fast, RAM is faster)
vm.swappiness = 1

Apply these with sysctl -p. These settings are crucial. I have seen FaaS clusters choke not on CPU, but because they ran out of file descriptors or ephemeral ports.
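
When you suspect that failure mode, two quick checks reveal whether you are burning through ports or descriptors (standard Linux interfaces, nothing distribution-specific):

# Socket summary, including how many connections sit in TIME_WAIT
ss -s

# Allocated file handles versus the system-wide maximum
cat /proc/sys/fs/file-nr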

Latency: Oslo vs. Frankfurt

Let's talk about physics. Light travels at a finite speed. If your users are in Norway, and your API Gateway triggers a function hosted in AWS eu-central-1 (Frankfurt) or eu-west-1 (Ireland), you are eating a 30-50ms round-trip tax on every request before processing even begins.

By hosting on a VPS in Norway, that network latency drops to under 5ms for local users. For a chain of microservices where Service A calls Service B calls Service C, the latency compounds: 50ms becomes 150ms, while 5ms becomes just 15ms.

Pro Tip: Use `curl`'s built-in timing variables (the -w flag) to benchmark your current endpoint latency from a local terminal:

curl -w "Connect: %{time_connect} TTFB: %{time_starttransfer} Total: %{time_total}\n" -o /dev/null -s https://api.yoursite.com/function

Data Integrity and the "Noisy Neighbor" Problem

In 2018, the Meltdown and Spectre CPU vulnerabilities are fresh in our minds. Public cloud providers have patched their hypervisors, but the performance penalty on shared tenancy can be significant—sometimes up to 20-30% on specific syscall-heavy workloads (like Docker container spawning).

At CoolVDS, we use KVM (Kernel-based Virtual Machine) with strict resource isolation. Unlike OpenVZ or container-based VPS solutions where the kernel is shared, KVM provides a higher degree of isolation. This is critical for FaaS workloads where you are executing untrusted code or processing sensitive GDPR data. You need to know that your CPU cycles are yours, and that a crypto-miner on the neighboring VM isn't stealing your I/O.
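
If the node runs a 4.15+ kernel (which exposes this sysfs interface), you can inspect exactly which Meltdown/Spectre mitigations are active:

# Report kernel-level CPU vulnerability mitigations (kernel 4.15+)
grep . /sys/devices/system/cpu/vulnerabilities/*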

Database Connectivity in FaaS

A common mistake in serverless patterns is opening a new database connection for every function invocation. This will kill your MySQL server instantly under load. Since we are running our own infrastructure, we can implement connection pooling at the node level, or design our functions to reuse the context if the container is warm.
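
The trick is to hoist the pool out of the handler so a warm container reuses it across invocations. A minimal Node.js 8 sketch using the mysql package (the account, database, and payload shape are hypothetical):

// handler.js: the pool lives at module scope, so it is created once per
// container and reused for every invocation while the container is warm.
const fs = require('fs');
const mysql = require('mysql');

const pool = mysql.createPool({
  connectionLimit: 5, // keep per-replica limits small; they multiply by replica count
  host: process.env.mysql_host,
  user: 'gdpr_worker', // hypothetical service account
  // Swarm mounts the secret from stack.yml as a file under /run/secrets
  password: fs.readFileSync('/run/secrets/mysql_password', 'utf8').trim(),
  database: 'shop' // hypothetical schema
});

module.exports = (context, callback) => {
  // context is the raw request body handed over by the watchdog
  const userId = JSON.parse(context).user_id;
  pool.query('DELETE FROM sessions WHERE user_id = ?', [userId], (err) => {
    callback(err, err ? undefined : 'sessions purged');
  });
};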

However, if you must connect directly, ensure your MySQL configuration (my.cnf) is ready for high concurrency. Raise the max_connections and adjust the thread cache:

[mysqld]
max_connections = 500
thread_cache_size = 50
# Ensure you are using InnoDB for row-level locking
default-storage-engine = InnoDB
innodb_buffer_pool_size = 2G # Adjust based on your CoolVDS RAM plan
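
After restarting mysqld, verify the limits took effect and watch thread churn under load; a steadily rising Threads_created count means the cache is still too small:

# Check the effective limit and thread-cache behavior
mysql -e "SHOW VARIABLES LIKE 'max_connections';"
mysql -e "SHOW GLOBAL STATUS LIKE 'Threads_%';"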

Feature                 Public Cloud FaaS (AWS/Azure)        Self-Hosted FaaS (CoolVDS)
Data Location           Opaque (region-based)                Strictly Norway (GDPR-compliant)
Cold Start              Unpredictable (100ms - 2s)           Tunable / always warm
Cost Model              Per invocation (hard to predict)     Flat monthly rate (predictable)
Execution Time Limit    Strict limits (e.g. 5 min)           Unlimited

Conclusion: Take Back Control

The rush to serverless is justified by the developer experience, not the deployment target. You can have the developer velocity of FaaS—git push to deploy, small decoupled functions, easy scaling—without handing over your data sovereignty and performance budget to a US tech giant.

With the GDPR deadline weeks away, moving your processing logic to a Norwegian jurisdiction is not just a technical optimization; it is a legal safeguard. Building a self-hosted FaaS cluster on CoolVDS gives you the raw NVMe performance required for Docker, the low latency of a local network, and the peace of mind that comes with total system control.

Ready to build your private serverless cluster? Deploy a high-performance KVM instance on CoolVDS today and start your OpenFaaS swarm in under 55 seconds.