Serverless Without the Lock-in: Implementing OpenFaaS on NVMe VPS in Post-Schrems II Europe

Let’s cut through the marketing noise. "Serverless" is a misnomer that usually translates to "someone else's computer, at a 400% markup, with unpredictable latency." If you are running serious workloads in 2020, specifically here in the Nordics, relying entirely on AWS Lambda or Azure Functions is becoming a liability—both financially and legally.

With the recent Schrems II ruling invalidating the Privacy Shield agreement just last month (July 2020), every CTO and system administrator in Norway should be sweating over their data-flow diagrams. If your "serverless" functions pipe Norwegian user data through US-controlled hyperscalers, you are now operating in a legal grey zone that Datatilsynet (the Norwegian Data Protection Authority) will likely frown upon.

There is a better way. You can have the event-driven, auto-scaling developer experience of serverless without the vendor lock-in or the data sovereignty headaches. The answer lies in running OpenFaaS on high-performance, KVM-based VDS infrastructure.

The Architecture: Why Bare-Metal Performance Matters for Functions

In a public cloud serverless environment, you are fighting for CPU cycles in a massive multi-tenant pool. This leads to the infamous "cold start" problem, where a function takes seconds to spin up. When you control the VDS, you control the resource allocation. However, this only works if your underlying storage I/O can keep up with the rapid creation and destruction of containers.

This is where standard spinning disks fail. For a self-hosted serverless stack, NVMe storage is not a luxury; it is a requirement. When OpenFaaS scales from 1 to 50 replicas in response to a traffic spike, the Docker daemon hammers the storage subsystem reading layers. If you are on standard SATA SSDs (or worse, HDD), your I/O wait times will skyrocket, and your API gateway will time out.

Step 1: Kernel Tuning for High Concurrency

Before we touch Docker, we need to prep the Linux kernel. Most stock distributions (CentOS 7, Ubuntu 18.04/20.04) are tuned for long-running processes, not thousands of ephemeral containers. We need to widen the ephemeral port range and allow faster reuse of sockets stuck in TIME_WAIT.

Add the following to your /etc/sysctl.conf:

# Increase system file descriptor limit
fs.file-max = 2097152

# Widen the port range for high connection rates
net.ipv4.ip_local_port_range = 1024 65535

# Allow reuse of TIME_WAIT sockets for new outbound connections
net.ipv4.tcp_tw_reuse = 1

# Increase the max number of backlog connections
net.core.somaxconn = 65535
net.ipv4.tcp_max_syn_backlog = 65535

# Increase max map count for databases (essential for ELK or generic DBs)
vm.max_map_count = 262144

Apply these changes with sysctl -p. If you skip this, your CoolVDS instance has the raw power, but the kernel will choke on file descriptors during a DDoS attack or a legitimate viral spike.
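One caveat: fs.file-max only raises the system-wide ceiling; the Docker daemon's own per-process limit is enforced separately by systemd. A minimal sketch of an override (the file path follows the standard systemd drop-in convention; the values are illustrative, tune them to your workload):

```ini
# /etc/systemd/system/docker.service.d/limits.conf
[Service]
LimitNOFILE=1048576
LimitNPROC=infinity
```

Activate it with systemctl daemon-reload && systemctl restart docker.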

Step 2: The Stack – Docker Swarm & OpenFaaS

While Kubernetes (k8s) is the industry darling, for a single high-performance VDS or a small cluster, Docker Swarm remains superior in 2020 for its simplicity and lower overhead. We don't need the complexity of `etcd` management for this setup. We need raw throughput.

Initialize Swarm on your primary node (if hostname -i resolves to a loopback address, pass the server's real IP explicitly):

docker swarm init --advertise-addr $(hostname -i)

Now, we deploy OpenFaaS. Clone the official repository and deploy the stack. We are using the 2020 stable release standards.

git clone https://github.com/openfaas/faas
cd faas
./deploy_stack.sh

This installs the Gateway, the Prometheus monitoring stack, and the NATS queue worker. The beauty of this setup on CoolVDS is network proximity. If your users are in Oslo and your VDS is peering at NIX (Norwegian Internet Exchange), the round-trip time (RTT) is negligible compared to routing to AWS Frankfurt or Ireland.
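Before building anything, it is worth confirming the stack actually converged. A quick sanity check against a live Swarm (the func_ service prefix comes from the default stack name used by deploy_stack.sh; the gateway exposes a health endpoint on port 8080 in recent releases):

```shell
# All OpenFaaS services should show replicas 1/1
docker service ls --filter name=func_

# A 200 here means the gateway is up and serving
curl -i http://127.0.0.1:8080/healthz
```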

Step 3: The API Gateway Configuration

The default OpenFaaS gateway is robust, but for production, you should front it with Nginx. This allows you to handle SSL termination and aggressive caching before the request even hits the function logic. This saves CPU cycles for actual processing.

Here is a battle-tested Nginx configuration snippet for handling function proxies:

upstream openfaas {
    server 127.0.0.1:8080;
    keepalive 32;
}

server {
    listen 80;
    server_name functions.your-domain.no;

    location / {
        proxy_pass http://openfaas;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        
        # Aggressive timeouts for fail-fast behavior
        proxy_read_timeout 10s;
        proxy_send_timeout 10s;

        # Buffer settings to handle JSON payloads
        proxy_buffers 16 32k;
        proxy_buffer_size 64k;
    }
}
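Note that the snippet above does SSL-ready proxying but no caching yet. A hedged sketch of a micro-cache for idempotent GET functions (the cache path and zone name faas_cache are arbitrary choices, not OpenFaaS requirements):

```nginx
# In the http {} context: a small cache zone on the NVMe-backed filesystem
proxy_cache_path /var/cache/nginx/openfaas levels=1:2 keys_zone=faas_cache:10m
                 max_size=256m inactive=60s use_temp_path=off;

# Inside the location / block shown above:
proxy_cache faas_cache;
proxy_cache_methods GET HEAD;
proxy_cache_valid 200 5s;          # 5-second micro-cache absorbs bursts
proxy_cache_use_stale updating;    # serve stale while revalidating
add_header X-Cache-Status $upstream_cache_status;
```

Even a 5-second TTL can collapse a burst of identical requests into a single function invocation.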

Pro Tip: Monitor your iowait metrics during load testing. If you see spikes above 10% while pulling images, your storage is the bottleneck. CoolVDS instances utilize enterprise-grade NVMe drives specifically to eliminate this I/O contention during container extraction.

Comparison: Hyperscalers vs. CoolVDS Self-Hosted

| Feature | Public Cloud (AWS/Azure) | Self-Hosted (CoolVDS) |
| --- | --- | --- |
| Cost Structure | Per invocation (unpredictable) | Flat monthly rate (predictable) |
| Data Sovereignty | US jurisdiction (Schrems II risk) | 100% Norwegian/European |
| Cold Starts | Variable (vendor-controlled) | Tunable (keep-warm strategies) |
| Hardware Access | Hidden/throttled | Direct kernel/KVM access |
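The "tunable cold starts" row deserves substance. A common self-hosted strategy is a keep-warm pinger that periodically invokes each function so its replicas stay resident. A minimal stdlib-only sketch; the gateway address and function list are placeholders for your own deployment:

```python
import time
import urllib.request
import urllib.error

GATEWAY = "http://127.0.0.1:8080"   # placeholder: your gateway address
FUNCTIONS = ["fast-api"]            # placeholder: functions to keep warm

def warm_urls(gateway, functions):
    # OpenFaaS routes synchronous invocations via /function/<name>
    return [f"{gateway.rstrip('/')}/function/{name}" for name in functions]

def ping(url, timeout=2):
    # POST an empty body; return the HTTP status, or None on failure
    req = urllib.request.Request(url, data=b"")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status
    except (urllib.error.URLError, OSError):
        return None

def keep_warm(interval=30):
    # Fire a cheap invocation at every function, then sleep
    while True:
        for url in warm_urls(GATEWAY, FUNCTIONS):
            ping(url)
        time.sleep(interval)
```

Run it under systemd or cron; the cost of one empty invocation every 30 seconds is trivial on a flat-rate VDS, whereas on per-invocation billing it adds up.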

Handling the "Noisy Neighbor" Myth

Skeptics will argue that VPS environments suffer from "noisy neighbors": other tenants stealing your CPU time. That was true in the era of OpenVZ and aggressively oversold hosts.

However, modern KVM virtualization, which CoolVDS uses exclusively, provides hardware-assisted isolation. When you reserve 4 vCPUs, the hypervisor scheduler ensures those cycles are yours. Combined with NVMe storage delivering hundreds of thousands of IOPS, the "neighbor" effect is negligible for the vast majority of workloads. You get the raw power of bare metal with the flexibility of virtualization.
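Don't take the IOPS claim on faith; measure it yourself. Assuming fio is installed (apt install fio or yum install fio), a 4K random-read test that bypasses the page cache looks roughly like this:

```shell
# 30-second 4K random-read benchmark with direct I/O
fio --name=randread --filename=/tmp/fio.test --size=1G \
    --rw=randread --bs=4k --ioengine=libaio --direct=1 \
    --numjobs=4 --runtime=30 --time_based --group_reporting

rm /tmp/fio.test
```

On NVMe you should see roughly an order of magnitude more IOPS than on SATA SSDs; on spinning disks the result will make the point for you.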

Deploying a Python Function

Let's verify the stack. We will deploy a simple Python function using the faas-cli.

# Install CLI
curl -sL https://cli.openfaas.com | sudo sh

# Create a new function
faas-cli new --lang python3 fast-api

# Build, Push, and Deploy (ensure you have Docker Hub or local registry set up)
faas-cli up -f fast-api.yml

If your VDS is properly configured, the build process should be nearly instantaneous thanks to the NVMe caching of the base python layers.
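The generated handler.py is a stub that simply echoes the request. A slightly more useful sketch, still matching the python3 template's handle(req) contract (the JSON response shape here is illustrative, not part of OpenFaaS):

```python
# handler.py -- invoked by the OpenFaaS python3 template's wrapper
import json

def handle(req):
    """Parse the request body as JSON and echo it back with a status.

    The template passes the raw request body in as a string and
    returns whatever string this function produces.
    """
    try:
        payload = json.loads(req) if req else {}
    except json.JSONDecodeError:
        return json.dumps({"status": "error", "reason": "invalid JSON"})
    return json.dumps({"status": "ok", "echo": payload})
```

Invoke it with echo '{"a": 1}' | faas-cli invoke fast-api once deployed.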

Conclusion: Take Back Control

The allure of serverless is real, but the implementation details matter. By August 2020, relying on US-based cloud providers for core European infrastructure involves legal risks that didn't exist a few years ago. Furthermore, the cost of "infinite scale" often outweighs the benefits for predictable business workloads.

Building your own serverless platform on CoolVDS gives you the best of both worlds: the developer velocity of functions-as-a-service and the economic and legal security of owned infrastructure. Don't let latency or legislation slow you down.

Ready to build? Deploy a high-performance KVM instance on CoolVDS today and get your private serverless cloud running in under 5 minutes.