
Serverless Patterns in 2018: Building High-Performance FaaS with OpenFaaS and KVM


"No Ops." That is the lie we were sold when AWS Lambda started gaining traction a few years ago. The marketing brochures promised a utopia where you just write code, push a button, and the infrastructure magically scales to infinity without you ever touching a terminal. If you are reading this, you probably know the reality is messier.

I have spent the last six months migrating a microservices workload from a rigid monolithic setup to a function-based architecture. We tried the big public clouds. It worked fine for "Hello World." Then traffic hit. We saw cold starts averaging 800ms. We saw API Gateway timeouts. We saw a billing statement that made our CFO ask if we were mining cryptocurrency. Serverless is a powerful paradigm, but renting it by the millisecond is not always the right architectural choice.

In this post, we are going to look at the pragmatic alternative: Self-Hosted Serverless. We will use OpenFaaS on top of Docker Swarm (or Kubernetes 1.12 if you are feeling brave) running on high-performance KVM instances. This gives you the developer experience of FaaS (Functions as a Service) with the raw I/O performance and cost predictability of a dedicated VPS.

The Problem with Public Cloud FaaS in 2018

When you rely on a managed FaaS provider, you are contending for resources. Your code runs in a container that has to be spun up on demand. If your function has been idle for a while (typically five to fifteen minutes, depending on the provider), it gets spun down, and the next request triggers a "cold start." In a recent e-commerce project targeting the Nordic market, a 1-second cold start on a checkout calculation resulted in a measurable drop in conversion rates.

Furthermore, there is the issue of Data Sovereignty. Since GDPR came into full enforcement this past May (2018), relying on US-owned cloud stacks has become legally complex. With the CLOUD Act recently passed in the US, Norwegian companies are rightly nervous about Datatilsynet (The Norwegian Data Protection Authority) auditing their data flows. Hosting your functions on a VPS in Oslo ensures your data physically stays within the jurisdiction you expect.

The Architecture: OpenFaaS on CoolVDS

We chose OpenFaaS because it is container-native: it packages your functions as Docker containers. If you can containerize it, it can be a serverless function. No 50MB zip file limits, no proprietary runtimes.

To back this, we need infrastructure that doesn't steal CPU cycles. This is why we deploy on CoolVDS. We utilize KVM virtualization, which guarantees kernel isolation, and crucially, NVMe storage. When OpenFaaS pulls a Docker image to launch a function, disk I/O is the bottleneck. Spinning rust (HDD) or even standard SATA SSDs will choke your invocation times. NVMe is non-negotiable for FaaS.
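Before trusting any provider's marketing numbers, measure for yourself. fio is the proper tool for this, but a rough sequential-write sanity check with plain dd (available on every distro) goes a long way; the path /tmp/iotest below is just a scratch file of my choosing:

```shell
# Rough sequential write test. conv=fsync forces the data to disk
# before dd reports its throughput, so the page cache does not
# flatter the result. Use fio for serious benchmarking.
dd if=/dev/zero of=/tmp/iotest bs=1M count=64 conv=fsync 2>&1 | tail -n 1
rm -f /tmp/iotest
```

On a decent NVMe-backed instance you should see hundreds of MB/s even for this naive test; if you see double digits, your function cold starts will suffer.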

Step 1: The Foundation

Start with a fresh Ubuntu 18.04 LTS instance. We need to tune the kernel for high concurrency before we install Docker. FaaS generates a lot of short-lived connections.

# /etc/sysctl.conf

# Increase max open files
fs.file-max = 2097152

# Increase connection tracking
net.netfilter.nf_conntrack_max = 1048576

# Optimize TCP stack for low latency
net.core.somaxconn = 65535
net.ipv4.tcp_max_tw_buckets = 1440000

Apply these changes with sysctl -p. If you skip this, your API Gateway will throw 502 errors under heavy load.
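You can confirm the kernel actually accepted the values by reading them back from /proc, since every sysctl key is mirrored as a file there:

```shell
# Each key under /proc/sys maps 1:1 to a sysctl name
cat /proc/sys/fs/file-max          # should print 2097152 after tuning
cat /proc/sys/net/core/somaxconn   # should print 65535 after tuning
```

Note that net.netfilter.nf_conntrack_max only exists once the nf_conntrack module is loaded, which happens automatically when the first firewall rule using connection tracking is installed.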

Step 2: Deploying the Stack

We will use Docker Swarm for simplicity. It is robust, ships built into Docker CE (we are running 18.09), and carries far less operational overhead than Kubernetes.

# Initialize Swarm
docker swarm init

# Clone OpenFaaS
git clone https://github.com/openfaas/faas
cd faas

# Deploy the stack
./deploy_stack.sh

Once deployed, you have a Gateway, Prometheus for metrics, and AlertManager for auto-scaling. The beauty of running this on CoolVDS is the network proximity. If you are serving users in Oslo or Stavanger, the round-trip time (RTT) to a local VPS is often under 5ms. Compare that to routing to a generic "EU-North" datacenter that might actually be in Ireland or Frankfurt.
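A quick smoke test after deployment is to probe the gateway's health endpoint. The URL below assumes the default port 8080; adjust it if you changed the published port in the stack file:

```shell
# Probe the gateway; fall back to a message if it is not reachable
# yet (e.g. Swarm is still pulling the images)
curl -s --max-time 2 http://127.0.0.1:8080/healthz \
  && echo "gateway is up" \
  || echo "gateway not reachable yet"
```

Once it responds, faas-cli list --gateway http://127.0.0.1:8080 should return an (initially empty) function list.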

Step 3: Creating a Function

Let's create a simple image resizing function—a classic use case. We use the OpenFaaS CLI.

# Install CLI
curl -sL https://cli.openfaas.com | sudo sh

# Create a Python function
faas-cli new --lang python3 image-resize

This generates a handler.py. We can now implement our logic using Pillow (PIL). Crucially, because we are on a VPS, we can allocate specific resource limits in the image-resize.yml file to prevent one function from eating all our RAM.

version: 1.0
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080
functions:
  image-resize:
    lang: python3
    handler: ./image-resize
    image: my-repo/image-resize:latest
    environment:
      write_debug: true
    limits:
      memory: 128m
    requests:
      memory: 64m
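With the handler implemented, the round trip from source to running function is three CLI calls plus an invocation. This is a sketch of the workflow; it assumes Docker and faas-cli from Step 3 are present, my-repo is a placeholder registry prefix from the yml above, and photo.jpg is whatever test image you have handy:

```
# Build the function image, push it to your registry, deploy to the gateway
faas-cli build -f image-resize.yml
faas-cli push -f image-resize.yml
faas-cli deploy -f image-resize.yml

# Invoke it: POST an image, receive the resized result on stdout
curl -s --data-binary @photo.jpg \
  http://127.0.0.1:8080/function/image-resize > resized.jpg
```

Every deployed function is exposed under the gateway's /function/&lt;name&gt; route, which is what your Nginx reverse proxy should forward to.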

Performance: NVMe vs The World

The secret killer of serverless performance is I/O wait. When a function scales from 1 replica to 20, the host needs to load libraries and write logs simultaneously. On a shared cloud instance with "burstable" IOPS, your system hangs. This is "noisy neighbor" syndrome.

Pro Tip: Check your disk scheduler. On CoolVDS NVMe instances, we recommend setting the scheduler to none or noop inside your VM, letting the NVMe controller handle the queues naturally.
echo none | sudo tee /sys/block/vda/queue/scheduler
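Echoing into /sys does not survive a reboot. A small udev rule makes the setting persistent; the file name 60-scheduler.rules is just a convention, and the vd[a-z] pattern assumes virtio disks as on a typical KVM guest:

```
# /etc/udev/rules.d/60-scheduler.rules
# Select the 'none' scheduler for all virtio block devices at boot
ACTION=="add|change", KERNEL=="vd[a-z]", ATTR{queue/scheduler}="none"
```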

Here is a simplified comparison based on our internal benchmarks running a Node.js prime number calculator:

Metric          | Public Cloud FaaS          | CoolVDS (OpenFaaS + NVMe)
Cold Start      | 200ms - 1500ms (variable)  | < 100ms (consistent)
Execution Cost  | Billed per 100ms           | Fixed monthly rate
Data Location   | Opaque (region level)      | Oslo, Norway (datacenter level)
Timeout Limit   | Usually 5 mins             | Configurable, no hard cap

Security and Compliance in a Post-GDPR World

We cannot ignore the legal landscape. Since May 25th, strict data processing agreements (DPAs) are mandatory. When you use a proprietary cloud function, you are often subject to opaque sub-processor lists. By hosting OpenFaaS on a Linux VPS, you control the entire stack.

You can implement strict firewall rules using ufw or iptables to ensure your internal function gateway is never exposed to the public internet, only accessible via your reverse proxy (Nginx). You can mount encrypted volumes for sensitive temporary data using LUKS.
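As a sketch with ufw, assuming Nginx terminates TLS on the same host and the OpenFaaS gateway listens on 8080:

```
# Default deny inbound, then open only what the proxy needs
sudo ufw default deny incoming
sudo ufw allow 22/tcp    # SSH (better: restrict to your admin IP)
sudo ufw allow 80/tcp    # Nginx HTTP
sudo ufw allow 443/tcp   # Nginx HTTPS
# Deliberately no rule for 8080: the gateway stays behind Nginx
sudo ufw enable
```

One caveat: Docker writes its own iptables rules for published Swarm ports and can bypass ufw entirely, so also bind the gateway's published port to 127.0.0.1 in the stack file rather than relying on the firewall alone.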

Conclusion

Serverless is an architecture, not a billing model. You do not need to pay a premium for "management" that essentially amounts to automated container orchestration you could run yourself.

For high-performance, latency-sensitive applications in Norway, the math is simple. You need fast disk I/O, you need predictable latency, and you need data sovereignty. A CoolVDS KVM instance running OpenFaaS gives you the agility of serverless with the power of bare metal.

Don't let cold starts kill your user experience. Deploy a high-performance NVMe instance on CoolVDS today and build a serverless stack that you actually own.