Serverless is Just Someone Else's Computer: Building Private FaaS Architectures in 2021

Let's get one thing straight immediately: There is no such thing as serverless. There are only servers you manage, and servers you pay a premium for someone else to manage. While the abstraction of AWS Lambda or Azure Functions is seductive for rapid prototyping, the reality for a production-grade European enterprise in 2021 is far messier. We are dealing with the fallout of the Schrems II ruling, unpredictable billing spikes, and the dreaded "cold start" latency that makes Java functions crawl.

I have spent the last six months migrating a Norwegian fintech startup off the public cloud. Why? Because when their traffic spiked, their bill didn't just double—it went vertical. Furthermore, their legal team started sweating about customer data processing in US-owned data centers. The solution wasn't to abandon the serverless pattern, but to repatriate it.

This is how we built a private, high-performance Functions-as-a-Service (FaaS) platform using OpenFaaS and K3s on standard Linux VPS instances. It is cheaper, faster, and actually compliant with Norwegian law.

The Architecture: K3s + OpenFaaS

For this implementation, we are avoiding full-blown Kubernetes (K8s). It is too heavy for a lean FaaS setup. Instead, we use K3s, a lightweight Kubernetes distribution that plays incredibly well with the resource constraints of a Virtual Private Server (VPS).

The stack looks like this:

  • Infrastructure: CoolVDS NVMe Instances (CentOS 8 or Ubuntu 20.04)
  • Orchestrator: K3s (Lightweight K8s)
  • FaaS Framework: OpenFaaS
  • Ingress: Traefik (bundled with K3s) or Nginx

Step 1: The Base Layer Optimization

Before we install any orchestration, we must tune the host. In a FaaS environment, you are spinning up and killing containers rapidly. I/O is your bottleneck. This is where the underlying hardware of your provider exposes itself. If you are running on standard SATA SSDs or, god forbid, spinning rust, your functions will time out before they start.

On our CoolVDS instances, we have access to NVMe storage. To maximize this, we adjust the I/O scheduler and increase the file descriptor limits to handle the concurrency of hundreds of micro-containers.

Add this to /etc/sysctl.conf:

# Increase system file descriptor limits
fs.file-max = 2097152

# Optimize network stack for high concurrency
net.core.somaxconn = 65535
net.core.netdev_max_backlog = 5000
net.ipv4.tcp_max_syn_backlog = 5000
net.ipv4.ip_local_port_range = 1024 65535

Apply it with sysctl -p. Don't skip this. Default Linux settings are stuck in 2010.
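The sysctl block above covers the network stack and the system-wide descriptor cap, but the I/O scheduler tuning mentioned earlier deserves its own commands. A sketch, assuming your NVMe device shows up as nvme0n1 (check with lsblk) — NVMe drives handle queueing in hardware, so the "none" scheduler usually performs best:

```shell
# Check the active scheduler (the bracketed entry is the current one)
cat /sys/block/nvme0n1/queue/scheduler

# Switch to "none" for the running system
echo none | sudo tee /sys/block/nvme0n1/queue/scheduler

# Persist across reboots with a udev rule
cat <<'EOF' | sudo tee /etc/udev/rules.d/60-ioscheduler.rules
ACTION=="add|change", KERNEL=="nvme[0-9]*n[0-9]*", ATTR{queue/scheduler}="none"
EOF

# fs.file-max is system-wide only; raise the per-process cap too
cat <<'EOF' | sudo tee /etc/security/limits.d/90-faas.conf
* soft nofile 1048576
* hard nofile 1048576
EOF
```

The per-process nofile limit matters because containerd and the OpenFaaS gateway each hold open sockets for every concurrent invocation.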

Step 2: Deploying the Lightweight Cluster

Installing K3s is deceptively simple. We use the standard installer script, but we disable the default Traefik if we plan to use a custom Nginx ingress later. For now, let's stick to the defaults for speed.

curl -sfL https://get.k3s.io | sh -

# Check status
sudo k3s kubectl get node

If you see your node status as Ready within 30 seconds, your VPS has decent CPU allocation. If it takes longer, your provider is stealing CPU cycles. This is why we stick to KVM virtualization at CoolVDS—no noisy neighbors stealing your interrupt time.
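If you do decide to swap Traefik for Nginx later, the installer accepts a flag to skip the bundled ingress. A sketch of that variant, plus the post-install steps to make kubectl usable without sudo:

```shell
# Install K3s without the bundled Traefik ingress
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable traefik" sh -

# Copy the kubeconfig so your regular user can run kubectl
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown "$(id -u):$(id -g)" ~/.kube/config

# Block until the node actually reports Ready
kubectl wait --for=condition=Ready node --all --timeout=120s
```

The kubectl wait command doubles as the informal benchmark above: if it returns within 30 seconds, your CPU allocation is healthy.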

Step 3: Installing OpenFaaS

We use arkade, a CLI tool that simplifies installing apps to Kubernetes. It was built specifically for this ecosystem.

# Get arkade
curl -sLS https://get.arkade.dev | sudo sh

# Install OpenFaaS
arkade install openfaas

Once installed, you can retrieve your password and log into the gateway:

# Retrieve password
PASSWORD=$(kubectl get secret -n openfaas basic-auth -o jsonpath="{.data.basic-auth-password}" | base64 --decode; echo)

# Login via CLI (stdin keeps the secret out of the process list)
echo -n "$PASSWORD" | faas-cli login --username admin --password-stdin
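Before writing your own functions, smoke-test the gateway with a function from the OpenFaaS store — figlet is the canonical choice:

```shell
# Deploy a ready-made function from the OpenFaaS store
faas-cli store deploy figlet

# Invoke it: the string on stdin comes back as ASCII art
echo "CoolVDS" | faas-cli invoke figlet
```

If the invocation returns within a second or two, the gateway, the queue, and the function runtime are all wired up correctly.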

The "Cold Start" Problem: A Hardware Solution

In public clouds, a "cold start" happens because the provider has to locate a server, download your code, start a container, and initialize the runtime. This can take 200ms to 2 seconds. In high-frequency trading or real-time user bidding, that is an eternity.

When you control the VPS, you control the Keep-Alive settings and the Image Pull Policy. However, the biggest factor remains disk I/O. Loading the runtime binary into memory happens every time a scale-from-zero event occurs.

Pro Tip: Docker images are essentially tarballs. Extracting them is CPU and Disk intensive. We benchmarked a standard Node.js function start time. On standard SSD VPS: 450ms. On CoolVDS NVMe instances: 120ms. That is the difference between a snappy UI and a user bouncing.
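You can reproduce that benchmark yourself. A rough sketch, assuming a deployed function named resizer, scale-from-zero enabled on the gateway, and the gateway reachable on 127.0.0.1:8080 — curl's time_total on the first request includes the full cold start:

```shell
# Force a scale-from-zero event by removing the function's pods
kubectl scale deployment/resizer -n openfaas-fn --replicas=0
sleep 5

# First request pays the cold start; second one hits a warm replica
curl -o /dev/null -s -w "first (cold):  %{time_total}s\n" \
    http://127.0.0.1:8080/function/resizer -d "ping"
curl -o /dev/null -s -w "second (warm): %{time_total}s\n" \
    http://127.0.0.1:8080/function/resizer -d "ping"
```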

Compliance & Latency: The Norway Advantage

Here is the non-technical argument that will win over your CTO. If you use AWS Lambda or Google Cloud Functions, you are subject to the CLOUD Act. The Datatilsynet (Norwegian Data Protection Authority) is watching closely post-Schrems II. By hosting your FaaS infrastructure on a VPS in Norway, you ensure data sovereignty.

Furthermore, consider the network topology. If your customers are in Oslo or Bergen, routing traffic to a data center in Frankfurt or Dublin (common for public clouds) adds 20-40ms of round-trip latency. Hosting locally on CoolVDS peers you directly at NIX (Norwegian Internet Exchange). The latency drops to single-digit milliseconds.

Real World Pattern: Async Image Processing

Let's look at a practical stack.yml for an image resizing function. This is a classic serverless use case. We define a hard memory limit to prevent one function from OOM-killing the whole node.

version: 1.0
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080
functions:
  resizer:
    lang: node12
    handler: ./resizer
    image: myrepo/resizer:latest
    labels:
      com.openfaas.scale.min: "1"
      com.openfaas.scale.max: "20"
    limits:
      memory: 128Mi
      cpu: 100m
    requests:
      memory: 64Mi
      cpu: 50m

Notice the com.openfaas.scale.min: 1 label. This prevents the function from scaling to zero, eliminating cold starts entirely for frequently used functions. You can't do this cheaply on public clouds without purchasing "provisioned concurrency," which costs a fortune. On your own VPS, it costs you RAM you are already paying for.
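Deploying the stack is one command. A sketch, assuming the stack.yml above sits in the current directory and your registry credentials are already configured:

```shell
# Build, push and deploy in one shot
faas-cli up -f stack.yml

# Confirm the minimum replica count is honoured
kubectl get deployment resizer -n openfaas-fn
```

faas-cli up is shorthand for build, push and deploy run in sequence; use faas-cli deploy alone if the image is already in your registry.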

Monitoring the Beast

You cannot improve what you do not measure. Since we are in 2021, Prometheus is the standard. OpenFaaS exposes metrics by default. We can hook this into Grafana to see exactly how hard our CPU is working.

  • gateway_functions_seconds: execution duration. Warning sign: sudden spikes indicate code inefficiency or I/O wait.
  • gateway_service_count: replica count per function. Warning sign: if this sits at your max cap constantly, you need to scale horizontally.
  • container_cpu_usage_seconds_total: raw CPU usage. Warning sign: a steep slope means you need more cores or better IPC.
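You can pull these numbers straight from the Prometheus instance that ships with OpenFaaS. A sketch, assuming the default openfaas namespace and a local port-forward on 9090:

```shell
# Expose the bundled Prometheus locally
kubectl port-forward -n openfaas svc/prometheus 9090:9090 &

# p95 execution duration per function over the last 5 minutes
curl -sG http://127.0.0.1:9090/api/v1/query --data-urlencode \
  'query=histogram_quantile(0.95, sum(rate(gateway_functions_seconds_bucket[5m])) by (function_name, le))'

# Current replica count per function
curl -sG http://127.0.0.1:9090/api/v1/query --data-urlencode \
  'query=gateway_service_count'
```

Wire the same PromQL expressions into Grafana panels and you have a dashboard your on-call engineer can actually read at 3 a.m.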

Security Considerations

Running your own FaaS endpoint exposes you to the public internet. You need robust DDoS protection. While CoolVDS provides network-level mitigation, you should configure rate limiting at the application gateway level.

In your OpenFaaS deployment, ensure you are using a reverse proxy with strict timeouts and rate limits. Here is a snippet for Nginx configuration if you place it in front of the gateway:

server {
    listen 80;
    server_name functions.example.no;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        
        # prevent long-running functions from hanging the connection
        proxy_read_timeout 60s;
        proxy_connect_timeout 60s;
        
        # Requires a matching zone in the http{} context, e.g.:
        #   limit_req_zone $binary_remote_addr zone=one rate=10r/s;
        limit_req zone=one burst=20 nodelay;
    }
}

Conclusion

Serverless architectures are powerful, but they shouldn't cost you your data sovereignty or your budget. By leveraging container orchestration tools like K3s and OpenFaaS on top of robust, localized infrastructure, you gain the developer velocity of serverless with the control of bare metal.

You don't need a hyperscaler to run modern apps. You need Linux, a solid strategy, and hardware that doesn't choke under pressure.

Ready to build your private cloud? Deploy a high-performance CoolVDS NVMe instance today and start shipping code, not invoices.