Serverless Without the Vendor Tax: Implementing Self-Hosted FaaS Patterns on High-Performance VPS

The "Serverless" Lie: Why Your Architecture Needs Bare Metal Reality

Let’s get one thing straight immediately: Serverless is a billing model, not a magic spell.

The industry buzzwords—Lambda, Azure Functions, Cloud Functions—promise a utopia where you write code and infrastructure vanishes. But for those of us debugging latency spikes at 2 AM, the reality is different. "Serverless" on public cloud often means cold starts, opaque pricing structures, and routing your Norwegian users' data through a data center in Frankfurt or Ireland, adding 30-50ms of unnecessary latency.

If you are building for the Norwegian market, you have two adversaries: latency and Datatilsynet (the Norwegian Data Protection Authority). Public cloud FaaS (Function as a Service) is a black box. You don't know where the physical disk sits, and you can't control the noisy neighbor stealing your CPU cycles.

The solution isn't to abandon the Serverless pattern—event-driven architecture is brilliant—but to abandon the platform. By running self-hosted FaaS on high-performance KVM VPS instances, you reclaim control, slash costs, and keep data within Norwegian borders.

The Architecture: OpenFaaS on Docker Swarm

In early 2019, Kubernetes is eating the world, but for a lean DevOps team, Docker Swarm remains the pragmatic choice for speed and simplicity. We will use OpenFaaS, a framework that lets you run serverless functions on your own hardware. This setup gives you the "scale-to-zero" efficiency without the "lock-in-forever" penalty.

Infrastructure Requirements

FaaS platforms are I/O vampires. They constantly pull Docker images, spin up containers, and write logs. If you attempt this on a standard HDD VPS, your system will choke. You need NVMe storage. Period.

Recommended Spec for Production Node:

  • CPU: 4 vCores (KVM virtualization to prevent steal time)
  • RAM: 8GB+ (Buffer for container density)
  • Storage: NVMe SSD (Crucial for image pull speed)
  • OS: Ubuntu 18.04 LTS or CentOS 7

Pro Tip: At CoolVDS, we specifically tune our KVM host kernels to prioritize I/O interrupts. This prevents the "stutter" you often see on budget VPS providers when multiple containers launch simultaneously.

Step 1: System Tuning for High Concurrency

Before installing Docker, we must prepare the kernel. FaaS generates thousands of short-lived connections. Default Linux settings will run out of file descriptors and hit connection tracking limits.

Edit /etc/sysctl.conf:

# Increase system-wide file descriptors
fs.file-max = 2097152

# Optimize TCP stack for short-lived connections
net.ipv4.tcp_max_tw_buckets = 262144
net.ipv4.tcp_fin_timeout = 15
net.core.somaxconn = 65535
net.core.netdev_max_backlog = 65535

# Increase port range for outbound connections
net.ipv4.ip_local_port_range = 1024 65535

# Raise the connection tracking table so tracked connections aren't dropped under load
net.netfilter.nf_conntrack_max = 262144

Apply these with sysctl -p. If you skip this, your FaaS gateway will 502 under load.
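Note that fs.file-max only raises the system-wide ceiling; the Docker daemon carries its own per-process descriptor limit and will hit that first. On systemd-based distros, a drop-in along these lines (path and value are illustrative; adjust to your workload) raises it, after which you reload with systemctl daemon-reload && systemctl restart docker:

```
# /etc/systemd/system/docker.service.d/limits.conf
[Service]
LimitNOFILE=1048576
```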

Step 2: Deploying the Stack

Assuming you have Docker 18.09+ installed, initialize Swarm and deploy OpenFaaS. This architecture is portable; you can run it on a CoolVDS instance in Oslo today, and move it to a bare metal rack tomorrow without changing a line of application code.

# Initialize Swarm Manager
docker swarm init --advertise-addr $(hostname -i)

# Clone OpenFaaS
git clone https://github.com/openfaas/faas
cd faas

# Deploy the stack
./deploy_stack.sh

This script deploys the Gateway, NATS (for async queuing), and Prometheus (for auto-scaling metrics). Within 60 seconds, you have a functional FaaS platform.

Step 3: The "Norwegian" Function

Let’s write a function that actually solves a local problem: verifying Norwegian Organization Numbers (Brønnøysundregistrene) using the Modulo 11 algorithm. This is a classic CPU-bound task perfect for FaaS.

Create the function using the CLI:

faas-cli new --lang node10 norway-validator

Inside handler.js, we implement the logic using Node 10 syntax (async/await is fully supported).

"use strict"

module.exports = async (event, context) => {
    const orgNum = event.body.toString();
    
    if (orgNum.length !== 9) {
        return context
            .status(400)
            .succeed({ valid: false, error: "Must be 9 digits" });
    }

    const weights = [3, 2, 7, 6, 5, 4, 3, 2];
    let sum = 0;

    for (let i = 0; i < 8; i++) {
        sum += parseInt(orgNum[i]) * weights[i];
    }

    const remainder = sum % 11;
    const checkDigit = 11 - remainder;
    const lastDigit = parseInt(orgNum[8]);

    if (checkDigit === lastDigit || (remainder === 0 && lastDigit === 0)) {
         return context.status(200).succeed({ valid: true });
    }

    return context.status(200).succeed({ valid: false });
}
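Before wiring this into the gateway, it is worth sanity-checking the Modulo 11 logic with plain node. The sketch below extracts the core check into a standalone helper (validateOrgNumber is our own name, not part of any OpenFaaS template); 923609016 is a real, publicly registered organization number that passes the check:

```javascript
"use strict";

// Standalone Modulo 11 check, mirroring the handler's core logic.
function validateOrgNumber(orgNum) {
    if (!/^\d{9}$/.test(orgNum)) return false;

    const weights = [3, 2, 7, 6, 5, 4, 3, 2];
    let sum = 0;
    for (let i = 0; i < 8; i++) {
        sum += parseInt(orgNum[i], 10) * weights[i];
    }

    const remainder = sum % 11;
    if (remainder === 1) return false; // no valid check digit exists
    const checkDigit = remainder === 0 ? 0 : 11 - remainder;
    return checkDigit === parseInt(orgNum[8], 10);
}

console.log(validateOrgNumber("923609016")); // valid registered number
console.log(validateOrgNumber("123456789")); // fails the checksum
```

Once deployed, the same check is one curl -d "923609016" against the function's gateway endpoint away.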

Deploying this to your CoolVDS instance:

faas-cli up -f norway-validator.yml

Performance: Cloud vs. CoolVDS

Here is where the architecture decision pays off. We ran a benchmark (Apache Bench, 10,000 requests, concurrency 50) comparing a standard AWS Lambda function (eu-central-1) against this OpenFaaS setup on a CoolVDS Performance VPS (Oslo).

Metric                         Public Cloud FaaS (Frankfurt)    CoolVDS + OpenFaaS (Oslo)
Cold Start                     ~350ms - 1200ms                  ~150ms (NVMe advantage)
Network Latency (from Oslo)    35ms - 45ms                      < 2ms
Cost per 1M Reqs               Variable ($$$)                   Fixed (flat VPS rate)
Data Sovereignty               Grey area (US CLOUD Act)         Strictly Norway

The Storage Bottleneck

The single biggest failure point in self-hosted Serverless is disk I/O. When OpenFaaS scales from 1 replica to 50 replicas to handle a traffic spike, 50 Docker containers attempt to read binaries from the disk simultaneously.

On spinning rust (HDD) or shared SATA SSDs, the iowait metric will spike, and your API Gateway will timeout. This is why we insist on NVMe technology for the underlying infrastructure. It provides the IOPS necessary to feed the Docker daemon without queuing.

Securing the Payload

Since we are operating in 2019, GDPR is the reality we live in. You cannot send personal data over plain HTTP. Terminating TLS inside each container is inefficient, so use Nginx on the host as a reverse proxy.

Nginx Configuration Snippet (Strict Security):

server {
    listen 443 ssl http2;
    server_name faas.yourdomain.no;

    ssl_certificate /etc/letsencrypt/live/faas.yourdomain.no/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/faas.yourdomain.no/privkey.pem;
    
    # Modern SSL settings (2019 Best Practice)
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384';

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        
        # Buffer tuning for JSON payloads
        proxy_buffers 8 16k;
        proxy_buffer_size 32k;
    }
}

Conclusion: Take Back Control

Serverless is a powerful architectural pattern, but coupling it with a specific vendor's billing model is a strategic error. By deploying OpenFaaS on CoolVDS, you gain the agility of event-driven code with the raw power and predictability of dedicated hardware.

Your data stays in Norway. Your latency stays low. And when your CFO asks about the cloud bill, you won't have to explain why a background worker cost $500 last month.

Ready to build? Don't let slow I/O kill your function performance. Deploy a high-performance NVMe instance on CoolVDS today and get your FaaS cluster running in minutes.