Escaping the Lambda Trap: High-Performance Serverless Patterns on Self-Hosted K8s

There is a massive misconception in our industry that "Serverless" equals "AWS Lambda" or "Azure Functions." It doesn't. Serverless is an operational model, not a vendor product. And for those of us operating critical infrastructure in Norway, the public cloud implementation of serverless is often a trap disguised as convenience.

I recently audited a fintech setup in Oslo where the development team had gone all-in on public cloud functions. They were hitting cold starts of 800ms. Add the round-trip latency to Frankfurt or Dublin, and their "instant" payment verification was taking nearly two seconds. That is unacceptable. Worse, they were paying for execution time on functions that spent most of it idle, waiting on downstream responses.

The pragmatic solution isn't to abandon the event-driven architecture. It's to own the underlying metal. By deploying a lightweight Kubernetes distribution (like K3s) on high-performance NVMe VPS instances, we can build a serverless platform that is faster, cheaper, and strictly compliant with Norwegian data sovereignty requirements.

The Architecture: FaaS on K8s

The pattern we are deploying is Function-as-a-Service (FaaS) on Kubernetes. We replace the opaque cloud provider with a transparent stack: Linux > K3s > OpenFaaS. This gives you the developer experience of "git push to deploy" without the vendor lock-in or the noisy neighbor performance penalties.

Pro Tip: When running container orchestration on a VPS, the bottleneck is almost always Disk I/O, not CPU. The constant pulling of images and overlay filesystem operations will crush a standard HDD or SATA SSD. This is why we standardize on CoolVDS instances: their direct-attached NVMe storage keeps etcd latency under 2ms, which is critical for cluster stability.
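
Before trusting any node with etcd, measure it. The etcd maintainers recommend a 99th-percentile fdatasync latency below 10ms; here is a quick sketch with fio (assuming it is installed; the directory and sizes are illustrative, taken from the standard etcd disk benchmark pattern):

# Benchmark fdatasync latency the way etcd writes its WAL
# (small sequential writes, one fdatasync per write)
mkdir -p /var/lib/etcd-disk-test
fio --rw=write --ioengine=sync --fdatasync=1 \
    --directory=/var/lib/etcd-disk-test \
    --size=22m --bs=2300 --name=etcd-io-check
# Check the fsync/fdatasync percentiles in the output;
# p99 should be well under 10ms on local NVMe.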

Phase 1: Kernel Tuning for High Concurrency

Before installing any orchestration tools, you must prep the OS. A standard Linux distribution is tuned for desktop or generic server usage, not for routing thousands of ephemeral function calls. If you skip this, your gateway will choke on file descriptors.

On your CoolVDS instance (running Ubuntu 24.04 LTS or Debian 12), apply these sysctl optimizations:

# /etc/sysctl.d/99-serverless.conf

# Increase max open files for high concurrency
fs.file-max = 2097152

# Optimize network stack for short-lived connections (common in FaaS)
net.ipv4.tcp_max_syn_backlog = 4096
net.core.somaxconn = 4096
net.ipv4.tcp_tw_reuse = 1

# Increase virtual memory areas for high-density container packing
vm.max_map_count = 262144
vm.swappiness = 1

Load these settings immediately:

sysctl -p /etc/sysctl.d/99-serverless.conf
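
Note that fs.file-max only raises the system-wide ceiling; per-process limits still apply (the K3s systemd unit installed in Phase 2 ships with its own LimitNOFILE setting). A quick sanity check that the kernel accepted the new values:

# Verify the kernel picked up the new settings
sysctl fs.file-max net.core.somaxconn net.ipv4.tcp_tw_reuse

# Per-process file descriptor limit for the current shell
ulimit -n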

Phase 2: The Lightweight Orchestrator

We don't need the bloat of a full upstream Kubernetes deployment bootstrapped with `kubeadm` to get a functional cluster. K3s is a CNCF-certified Kubernetes distribution designed for production workloads with a tiny memory footprint. It strips out legacy, alpha, and in-tree cloud provider components, leaving us with lean muscle.

Deploying K3s on a CoolVDS node takes roughly 30 seconds:

curl -sfL https://get.k3s.io | sh -s - server \
  --disable traefik \
  --write-kubeconfig-mode 644

We disable Traefik because we want precise control over ingress, which we handle later with Nginx or by exposing the OpenFaaS gateway directly.
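
Before moving on, verify the node is Ready and that Traefik really stayed out of kube-system (the path below is the default K3s kubeconfig location):

# K3s writes its kubeconfig here; mode 644 lets non-root users read it
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

kubectl get nodes -o wide          # node should report STATUS "Ready"
kubectl get pods -n kube-system    # no traefik pods should appear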

Phase 3: Deploying OpenFaaS

OpenFaaS is the engine. It sits on top of K3s and manages the lifecycle of your functions. We will use `arkade`, a package manager for Kubernetes apps, to install it cleanly.

# Install arkade
curl -sLS https://get.arkade.dev | sudo sh

# Install OpenFaaS
arkade install openfaas \
  --load-balancer \
  --set gateway.directFunctions=true \
  --set faasIdler.dryRun=false

The gateway.directFunctions=true setting is crucial here. It lets the gateway invoke function pods directly instead of routing through the provider middleware, shaving milliseconds off each call, which adds up when you chain functions.
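
To see the pipeline end to end, here is a minimal smoke test, assuming faas-cli is installed (arkade get faas-cli) and the gateway is reachable on port 8080, either via the load balancer or a kubectl port-forward. The figlet function comes from the public OpenFaaS store:

# Retrieve the generated admin password and log in
PASSWORD=$(kubectl get secret -n openfaas basic-auth \
  -o jsonpath="{.data.basic-auth-password}" | base64 --decode)
echo -n "$PASSWORD" | faas-cli login --username admin --password-stdin \
  --gateway http://127.0.0.1:8080

# Deploy a sample function from the store and invoke it
faas-cli store deploy figlet --gateway http://127.0.0.1:8080
echo "CoolVDS" | faas-cli invoke figlet --gateway http://127.0.0.1:8080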

The Data Sovereignty Advantage (Schrems II & GDPR)

Technological superiority isn't the only reason to host this in Norway. Legal compliance is a minefield. Under Schrems II rulings and subsequent GDPR interpretations, relying on US-owned cloud providers involves complex Transfer Impact Assessments (TIAs).

By hosting your serverless architecture on a Norwegian VPS provider like CoolVDS, you drastically simplify your compliance posture. The data resides on physical disks in Oslo or nearby regions, governed by Norwegian law. You aren't just selling speed; you are selling Datatilsynet-friendly architecture.

Comparing the Cost & Performance

Let's look at the numbers. A typical high-traffic event processing workload (e.g., webhook handling for an e-commerce platform) might generate 20 million invocations per month.

Metric          | Public Cloud FaaS          | Self-Hosted (CoolVDS)
----------------|----------------------------|----------------------
Cold Start      | 200ms - 1000ms             | < 50ms (hot pods)
Cost            | Linear increase ($$$)      | Flat rate ($)
Execution Limit | 15 minutes (typically)     | Unlimited
Disk I/O        | Throttled network storage  | Local NVMe

Securing the Gateway

Never expose the OpenFaaS gateway directly to the internet without a reverse proxy. We use Nginx to handle SSL termination and rate limiting, and we configure it on the assumption that everyone is trying to DDoS us.

# /etc/nginx/conf.d/openfaas.conf

# The limit_req directive below references zone "one"; it must be
# declared at http{} scope. Files in conf.d are included there,
# so defining it at the top of this file is valid.
limit_req_zone $binary_remote_addr zone=one:10m rate=10r/s;

upstream openfaas {
    server 127.0.0.1:8080;
}

server {
    listen 80;
    listen [::]:80;
    server_name functions.your-domain.no;

    # Redirect all HTTP to HTTPS
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name functions.your-domain.no;

    ssl_certificate /etc/letsencrypt/live/your-domain.no/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/your-domain.no/privkey.pem;

    # Aggressive Rate Limiting
    limit_req zone=one burst=20 nodelay;

    location / {
        proxy_pass http://openfaas;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        
        # Buffer settings for larger payloads
        client_max_body_size 50M;
        proxy_buffers 8 16k;
        proxy_buffer_size 32k;
    }
}

Notice the proxy_buffers directives. With default settings, Nginx spills responses to a temporary file on disk once they exceed its in-memory buffers, which kills performance. On a CoolVDS NVMe instance disk buffering is fast, but RAM is always faster, so we size the buffers to keep typical payloads in memory.
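
With the proxy in place, clients invoke functions through the public hostname instead of the raw gateway port. Assuming the figlet function from the earlier smoke test is still deployed, a quick external check:

# Invoke a function through the TLS-terminating proxy
# (OpenFaaS routes invocations under /function/<name>)
curl -s https://functions.your-domain.no/function/figlet -d "CoolVDS"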

Conclusion

Serverless architecture is brilliant. The billing model attached to it by hyperscalers is not. For teams in the Nordics, building your own event-driven platform on high-performance infrastructure offers the best of both worlds: the developer velocity of FaaS and the raw performance of hardware you control.

Don't let latency and legal gray areas dictate your architecture. Spin up a CoolVDS instance today, install K3s, and take back control of your stack.