Serverless Without the Lock-in: High-Performance FaaS Patterns on Bare-Metal VPS

Let’s get one thing straight immediately: Serverless is a lie. There are always servers. The only variable is whether you control them, or if you're renting time on a black box that goes to sleep right when your users need it most.

It is late 2019. The hype cycle for "Functions as a Service" (FaaS) has peaked. We've all seen the flashy demos where a credit card transaction triggers a cascade of lambdas. But I have also seen the other side: API gateways timing out because a function took 3 seconds to "warm up," and monthly bills that skyrocket because of inefficient memory allocation on public clouds.

If you are a serious engineer operating in the Nordic market, you cannot afford 500ms of latency just because your cloud provider's nearest region is in Frankfurt or Dublin. You need your logic running close to the metal, and close to your users in Oslo. Here is how we build a robust, self-hosted serverless architecture using OpenFaaS on high-performance KVM instances.

The Latency Killer: Why Public Cloud FaaS Fails Under High Load

Public cloud FaaS introduces the "cold start" problem. When your code hasn't run for a few minutes, the provider spins down the container. The next request waits for the container to provision, the runtime to boot, and the code to load. For a Node.js microservice this can take hundreds of milliseconds; for Java, it can take seconds.

Furthermore, you are dealing with the noisy neighbor effect. In a massive public cloud, you don't know who is stealing CPU cycles on the same hypervisor. This makes performance unpredictable.

The Solution: Run your own FaaS platform on dedicated KVM slices. By using CoolVDS instances with NVMe storage, we eliminate the I/O bottleneck that plagues container startup times. We keep our containers warm, and we control the resource limits.

Architecture Pattern: The Async Event Mesh

The most resilient serverless pattern isn't just "HTTP Request -> Function." It's event-driven. We use NATS (a lightweight messaging system) to decouple the ingestion from the processing.
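You can see the difference at the gateway level. OpenFaaS exposes every function on both a synchronous /function/ route and an asynchronous /async-function/ route; the async route enqueues the request on NATS and returns HTTP 202 immediately, and the queue-worker invokes the function later. A quick sketch (the function name "image-resize" and the gateway address are placeholders for your own deployment):

```shell
# Build a JSON payload for the function (field names are illustrative)
printf '%s' '{"imageUrl": "https://example.com/photo.jpg", "width": 800, "height": 600}' > payload.json

# Synchronous call: the caller blocks until the function returns
# curl -s -d @payload.json http://127.0.0.1:8080/function/image-resize

# Asynchronous call: the gateway enqueues the request on NATS and
# responds 202 Accepted at once; a queue-worker processes it later
# curl -s -d @payload.json http://127.0.0.1:8080/async-function/image-resize
```

The async route is what turns a burst of 10,000 uploads into a steady queue your workers drain at their own pace, instead of 10,000 simultaneous container invocations.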

The Stack

  • Infrastructure: CoolVDS KVM VPS (Ubuntu 18.04 LTS)
  • Orchestrator: Kubernetes (k3s for smaller clusters or standard K8s for scale)
  • FaaS Framework: OpenFaaS
  • Message Bus: NATS Streaming

Step 1: Preparing the Metal

Before installing Kubernetes, we need to tune the Linux kernel. Stock distribution defaults are conservative and not tuned for high-throughput packet switching. On your CoolVDS instance, edit /etc/sysctl.conf. We need to increase the backlog and open file limits to handle the bursty nature of serverless traffic.

# /etc/sysctl.conf

# Increase system file descriptor limit
fs.file-max = 2097152

# Increase the read/write buffer sizes for network connections
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216

# Increase the maximum amount of option memory buffers
net.core.optmem_max = 25165824

# Increase number of incoming connections
net.core.somaxconn = 65535

# Increase number of incoming packets
net.core.netdev_max_backlog = 50000

Apply these changes with sysctl -p. If you skip this, your NATS queue will choke under load, regardless of how fast your CPU is.
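It is worth confirming the values actually took effect; every sysctl is also exposed as a file under /proc/sys, so you can read them back directly:

```shell
# Read the live kernel values back from /proc to confirm sysctl -p worked
cat /proc/sys/net/core/somaxconn
cat /proc/sys/fs/file-max
cat /proc/sys/net/core/netdev_max_backlog
```

If a value did not change, check for a conflicting drop-in file under /etc/sysctl.d/ that is applied after your edit.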

Step 2: Deploying OpenFaaS on Kubernetes

Assuming you have a Kubernetes cluster running on your VPS (we recommend using kubeadm on CoolVDS for full control), deploying OpenFaaS is straightforward using Helm. This gives us the gateway, queue-worker, and Prometheus for auto-scaling.

# Create namespaces
kubectl apply -f https://raw.githubusercontent.com/openfaas/faas-netes/master/namespaces.yml

# Add the OpenFaaS helm repo
helm repo add openfaas https://openfaas.github.io/faas-netes/

# Update the repo
helm repo update

# Generate a random password
export PASSWORD=$(head -c 12 /dev/urandom | shasum | cut -d ' ' -f 1)

# Install OpenFaaS
helm upgrade openfaas --install openfaas/openfaas \
    --namespace openfaas  \
    --set basic_auth=true \
    --set functionNamespace=openfaas-fn \
    --set basic_auth_password=$PASSWORD

Pro Tip: On CoolVDS, we utilize local NVMe storage classes for Prometheus. This ensures that your metrics ingestion doesn't lag, allowing the Horizontal Pod Autoscaler (HPA) to react instantly to traffic spikes. Slow storage means slow auto-scaling.
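As a sketch, a StorageClass for locally attached NVMe looks like the following. The class name "local-nvme" is our choice, and this assumes you pre-provision local PersistentVolumes on each node (the no-provisioner provisioner does not create volumes for you):

```shell
# Write a StorageClass manifest for locally attached NVMe volumes.
# WaitForFirstConsumer delays volume binding until a pod is scheduled,
# so the PersistentVolume is picked on the node the pod actually runs on.
cat > local-nvme-storageclass.yaml <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-nvme
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
EOF

# Apply it to the cluster:
# kubectl apply -f local-nvme-storageclass.yaml
```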

Step 3: The Code (Node.js 12 Example)

Let's write a function that processes image resizing requests. This is CPU intensive and I/O heavy—a perfect test for our infrastructure.

First, pull the template:

faas-cli template store pull node12

Now, create the handler in handler.js:

"use strict";

const sharp = require('sharp');
const axios = require('axios');

module.exports = async (event, context) => {
    // The node12 template parses JSON bodies for us when the request has a
    // JSON content type; fall back to JSON.parse for raw string bodies.
    const body = typeof event.body === 'string' ? JSON.parse(event.body) : event.body;
    const { imageUrl } = body;
    const width = parseInt(body.width, 10);
    const height = parseInt(body.height, 10);

    try {
        // Fetch the source image as a binary buffer
        const input = (await axios({ url: imageUrl, responseType: "arraybuffer" })).data;

        // Resize, forcing PNG output so it matches the Content-Type below
        const buffer = await sharp(input)
            .resize(width, height)
            .png()
            .toBuffer();

        return context
            .status(200)
            .headers({ "Content-Type": "image/png" })
            .succeed(buffer.toString('base64'));

    } catch (e) {
        // Surface the failure to the gateway as a 500
        return context.status(500).fail(e.message);
    }
};

Deploying this on a standard VPS with spinning rust (HDD) is a nightmare. The npm install process during the build phase takes forever, and the random read/write of processing images slows down the entire OS. With CoolVDS NVMe storage, the I/O wait time is practically zero.
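To get the function onto the cluster, faas-cli drives the whole build, push, and deploy cycle from a single stack file. A minimal sketch (the function name, handler path, image tag, and gateway address are all placeholders for your own):

```shell
# Minimal OpenFaaS stack file for the resize function.
# Names, registry, and gateway address below are placeholders.
cat > stack.yml <<'EOF'
version: 1.0
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080
functions:
  image-resize:
    lang: node12
    handler: ./image-resize
    image: registry.example.com/image-resize:latest
EOF

# Build the container image, push it, and deploy it in one step:
# faas-cli up -f stack.yml
```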

Comparison: Public Cloud vs. Self-Hosted on CoolVDS

Why go through the trouble of managing Kubernetes? It comes down to control and cost predictability.

Feature                Public Cloud FaaS                OpenFaaS on CoolVDS
Cold Start Latency     200ms - 2000ms                   < 10ms (keep-warm capable)
Execution Time Limit   Usually 5-15 mins                Unlimited
Data Location          Regional (e.g., Frankfurt)       Local (Oslo, Norway)
Cost Structure         Per invocation (unpredictable)   Flat monthly rate
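The "keep-warm" figure comes from OpenFaaS's scaling labels: setting a minimum replica count keeps containers running at all times, so no request ever pays the cold-start tax. A sketch of a function entry with those labels (written to a scratch file here for illustration; names and image tags are placeholders):

```shell
# A function entry carrying OpenFaaS scaling labels: at least two warm
# replicas at all times, scaling up to twenty under load.
cat > keep-warm-example.yml <<'EOF'
functions:
  image-resize:
    lang: node12
    handler: ./image-resize
    image: registry.example.com/image-resize:latest
    labels:
      com.openfaas.scale.min: "2"
      com.openfaas.scale.max: "20"
EOF
```

On a flat-rate VPS, those two always-on replicas cost you nothing extra, which is exactly the trade public cloud per-invocation billing cannot offer.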

The Norwegian Context: Latency and Legality

If your users are in Norway, routing traffic through Sweden or Germany adds unnecessary milliseconds. By hosting on CoolVDS in our Norwegian datacenters, you are peering directly at NIX (Norwegian Internet Exchange). We are talking about single-digit millisecond latency to any ISP in the country.

Furthermore, with GDPR in full effect since last year, data residency is no longer optional for many sectors. Using US-owned public clouds can complicate compliance. Storing your data on Norwegian soil, on servers you administer, simplifies your relationship with Datatilsynet requirements.

Conclusion

Serverless is a powerful paradigm, but it shouldn't cost you your performance budget or your data sovereignty. By layering OpenFaaS on top of robust KVM virtualization, you get the developer velocity of FaaS with the raw power of bare metal.

Don't let shared infrastructure steal your CPU cycles. Deploy your cluster on a platform built for engineers who know the difference between iowait and idle.

Ready to build? Spin up a High-Performance NVMe Instance on CoolVDS today and deploy your first function in under 60 seconds.