Serverless Patterns on Iron: Building Event-Driven Systems Without the Public Cloud Hangover

Let's get one thing straight immediately: Serverless is a billing model, not a magic wand.

I have spent the last decade debugging distributed systems, and nothing raises my blood pressure faster than a CTO suggesting we move everything to AWS Lambda to "save money." By the time you account for API Gateway costs, NAT Gateway hourly rates, and the engineering hours lost debugging a timeout that only happens on the third Tuesday of the month, you haven't saved a dime. You've just traded infrastructure management for vendor lock-in.

But the architecture pattern behind serverless—event-driven, ephemeral, decoupled functions—is brilliant. The problem is where you run it. For Norwegian businesses dealing with Datatilsynet and the fallout of Schrems II, piping user data through a US-controlled public cloud function is a compliance minefield.

The solution isn't to abandon the pattern. It's to bring it home. We are going to look at running a high-performance Serverless architecture on your own terms, using lightweight Kubernetes (K3s) and OpenFaaS on raw, high-performance KVM instances. No cold starts. No data export worries. Just raw code execution.

The Latency Lie and the Norwegian Reality

If your users are in Oslo and your "serverless" function is spinning up in a data center in Frankfurt (or worse, Ireland), you are fighting physics. Public cloud cold starts can introduce latencies of 200ms to 2 seconds. In the world of high-frequency e-commerce, that is an eternity.

When we run bare-metal or VDS instances locally, connected directly to NIX (the Norwegian Internet Exchange), we cut that network overhead to single-digit milliseconds. But hardware matters. You cannot run event-driven microservices on a spinning disk; the I/O wait will kill your queue processing speed.

Pro Tip: Always check your disk scheduler. On a CoolVDS NVMe instance the host passes the NVMe controller capabilities through efficiently, but inside your Linux guest you should set the I/O scheduler to none (noop on older kernels) and let the hardware handle ordering. The device name depends on the virtual disk driver, so check what the guest actually sees first:

cat /sys/block/vda/queue/scheduler                    # device may be vda, sda, or nvme0n1
echo none | sudo tee /sys/block/vda/queue/scheduler   # a plain "echo >" won't survive sudo

The Stack: K3s + OpenFaaS on CoolVDS

Why this stack? Because full Kubernetes is too heavy for a lean DevOps team, and raw Docker lacks the orchestration needed for self-healing functions. K3s is a certified Kubernetes distribution designed for IoT and edge, and it is a perfect fit for a single robust VDS or a small cluster.

Step 1: The Infrastructure Layer

We start with a clean CoolVDS instance. For a production-grade FaaS (Function as a Service) node, I recommend at least 4 vCPUs and 8GB RAM, and those cycles need to be guaranteed: functions spike CPU usage instantly, and the overcommitted vCPUs typical of shared hosting turn every spike into latency.

First, secure the node. We aren't playing games with `iptables` manually; we use `ufw`, but we need to ensure the bridge traffic for containers works.

ufw default deny incoming
ufw allow 6443/tcp   # K3s API
ufw allow 80/tcp
ufw allow 443/tcp
ufw enable
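
This covers a single node. If you grow into a small cluster later, K3s's default flannel networking needs two more ports open between the members; a sketch, assuming a hypothetical 10.0.0.0/24 private subnet:

ufw allow from 10.0.0.0/24 to any port 8472 proto udp    # flannel VXLAN traffic between nodes
ufw allow from 10.0.0.0/24 to any port 10250 proto tcp   # kubelet metrics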

Step 2: Lightweight Orchestration

Installing K3s avoids the bloat of `etcd` by using SQLite by default (though you can use external DBs). This installs in seconds.

curl -sfL https://get.k3s.io | sh -

Once installed, verify access. If this command lags, the hypervisor is stealing CPU cycles from your guest (watch the st column in top), a common issue with oversold budget VPS providers but rare on KVM setups that respect resource isolation.

kubectl get nodes
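
If kubectl complains about permissions instead, that's because K3s writes its kubeconfig to the root-owned /etc/rancher/k3s/k3s.yaml. Copy it out for your regular user:

mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $(id -u):$(id -g) ~/.kube/config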

Step 3: Deploying the Serverless Framework

We use OpenFaaS because it is container-native. It doesn't lock you into a proprietary runtime. You can write functions in Go, Python, Node, or even a bash script wrapped in Docker.

We will use `arkade`, the OpenFaaS marketplace installer, to keep things clean.

curl -sLS https://get.arkade.dev | sudo sh
arkade install openfaas
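
arkade prints the follow-up steps when it finishes; the short version is to install faas-cli, grab the generated admin password, port-forward the gateway, and log in:

arkade get faas-cli   # add ~/.arkade/bin to your PATH if prompted
PASSWORD=$(kubectl get secret -n openfaas basic-auth \
  -o jsonpath="{.data.basic-auth-password}" | base64 --decode)
kubectl port-forward -n openfaas svc/gateway 8080:8080 &
echo -n $PASSWORD | faas-cli login --username admin --password-stdin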

The Meat: Handling High-Load Events

Here is where the "pattern" comes in. In a traditional setup, your API accepts a request and processes it synchronously. If the database locks, the user waits. In a Serverless pattern, we accept the request, shove it into a queue (NATS, included with OpenFaaS), and return a 202 Accepted immediately.
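
You can watch the difference from a shell (using the img-processor function we deploy below). The synchronous route blocks until the function finishes; the asynchronous route queues the event through NATS and returns immediately:

# synchronous: the caller waits for the result
curl -i http://127.0.0.1:8080/function/img-processor -d '{"payload": "img-1042.jpg"}'

# asynchronous: queued via NATS, returns 202 Accepted at once
curl -i http://127.0.0.1:8080/async-function/img-processor -d '{"payload": "img-1042.jpg"}'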

The `gateway` service in OpenFaaS handles this. However, the default configuration is often too conservative for high-performance hardware. We need to tune the `queue-worker` to utilize our NVMe throughput effectively.
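
By default the queue-worker drains messages one at a time. Assuming the standard OpenFaaS community queue-worker, which reads its concurrency from the max_inflight environment variable, you can raise the parallelism to match your NVMe throughput:

kubectl set env -n openfaas deploy/queue-worker max_inflight=10 ack_wait=60s
kubectl rollout status -n openfaas deploy/queue-worker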

Here is a deployment configuration for a high-throughput function tailored for image processing (a classic heavy I/O task). Note the resource limits—critical for preventing one function from killing the whole node.

Function Configuration (stack.yml)

version: 1.0
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080
functions:
  img-processor:
    lang: golang-middleware
    handler: ./img-processor
    image: registry.coolvds-client.no/img-processor:latest
    labels:
      com.openfaas.scale.min: 2
      com.openfaas.scale.max: 15
      com.openfaas.scale.factor: 20
    annotations:
      topic: "image-upload"
    limits:
      memory: 256Mi
      cpu: 500m
    requests:
      memory: 64Mi
      cpu: 100m
    environment:
      write_debug: true
      read_timeout: 10s
      write_timeout: 10s

The `com.openfaas.scale.factor` label is key: it sets what percentage of the max replica count the autoscaler adds each time the scaling alert fires. On a CoolVDS instance with fast I/O each pod can absorb more concurrent requests, so you can often run a lower `scale.max` with fewer, busier replicas than you would need on a sluggish public cloud instance.
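
To sanity-check those thresholds, put the deployed function under sustained load (hey is used here purely as a convenient load generator) and watch the replica count move between scale.min and scale.max:

# 30 seconds of load at 50 concurrent connections
hey -z 30s -c 50 -m POST -d '{"payload": "x"}' http://127.0.0.1:8080/function/img-processor

# second terminal: watch the autoscaler react
kubectl get deploy -n openfaas-fn img-processor --watch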

The Code: A Golang Event Handler

Let's look at the actual code. We use Go for its small footprint and goroutines: net/http already serves each request on its own goroutine, so one slow event never blocks the listener. The handler below matches the golang-middleware template referenced in stack.yml and simulates processing a webhook event.

package function

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

type Request struct {
	Payload string `json:"payload"`
}

// Handle a serverless request
func Handle(w http.ResponseWriter, r *http.Request) {
	var req Request
	if r.Body != nil {
		defer r.Body.Close()
		// reject malformed JSON instead of silently processing an empty payload
		if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
			http.Error(w, "invalid JSON payload", http.StatusBadRequest)
			return
		}
	}

	// Simulate processing
	start := time.Now()
	
	// In a real scenario, this would be an image resize or DB transaction
	// The speed of this depends entirely on Single Core performance
	processEvent(req.Payload)

	duration := time.Since(start)
	
	resp := fmt.Sprintf("Processed %s in %d ms on local KVM", req.Payload, duration.Milliseconds())
	w.WriteHeader(http.StatusOK)
	w.Write([]byte(resp))
}

func processEvent(data string) {
	// Dummy CPU load to simulate work
	for i := 0; i < 1000000; i++ {
		_ = i * i
	}
}
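
To ship it, pull the template once, then let faas-cli build, push, and deploy in a single step (the registry path comes from stack.yml above), and fire a test event:

faas-cli template store pull golang-middleware
faas-cli up -f stack.yml
echo '{"payload": "img-1042.jpg"}' | faas-cli invoke img-processor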

Optimizing the Network Layer

Running Kubernetes on a VDS requires network tuning. The default bridge settings in Linux can be conservative. If you are blasting thousands of events per second, the `conntrack` table will fill up, and packets will drop silently.

Add this to your `/etc/sysctl.conf` to handle the bursty nature of serverless workloads:

net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.netfilter.nf_conntrack_max = 131072
net.core.somaxconn = 1024
net.ipv4.tcp_tw_reuse = 1

Apply it with `sysctl -p`. The `tcp_tw_reuse` flag lets the kernel reuse sockets stuck in `TIME_WAIT` for new outbound connections, so rapid scale-up and scale-down cycles don't exhaust your ephemeral ports.
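
One gotcha: the two net.bridge keys only exist once the br_netfilter kernel module is loaded, so `sysctl -p` errors out on a fresh boot without it. K3s normally loads the module itself, but it costs nothing to be explicit:

sudo modprobe br_netfilter
echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf   # persist across reboots
sudo sysctl -p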

Security and GDPR Compliance

This is the deal-breaker for many European companies. If you use AWS Lambda, you are trusting their encryption at rest and their promise that US intelligence agencies aren't peeking. When you host on a Norwegian VPS, you own the disk encryption keys.

We can use LUKS encryption on the underlying partition for data at rest. For data in transit, we terminate SSL at the ingress level using Nginx or Traefik. Here is a robust Nginx configuration block that sits in front of your OpenFaaS gateway, enforcing strict transport security.

server {
    listen 443 ssl http2;
    server_name faas.your-domain.no;

    ssl_certificate /etc/letsencrypt/live/faas/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/faas/privkey.pem;
    
    # Modern SSL configuration for 2023 standards
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # Websocket support for async logs
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        
        # Buffer tuning for large payloads
        client_max_body_size 50M;
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
    }
}
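
For the at-rest side, here is a minimal LUKS sketch for a dedicated data volume. The /dev/vdb device and the mount point are assumptions; adapt them to your layout, and do this before K3s writes any state:

sudo cryptsetup luksFormat /dev/vdb                   # one-time; destroys existing data on the volume
sudo cryptsetup open /dev/vdb faas_data
sudo mkfs.ext4 /dev/mapper/faas_data
sudo mount /dev/mapper/faas_data /var/lib/rancher     # assumed mount point: where K3s keeps its state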

The Cost Equation

A typical public cloud function setup with 1 million requests and 100GB of egress can easily cost $50-$100/month once you add the hidden fees. A robust CoolVDS instance with 4 vCPUs and NVMe storage costs a fraction of that, is flat-rate, and won't throttle you when you need it most.

More importantly, it forces you to understand your architecture. You aren't hiding behind an opaque "Serverless" curtain. You are building resilient systems that you control.

Serverless is a powerful pattern. Don't ruin it with bad infrastructure. If you need low latency to Oslo, strict data sovereignty, and predictable performance, stop renting by the millisecond and start owning your stack.

Ready to build a compliant, high-speed event cluster? Deploy a CoolVDS NVMe instance today and get your K3s cluster running in under 5 minutes.