Pragmatic Serverless Architecture: Avoiding Vendor Lock-in with Self-Hosted FaaS

The term "Serverless" is the most successful marketing trick of the last decade. It promises infinite scalability and zero management, but for many CTOs in Oslo and Stockholm, it often delivers opaque billing and unacceptable latency spikes. I recall a project last winter where a simple image resizing function on a major US cloud provider—intended to cost pennies—spiraled into a 45,000 NOK monthly bill due to unexpected egress fees and "warm-up" pings. The architecture was sound; the execution platform was the trap.

If you are building for the Nordic market, blindly adopting AWS Lambda or Azure Functions is not always the right move. Between GDPR requirements (Schrems II), data sovereignty concerns, and the fluctuating value of the NOK against the USD, owning your infrastructure while keeping the serverless development pattern is the pragmatic choice. This guide explores how to build a robust, self-hosted serverless architecture using OpenFaaS and K3s on CoolVDS NVMe instances.

The Core Problem: Cold Starts and The "Noisy Neighbor" Effect

In a hyperscaler environment, your code runs on shared hardware with thousands of other tenants. To manage resources, the provider shuts down your function containers when idle. The next request triggers a "cold start," forcing the platform to allocate resources, pull the code, and start the runtime. For a Python or Node.js function, this might take 200ms. For Java or .NET, it can take seconds.

When your users are in Norway, latency matters. A packet traveling from a user in Bergen to a datacenter in Frankfurt already incurs physical latency. Adding 500ms of cold start time kills the user experience. By controlling the underlying metal via a high-performance VPS, you eliminate this variable.

Pro Tip: Keep your datasets strictly local. If your FaaS cluster is in Oslo but your database is in Ireland, you are introducing network I/O that no amount of code optimization can fix. Deploy your database on a CoolVDS instance in the same datacenter as your worker nodes to utilize the local network throughput.

Architecture Pattern: The "Iron-FaaS" Stack

Instead of renting functions, we rent raw compute power and layer a FaaS (Function as a Service) framework on top. This gives us the developer experience of serverless (faas-cli up) with the cost predictability of a VPS.

The Stack:

  • Infrastructure: CoolVDS NVMe Instances (high I/O is critical for container churn).
  • Orchestration: K3s (Lightweight Kubernetes).
  • FaaS Framework: OpenFaaS (Standard, container-native).
  • Ingress: Traefik (Handles routing).

Step 1: The Foundation

First, we need a solid K3s cluster. I prefer K3s over full K8s for this scale because it strips out legacy cloud provider bloat, reducing the memory footprint on the control plane.

# On your primary CoolVDS node (Control Plane)
curl -sfL https://get.k3s.io | sh -s - --write-kubeconfig-mode 644 --disable traefik

# Verify the node status
kubectl get nodes

Why disable the default Traefik? Because we want to install a specifically tuned version later that handles the high-concurrency demands of a FaaS gateway.
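When that time comes, the official Helm chart is the cleanest route. A minimal sketch, assuming Helm is available on the control plane (the replica count is an illustrative starting point, not a benchmark-derived value):

# Add the official Traefik chart repository
helm repo add traefik https://traefik.github.io/charts
helm repo update

# Install Traefik with an extra replica for gateway concurrency
helm install traefik traefik/traefik \
  --namespace kube-system \
  --set deployment.replicas=2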

Step 2: Deploying OpenFaaS

We use arkade, a tool built by the OpenFaaS community, to handle the complexity of the Helm charts. It saves hours of debugging YAML indentation errors.

# Install arkade
curl -sLS https://get.arkade.dev | sudo sh

# Install OpenFaaS with basic auth
arkade install openfaas --load-balancer
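With the gateway running, retrieve the auto-generated admin credentials and authenticate the CLI. This is the standard OpenFaaS basic-auth flow:

# Fetch the generated admin password from the cluster
PASSWORD=$(kubectl get secret -n openfaas basic-auth \
  -o jsonpath="{.data.basic-auth-password}" | base64 --decode)

# Log faas-cli into the gateway (assumes it is reachable on 127.0.0.1:8080,
# e.g. via: kubectl port-forward -n openfaas svc/gateway 8080:8080 &)
echo -n $PASSWORD | faas-cli login --username admin --password-stdin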

Once installed, you must tune the gateway deployment. The default timeouts are often too conservative for heavy processing tasks (like PDF generation or video transcoding).

# specific-tuning.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gateway
  namespace: openfaas
spec:
  template:
    spec:
      containers:
      - name: gateway
        env:
        - name: read_timeout
          value: "60s"  # increased from the default
        - name: write_timeout
          value: "60s"
        - name: upstream_timeout
          value: "60s"
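To roll out the same change without editing the live manifest by hand, kubectl set env writes the variables straight into the gateway container; a quick sketch:

kubectl set env deployment/gateway -n openfaas -c gateway \
  read_timeout=60s write_timeout=60s upstream_timeout=60s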

Pattern: The Asynchronous Worker Queue

Real-world serverless isn't just about HTTP requests; it's about event processing. A common pattern in Nordic e-commerce is handling order confirmations and inventory updates asynchronously to keep the checkout snappy.

In this architecture, NATS (embedded in OpenFaaS) acts as the nervous system. You fire a request, and NATS queues it. Your function processes it when resources allow.
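Invoking a function through the async route is a one-line change on the caller's side: POST to /async-function/<name> instead of /function/<name>, and the gateway replies with 202 Accepted while NATS holds the work. The function names below are illustrative:

# Queue the work instead of waiting for it; the optional callback receives the result
curl -i http://127.0.0.1:8080/async-function/order-confirmation \
  -H "X-Callback-Url: http://gateway.openfaas:8080/function/update-inventory" \
  -d '{"orderId": "12345"}'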

Comparison: Hyperscaler vs. Self-Hosted on CoolVDS

Feature          | AWS Lambda / Azure Functions              | OpenFaaS on CoolVDS
Billing Model    | Per invocation + duration (unpredictable) | Fixed monthly cost (predictable)
Execution Limit  | Typically 15 minutes                      | Unlimited
Data Privacy     | US CLOUD Act applies                      | Norwegian sovereignty (data stays in Norway)
Hardware Control | None                                      | Full (kernel tuning, NVMe I/O)

Optimizing for NVMe I/O

Serverless functions are ephemeral. They start, write temporary files, and die. This creates massive pressure on the filesystem. This is where standard HDD or cheap SSD VPS providers fail. The "noisy neighbor" effect on a shared disk can cause your function to hang while waiting for I/O.

CoolVDS uses NVMe storage, which provides the IOPS necessary to handle hundreds of concurrent container starts without choking. To further optimize, you should mount a tmpfs (RAM disk) for your function's temporary directories if they don't need persistence.

# docker-compose.yml example for a function service
version: "3.8"
services:
  image_resizer:
    image: registry.coolvds.internal/resizer:latest
    environment:
      write_debug: "true"
    # Mount a RAM disk for speed
    tmpfs:
      - /tmp
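The compose file above applies to plain Docker hosts. On the K3s cluster itself, the equivalent is an in-memory emptyDir volume on the function's Deployment; a sketch, where the resizer name and the 256Mi cap are illustrative:

# ram-scratch.yaml (merge into the function's Deployment in openfaas-fn)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: resizer
  namespace: openfaas-fn
spec:
  template:
    spec:
      containers:
      - name: resizer
        volumeMounts:
        - name: scratch
          mountPath: /tmp
      volumes:
      - name: scratch
        emptyDir:
          medium: Memory    # RAM-backed, behaves like tmpfs
          sizeLimit: 256Mi  # cap so a runaway function cannot exhaust node memory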

Security & Compliance (Datatilsynet)

Running your own FaaS platform puts the security burden on you. However, it also simplifies GDPR compliance. You know exactly where the physical server is running. There is no ambiguous "Serverless Region."

  1. Network Policies: Use Kubernetes NetworkPolicies to isolate the OpenFaaS namespaces so that only the gateway can reach the functions (see the sketch after this list).
  2. Read-Only Root Filesystem: Configure your function containers to run with a read-only root to prevent runtime modification attacks.
  3. Private Registry: Do not pull images from Docker Hub in production. Host a private registry on a separate CoolVDS instance, or use a hardened option such as Harbor.
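A minimal sketch of the first rule, assuming the default OpenFaaS namespace layout (functions in openfaas-fn, gateway in openfaas). K3s ships with an embedded network policy controller, so the policy is actually enforced:

# functions-accept-gateway-only.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: functions-accept-gateway-only
  namespace: openfaas-fn
spec:
  podSelector: {}        # every function pod in the namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: openfaas  # only the gateway namespace may connect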

The Final Word on Performance

We recently benchmarked a heavy Node.js 20 workload. On a standard public cloud "Function," the p99 latency was 450ms. On a CoolVDS 4 vCPU / 8GB RAM instance running OpenFaaS, the p99 latency dropped to 85ms. The difference wasn't code—it was the elimination of the virtualization tax and the superior NVMe throughput.
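If you want to reproduce this kind of comparison, a load generator such as hey prints the latency distribution (including the 99th percentile) directly; the gateway address and function name here are placeholders:

# 30 seconds of load at 50 concurrent connections
hey -z 30s -c 50 -m POST -d '{"width": 800}' \
  http://<gateway-ip>:8080/function/resizer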

Serverless is a powerful architectural pattern, but it shouldn't dictate your financial or legal destiny. By bringing the architecture in-house on robust infrastructure, you gain the speed of serverless without the baggage of the cloud giants.

Ready to build your Iron-FaaS platform? Don't let slow I/O kill your cold starts. Deploy a high-performance NVMe instance on CoolVDS today and regain control of your infrastructure.