Serverless Without the Cloud Tax: Self-Hosted FaaS Patterns for Nordic Ops

Let’s clear the air: Serverless is a billing model, not just an architecture.

In the last two years, I’ve watched too many CTOs in Oslo migrate perfectly good monoliths to AWS Lambda or Azure Functions, only to realize that their "pay-per-use" model actually means "pay-double-for-compute." While the Function-as-a-Service (FaaS) paradigm—event-driven, ephemeral containers—is brilliant for developer velocity, the public cloud implementation is often a trap. You trade infrastructure management for vendor lock-in, cold starts, and data sovereignty headaches under the US CLOUD Act.

There is a better way. By 2020 standards, the container ecosystem has matured enough that we can build Private FaaS platforms. We get the "git push -> deploy" workflow developers love, but we run it on cost-efficient, high-performance Virtual Dedicated Servers (VDS) right here in Norway. No latency trips to Frankfurt. No unpredictable billing spikes.

The Latency Lie: Why "Region: EU-North" Isn't Enough

If your users are in Norway, routing traffic through a hyperscaler’s data center in Stockholm or Ireland introduces unnecessary network hops. But the real killer is the Cold Start. When a Lambda function hasn't run in a few minutes, the provider must spin up a micro-container. In our benchmarks involving a standard Node.js 12 API, this can add 200ms to 600ms of latency.

For a background worker processing images, that’s fine. For a synchronous e-commerce checkout API? It’s a conversion killer.

By running a persistent FaaS layer on a CoolVDS instance, you control the idle timeouts. You can keep your containers "warm" indefinitely because you aren't paying per 100ms of execution—you're paying a flat rate for the dedicated NVMe resources.
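A minimal sketch of what "keeping it warm" looks like in practice, using OpenFaaS's standard scaling labels in `stack.yml`. The function name and registry are illustrative; check your OpenFaaS version's docs for which scaling labels it supports:

```yaml
functions:
  checkout-api:                        # hypothetical function name
    lang: node12
    handler: ./checkout-api
    image: registry.example.com/checkout-api:latest
    labels:
      com.openfaas.scale.min: "2"      # always keep two warm replicas
      com.openfaas.scale.zero: "false" # never idle down to zero -> no cold starts
```

On a flat-rate VDS, those two idle replicas cost you nothing extra; on Lambda, the equivalent "provisioned concurrency" is a separate line item on your bill.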

Architecture Pattern: The "Iron Functions" Stack

We are going to deploy a self-hosted serverless stack using OpenFaaS on top of K3s (a lightweight Kubernetes distribution). This setup fits comfortably on a CoolVDS instance with 4GB RAM and 2 vCPUs, though for production I recommend 8GB to absorb traffic bursts.

Step 1: The Foundation

First, we need a clean Linux environment. We prefer Ubuntu 18.04 LTS for its kernel stability with container overlays. On your CoolVDS instance, ensure you have disabled swap to keep Kubernetes happy, although K3s is forgiving.

```bash
# Purely optional: check your I/O speed before we start.
# CoolVDS NVMe drives usually hit 1.2GB/s+ read speeds here.
dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync

# Install K3s (Lightweight Kubernetes)
curl -sfL https://get.k3s.io | sh -

# Check the node status
sudo k3s kubectl get node
```

Step 2: Deploying the Serverless Framework

We use arkade (which grew out of the k3sup app installer) to install OpenFaaS quickly. The tool abstracts away the complexity of the underlying Helm charts.

```bash
# Install arkade
curl -SLfs https://dl.get-arkade.dev | sudo sh

# Deploy OpenFaaS to the k3s cluster
arkade install openfaas

# Retrieve your admin password
kubectl get secret -n openfaas basic-auth -o jsonpath="{.data.basic-auth-password}" | base64 --decode; echo
```

At this stage, you have a fully functional FaaS platform. The Gateway listens on port 8080. Unlike AWS, you own the timeout settings: you can configure a function to run for 60 minutes if you want, where Lambda caps you at a hard 15 minutes.
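As a sketch of where those knobs live: the OpenFaaS watchdog reads its timeouts from environment variables in `stack.yml`. The function name below is hypothetical, and note that the gateway's own `read_timeout`/`upstream_timeout` (and any reverse proxy in front of it) must be raised to match, or the chain breaks at its shortest link:

```yaml
functions:
  report-builder:               # hypothetical long-running function
    lang: node12
    handler: ./report-builder
    image: registry.example.com/report-builder:latest
    environment:
      read_timeout: "3600s"     # watchdog: max time to read the request
      write_timeout: "3600s"    # watchdog: max time to write the response
      exec_timeout: "3600s"     # watchdog: hard cap on function execution
```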

Configuring for Production: The Nginx Gateway

Exposing the OpenFaaS gateway directly is reckless. We need Nginx to handle SSL termination and rate limiting. This is where the "Managed Hosting" mindset meets DevOps.

Edit your /etc/nginx/sites-available/faas:

```nginx
server {
    listen 80;
    server_name faas.your-domain.no;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;

        # Performance tuning for long-running functions
        proxy_read_timeout 300s;
        proxy_connect_timeout 300s;

        # Buffer settings for heavy JSON payloads
        proxy_buffers 16 16k;
        proxy_buffer_size 32k;
    }
}
```
Pro Tip: If you are processing heavy payloads (like image resizing), the default `proxy_buffers` are insufficient. Bumping them to `16 16k` prevents Nginx from spilling temporary files to disk, keeping everything in the RAM of your CoolVDS instance for maximum throughput.
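The rate limiting mentioned above is only a few lines more, using Nginx's standard `limit_req` module. The zone name, rate, and burst values below are illustrative starting points, not tuned recommendations:

```nginx
# In the http {} block: 10 req/s per client IP, 10MB of shared state
limit_req_zone $binary_remote_addr zone=faas_limit:10m rate=10r/s;

server {
    listen 80;
    server_name faas.your-domain.no;

    location / {
        # Allow short bursts of up to 20 queued requests,
        # then reject the excess with a 503
        limit_req zone=faas_limit burst=20;
        proxy_pass http://127.0.0.1:8080;
    }
}
```

This stops a single misbehaving client from monopolizing your function replicas, which matters more here than on a hyperscaler because you sized the hardware yourself.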

The Developer Workflow: Function Definition

Developers don't need to know about the K3s layer. They just see a CLI. Here is how we define a simple Node.js function in a `stack.yml` file. This feels exactly like `serverless.yml` but without the AWS specific bloat.

provider:
  name: openfaas
  gateway: https://faas.your-domain.no

functions:
  order-processor:
    lang: node12
    handler: ./order-processor
    image: registry.gitlab.com/your-org/order-processor:latest
    environment:
      write_debug: true
      read_timeout: 60s
    # Security context prevents root escalation
    annotations:
      com.openfaas.security.http.enable: "true"
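From there, the developer loop is a single command. A sketch, assuming `faas-cli` is installed and already logged in to the gateway (the guard makes it safe to paste on a box without the CLI):

```bash
# Build, push and deploy every function defined in stack.yml.
# Assumes a prior: faas-cli login --gateway https://faas.your-domain.no
if command -v faas-cli >/dev/null 2>&1; then
  faas-cli up -f stack.yml   # build + push + deploy in one step
else
  echo "faas-cli not found; install it with: arkade get faas-cli"
fi
```

That `faas-cli up` is the moral equivalent of `serverless deploy`, except the target is your own gateway rather than a cloud account.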

Data Sovereignty and The "Datatilsynet" Factor

Operating in Norway requires strict adherence to GDPR. While Privacy Shield is currently in place, the legal ground is shifting. By hosting your FaaS infrastructure on a Norwegian VPS provider like CoolVDS, you ensure that the physical disks processing your customer data are located within Norwegian borders.

Unlike public cloud functions where execution location can sometimes be opaque (or failover to a different region), a VDS gives you certainty. You know exactly which datacenter your code is executing in.

Performance: NVMe vs. The World

The hidden bottleneck in serverless is often I/O. When a function wakes up, it often needs to read config files, load libraries, or process a temp file. Public cloud functions often run on network-attached storage with variable IOPS.

| Metric | Public Cloud FaaS (Standard) | CoolVDS (OpenFaaS) |
| --- | --- | --- |
| Cold start | 250ms - 1000ms | < 50ms (tunable) |
| Disk I/O | Variable (network-attached) | Dedicated NVMe |
| Execution limit | 15 minutes | Unlimited |
| Cost per 1M requests | $0.20 - $0.60 + compute | $0 (included in VDS flat rate) |
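To sanity-check that last row, a back-of-envelope calculation helps. The workload profile below is illustrative, and the unit prices are the commonly published Lambda-style rates at time of writing (an assumption; check your provider's current pricing):

```bash
# Back-of-envelope: per-invocation billing vs. a flat-rate VDS.
# Illustrative numbers only -- plug in your own workload profile.
REQS_M=50                 # millions of requests per month
AVG_SEC=0.2               # average execution time per request (seconds)
MEM_GB=0.125              # memory allocation (128 MB)
REQ_PRICE=0.20            # $ per million requests
GBSEC_PRICE=0.0000166667  # $ per GB-second of compute

awk -v r="$REQS_M" -v s="$AVG_SEC" -v m="$MEM_GB" \
    -v rp="$REQ_PRICE" -v gp="$GBSEC_PRICE" 'BEGIN {
  req_cost  = r * rp                  # request charges
  gbsec     = r * 1000000 * s * m    # total GB-seconds consumed
  comp_cost = gbsec * gp             # compute charges
  printf "requests: $%.2f  compute: $%.2f  total: $%.2f/month\n",
         req_cost, comp_cost, req_cost + comp_cost
}'
# -> requests: $10.00  compute: $20.83  total: $30.83/month
```

Roughly $31/month in metered charges for a modest 50M-request workload, before data transfer, and with the bill growing linearly with traffic. A flat-rate VDS at a comparable price serves the same load with headroom, and the number on the invoice never surprises you.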

Conclusion

Serverless architecture is here to stay, but you don't need to rent it at premium rates. By leveraging modern tools like K3s and OpenFaaS on robust, low-latency infrastructure, you can build a system that is faster, cheaper, and legally safer.

If you are ready to build a FaaS platform that you actually own, you need a foundation that won't buckle under the load. Deploy a CoolVDS high-frequency instance today and see what dedicated NVMe performance does for your function execution times.