Serverless Architectures without the US Cloud Tax: Building Private FaaS in Norway

There is a dangerous misconception in our industry that "serverless" means AWS Lambda or Azure Functions, full stop. While those public cloud offerings popularized the event-driven paradigm, they introduce two critical risks for European businesses: unpredictable cost scaling (the "denial of wallet" problem) and, more critically, data sovereignty issues under Schrems II.

As a CTO, I have reviewed too many architecture diagrams where simple cron jobs were transformed into complex Lambda chains, routing sensitive Norwegian user data through US-owned data centers. It works until the bill arrives, or until the Datatilsynet (Norwegian Data Protection Authority) asks where the encryption keys are stored.

The pragmatic alternative in 2023 is the Private FaaS (Function as a Service) pattern. By deploying a lightweight serverless framework on high-performance Virtual Dedicated Servers (VDS) within Norway, we gain the developer velocity of serverless with the cost predictability and compliance of bare-metal control.

The Architecture: Private FaaS on Kubernetes

The most robust implementation of this pattern today involves running OpenFaaS on top of a lightweight Kubernetes distribution like K3s. This stack allows you to define functions in Docker containers that scale to zero when idle and scale up instantly under load, utilizing the raw compute power of the underlying VDS.

Why Infrastructure Matters: The etcd Bottleneck

Before we look at the code, we must address the hardware. Kubernetes is notoriously sensitive to disk latency. Its state store, etcd, requires low fsync latency to maintain cluster quorum. On standard VPS providers offering HDD or shared SATA SSDs, I have seen K3s clusters crash under load because the disk couldn't keep up with state changes.
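You can verify whether a disk is fast enough before committing a cluster to it. The fio job below mirrors the widely used etcd disk benchmark (2300-byte writes, each followed by fdatasync); the target directory and file size here are illustrative:

```shell
# Benchmark fsync latency the way etcd stresses a disk:
# small sequential writes, each followed by an fdatasync.
# Requires the fio package (apt install fio).
fio --name=etcd-fsync-test \
    --directory=/var/lib \
    --rw=write --bs=2300 --size=22m \
    --ioengine=sync --fdatasync=1 \
    --output-format=json
```

As a rule of thumb, etcd wants the 99th percentile of fdatasync latency under roughly 10 ms. NVMe-backed volumes typically land well below that; shared SATA often does not.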

This is where the "CoolVDS Factor" becomes an architectural requirement rather than a sales pitch. We utilize CoolVDS NVMe instances exclusively for these workloads. The NVMe interface provides the high IOPS necessary to keep the Kubernetes control plane stable during bursty serverless scaling events.

Implementation: Deploying the Stack

Let's assume you have provisioned a CoolVDS instance running Ubuntu 22.04 LTS. Our goal is to get a serverless gateway running in under 10 minutes.

1. The Foundation: K3s

First, we install K3s. We disable the default Traefik ingress controller because we want finer control over our gateway traffic later.

curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable traefik" sh -

# Verify the node is ready (usually takes 30 seconds on NVMe storage)
sudo k3s kubectl get node
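If you prefer plain kubectl over the k3s wrapper, copy the generated kubeconfig into place (the source path is the K3s default):

```shell
# Make the K3s kubeconfig available to a regular kubectl
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown "$USER" ~/.kube/config
export KUBECONFIG=~/.kube/config
kubectl get node
```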

2. The Serverless Framework: OpenFaaS

We will use arkade, a widely adopted CLI for installing Kubernetes apps, to deploy OpenFaaS.

# Install arkade
curl -sLS https://get.arkade.dev | sudo sh

# Install OpenFaaS with basic auth enabled
arkade install openfaas --load-balancer

# Check the deployment status
sudo k3s kubectl get pods -n openfaas

Pro Tip: If you are serving users in Oslo, keep your VDS in a local data center. The round-trip time (RTT) from Oslo to a Frankfurt data center is ~25-30 ms; from Oslo to a Norwegian data center, it is <5 ms. For an API gateway aggregating multiple function calls, that latency compounds quickly.
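With the pods running, the next step is authenticating against the gateway. Assuming faas-cli is installed (e.g. via arkade get faas-cli), the generated admin password lives in a Kubernetes secret:

```shell
# Fetch the auto-generated gateway password from the basic-auth secret
PASSWORD=$(sudo k3s kubectl get secret -n openfaas basic-auth \
  -o jsonpath="{.data.basic-auth-password}" | base64 --decode)

# Expose the gateway locally and log in with faas-cli
sudo k3s kubectl port-forward -n openfaas svc/gateway 8080:8080 &
echo -n "$PASSWORD" | faas-cli login --username admin --password-stdin
```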

Pattern: The Asynchronous Worker

One of the most powerful serverless patterns is offloading heavy processing (e.g., PDF generation, image resizing) to a background queue. In a public cloud, you might wire SNS to Lambda. On our Private FaaS stack, we use NATS (bundled with OpenFaaS) to handle the queueing automatically.

Here is a definition for a function that processes data asynchronously. We define this in a stack.yml file:

version: 1.0
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080
functions:
  invoice-processor:
    lang: node18
    handler: ./invoice-processor
    image: registry.coolvds-client.no/invoice-processor:latest
    labels:
      # Kubernetes label values must be strings, so quote the numbers
      com.openfaas.scale.min: "0"
      com.openfaas.scale.max: "15"
    annotations:
      # This is the key: asynchronous invocation topic
      topic: output.invoice.created

By setting the topic annotation, any event pushed to NATS on that topic will trigger this function. The system handles the retry logic and backpressure for you.
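A quick way to exercise the queue without wiring up a NATS publisher is the gateway's built-in async route, which enqueues the request and returns immediately. The gateway URL and payload below are illustrative; note that triggering via the topic annotation additionally requires deploying the NATS connector.

```shell
# Build, push and deploy the function defined in stack.yml
faas-cli up -f stack.yml

# Asynchronous invocation: the gateway answers 202 Accepted at once
# and the request is queued in NATS for a worker to pick up
curl -i -X POST http://127.0.0.1:8080/async-function/invoice-processor \
  -H "Content-Type: application/json" \
  -d '{"id": "inv-2023-001"}'
```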

The Handler Code (Node.js 18)

The beauty of this architecture is simplicity. Your developers do not need to learn Kubernetes manifests. They write standard code.

'use strict'

module.exports = async (event, context) => {
  // The watchdog hands over the raw request body; parse it as JSON
  const payload = JSON.parse(event.body);
  
  // Simulate heavy processing
  console.log(`Processing invoice ID: ${payload.id}`);
  
  // In a real scenario, we might store the result in a local DB
  // Low latency to the DB is critical here.
  
  return context
    .status(200)
    .succeed({ status: "processed", timestamp: new Date() });
}
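In practice a skeleton like the one above is generated rather than written from scratch. With faas-cli installed, scaffolding a Node 18 function takes two commands (the function name is illustrative):

```shell
# Pull the official OpenFaaS templates, then scaffold a function;
# this creates ./invoice-processor/handler.js plus a stack file entry
faas-cli template store pull node18
faas-cli new invoice-processor --lang node18
```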

Security & Compliance: The "Schrems II" Advantage

When you deploy this architecture on CoolVDS, you control the entire stack down to the OS level. There is no opaque hypervisor managed by a US entity. For Norwegian businesses dealing with health data or financial records, this distinction is paramount for compliance.

To secure your gateway, strict firewall rules are mandatory. Do not rely solely on the application layer.

# UFW configuration for a hardened VDS
sudo ufw default deny incoming
sudo ufw allow ssh
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
# Only allow access to the OpenFaaS gateway UI from your admin VPN IP
sudo ufw allow from 192.168.10.5 to any port 8080
sudo ufw enable

Cost Comparison: The TCO Reality

Let's look at the numbers. A high-traffic API on AWS API Gateway + Lambda + NAT Gateway (the hidden cost killer) can easily exceed $500/month for a medium-sized startup.

| Feature        | Public Cloud FaaS                       | CoolVDS Private FaaS          |
|----------------|-----------------------------------------|-------------------------------|
| Compute cost   | Per request + GB-second (unpredictable) | Fixed monthly (predictable)   |
| Data egress    | Expensive ($0.09/GB+)                   | Generous / included           |
| Cold starts    | Vendor controlled                       | Tunable (keep-alive settings) |
| Data residency | Complex (US CLOUD Act risks)            | 100% Norway                   |

Conclusion

Serverless is a powerful architectural pattern, but it shouldn't cost you your budget predictability or your compliance posture. By leveraging modern tools like K3s and OpenFaaS on robust infrastructure, you get the best of both worlds: the developer experience of FaaS and the control of a dedicated server.

However, remember that this stack demands I/O performance. Running Kubernetes on legacy storage is a recipe for instability.

Ready to build your private serverless cloud? Deploy a CoolVDS NVMe instance today and see the difference low latency makes for your API response times.