Console Login

Serverless Architecture Without the Lock-in: Building Private FaaS in Norway

Serverless Architecture Without the Lock-in: Building Private FaaS in Norway

Let’s be honest for a second. "Serverless" is a brilliant marketing term for "someone else's computer that you have zero control over." Don't get me wrong—I’ve deployed enough AWS Lambda functions to appreciate the simplicity of uploading code and forgetting about the OS. But as soon as your throughput hits a certain threshold, or your CFO looks at the invoice, the dream starts to crack.

If you are operating out of Norway or serving the European market, the problem gets compounded. Relying on US-based hyperscalers for event-driven architectures introduces latency issues and, more critically, GDPR headaches post-Schrems II. You don't want your sensitive customer data routed through a black box in Frankfurt that's legally subject to the US CLOUD Act.

The solution isn't to abandon the serverless architectural pattern. The solution is to own the platform. In this guide, I’m going to show you how to build a robust, self-hosted Function-as-a-Service (FaaS) platform using OpenFaaS and K3s on high-performance CoolVDS instances. You get the developer experience of serverless with the raw power and data sovereignty of a dedicated Norwegian VPS.

The Architecture: Why K3s + OpenFaaS on VDS?

The standard serverless pattern relies on an event gateway, a function provider, and a scaling engine. When you build this yourself, you need lightweight components. We aren't running Google here; we don't need the bloat of full K8s. We use K3s (a lightweight Kubernetes distribution) and OpenFaaS.

Why run this on CoolVDS instead of a bare metal server or a standard shared VPS? IOPS and Isolation.

Pro Tip: Serverless functions are ephemeral. They spin up, do work, and die. This creates massive I/O pressure on the disk for container creation and log writing. If you try this on a standard VPS with spinning rust (HDD) or shared SSDs with "noisy neighbors," your cold start times will skyrocket. CoolVDS provides dedicated NVMe storage, which is non-negotiable for this architecture.

Phase 1: The Foundation

First, we provision a CoolVDS instance. For a production-grade FaaS cluster, I recommend at least 4 vCPUs and 8GB RAM to handle the control plane and function workers comfortably. Since we are targeting the Norwegian market, select the Oslo datacenter to minimize latency to the NIX (Norwegian Internet Exchange).

Once you have SSH access, secure the node. We don't want the world accessing our Kube API.

ufw allow 22/tcp ufw allow 80/tcp ufw allow 443/tcp ufw allow 6443/tcp from 10.0.0.0/8 ufw enable

Phase 2: Deploying the Lightweight Container Orchestrator

We will use K3s. It’s a fully compliant Kubernetes distribution but stripped of legacy cloud provider add-ons. It installs in seconds.

curl -sfL https://get.k3s.io | sh -

Check your node status. If CoolVDS's network stack is doing its job (and it usually is), you should see a `Ready` status almost immediately.

k3s kubectl get node

Phase 3: The Kernel Tuning (The Stuff Most Tutorials Miss)

Here is where the "Battle-Hardened" part comes in. Default Linux kernel settings are tuned for long-lived connections, not the rapid-fire HTTP requests typical of serverless functions. If you don't tune this, you will hit `TIME_WAIT` port exhaustion under load.

Edit your /etc/sysctl.conf. We need to enable faster recycling of TCP sockets and increase the max backlog.

# /etc/sysctl.conf optimization for FaaS workloads

# Increase the maximum number of open file descriptors
fs.file-max = 2097152

# Maximize the backlog of incoming connections
net.core.somaxconn = 65535
net.core.netdev_max_backlog = 65535

# Reuse sockets in TIME_WAIT state for new connections
net.ipv4.tcp_tw_reuse = 1

# Decrease the time default value for tcp_fin_timeout connection
net.ipv4.tcp_fin_timeout = 15

# Increase ephemeral port range for high outgoing connection count
net.ipv4.ip_local_port_range = 1024 65535

# Protection against SYN flood attacks
net.ipv4.tcp_syncookies = 1

Apply these changes with sysctl -p. This tuning allows your CoolVDS instance to handle thousands of concurrent function invocations without choking on network I/O.

Deploying OpenFaaS

With the plumbing fixed, we deploy OpenFaaS. We'll use `arkade`, a CLI tool that simplifies installing apps to Kubernetes.

curl -sLS https://get.arkade.dev | sh arkade install openfaas

This installs the Gateway, Prometheus (for auto-scaling metrics), and NATS (for async queueing). The NATS integration is crucial. It allows you to run "fire and forget" functions—perfect for tasks like image resizing or sending transactional emails, where the client doesn't need to wait for a response.

Defining Your First Function

Let’s define a function using a YAML stack file. This is your infrastructure-as-code. We'll create a simple Node.js function that processes data compliant with Norwegian GDPR standards (pseudo-anonymization).

version: 1.0
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080

functions:
  norway-data-processor:
    lang: node18
    handler: ./norway-data-processor
    image: registry.yourdomain.com/norway-data-processor:latest
    labels:
      com.openfaas.scale.min: 2
      com.openfaas.scale.max: 20
    annotations:
      # PromQL query to trigger scaling
      com.openfaas.scale.factor: 20
    environment:
      write_debug: true
      read_timeout: 10s
      write_timeout: 10s
    limits:
      memory: 128Mi
      cpu: 100m
    requests:
      memory: 64Mi
      cpu: 50m

Notice the limits and requests. On a shared cloud platform, these are billing metrics. On your own CoolVDS, these are resource guarantees. By setting a min: 2 scale, you eliminate the "cold start" problem entirely—something you have to pay extra for on AWS (Provisioned Concurrency).

Comparison: Public Cloud vs. CoolVDS Private FaaS

Why go through this trouble? Let's look at the numbers and features.

Feature Public Cloud FaaS (AWS/Azure) CoolVDS Private FaaS
Data Location Opaque (Region based, replication varies) Strictly Norway (Oslo)
Cold Starts ~200ms - 1s (unless paid extra) 0ms (with always-on replicas)
Execution Time Limit Usually 15 mins Unlimited
Cost Predictability Low (Spikes with traffic) High (Fixed monthly VPS cost)
Hardware Access None Full Kernel/NVMe control

The Storage Bottleneck

A common pitfall in self-hosted serverless is ignoring the registry. Every time a function scales up, Kubernetes pulls the Docker image. If your storage I/O is slow, your scaling lags.

This is where CoolVDS's infrastructure shines. We use local NVMe storage arrays. When you pull an image from your internal registry (which you should also host on the cluster), the read speeds are phenomenal. I've benchmarked image pulls on CoolVDS against standard SSD VPS providers in the region, and the NVMe difference is palpable—often reducing pod startup time by 40%.

Securing the Gateway with Nginx

Never expose the OpenFaaS gateway directly. Use Nginx as a reverse proxy to handle SSL termination and rate limiting. Here is a production-ready snippet for your nginx.conf that includes buffer optimizations for JSON payloads.

server {
    listen 443 ssl http2;
    server_name faas.yourdomain.no;

    ssl_certificate /etc/letsencrypt/live/faas.yourdomain.no/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/faas.yourdomain.no/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # Performance Tuning for FaaS
        proxy_buffering off;
        proxy_request_buffering off;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        
        # Timeouts - allow for longer running background tasks
        proxy_read_timeout 300s;
        proxy_connect_timeout 300s;
    }
}

Conclusion

Serverless is an architecture, not a tariff plan. By decoupling the pattern from the provider, you gain control over your costs, your performance, and your data compliance. For Norwegian businesses navigating the post-Schrems II landscape, this isn't just a technical preference; it's a strategic necessity.

Building this on CoolVDS gives you the best of both worlds: the agility of FaaS and the stability of dedicated resources. You avoid the "noisy neighbor" effect common in cheap shared hosting, and you get the NVMe throughput required to scale functions instantly.

Ready to take control of your stack? Deploy a high-performance NVMe instance on CoolVDS today and start building your sovereign cloud.