Serverless Patterns Without the Lock-in: Implementing FaaS on Nordic Infrastructure

Let’s clear the air: "Serverless" is a misnomer. There are always servers. The only variables are who manages them, where they reside, and how much you pay when a recursive loop accidentally triggers a million invocations.

For DevOps teams operating in Norway and the broader EEA, the landscape changed abruptly in July 2020. The CJEU's Schrems II ruling invalidated the EU-US Privacy Shield. Suddenly, piping user data through US-owned hyperscaler functions (AWS Lambda, Azure Functions) became a legal minefield. If you handle Norwegian citizens' data, relying on US clouds is no longer just a technical choice; it's a compliance risk.

But the pattern of Serverless—event-driven code, ephemeral containers, and auto-scaling—remains brilliant. The solution isn't to abandon the architecture. The solution is to own the platform.

This guide breaks down how to implement a sovereign Serverless architecture using OpenFaaS on top of CoolVDS NVMe instances. We get the developer experience of FaaS with the cost-predictability and compliance of a Norwegian VPS.

The Architecture: Why K3s + OpenFaaS?

In 2017, everyone tried to run full Kubernetes clusters on minimal hardware. It was a disaster. Today, in late 2020, we have K3s (a lightweight Kubernetes distribution). It strips out the legacy cloud provider add-ons and runs efficiently on a single VPS with as little as 1GB RAM, though for production FaaS, we recommend at least 4GB.

OpenFaaS sits on top. It gives you the API gateway, the UI, and the function watchdog. You push a Docker container; it handles the scaling.

Step 1: The Infrastructure Layer

Latency matters. If your users are in Oslo, your FaaS endpoints shouldn't be resolving in Frankfurt. CoolVDS instances peer directly at NIX (Norwegian Internet Exchange), dropping round-trip times to single-digit milliseconds. Just as importantly, FaaS is I/O-intensive: a cold start is mostly disk reads, because spinning up a container means pulling image layers from disk into memory.

If you try this on standard HDD or cheap SSD hosting, you will hit I/O wait. CoolVDS uses NVMe. It makes the difference between a 2-second cold start and a 200ms cold start.

Step 2: OS Tuning for High Concurrency

Before installing K3s, you must prep the kernel. Linux defaults are conservative. Serverless workloads generate massive amounts of short-lived network connections.

Open /etc/sysctl.conf and apply these settings to prevent port exhaustion:

# /etc/sysctl.conf
# Increase the maximum number of open file descriptors
fs.file-max = 2097152

# Allow more connections to queue up
net.core.somaxconn = 65535

# Reuse TIME_WAIT sockets (critical for high request volume)
net.ipv4.tcp_tw_reuse = 1

# Increase port range
net.ipv4.ip_local_port_range = 1024 65000

Apply with sysctl -p. Don't skip this. I've seen API gateways choke under load solely because the kernel ran out of ephemeral ports.
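
A quick sanity check that the values took effect (sysctl accepts multiple keys at once):

# Reload and spot-check the new values
sudo sysctl -p
sudo sysctl net.core.somaxconn net.ipv4.ip_local_port_range net.ipv4.tcp_tw_reuse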

Step 3: Deploying the Stack

We assume you have a fresh CoolVDS instance running Ubuntu 20.04 LTS.

1. Install K3s (no Docker required; K3s bundles containerd):

curl -sfL https://get.k3s.io | sh -

# Check status
sudo k3s kubectl get node
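
If you'd rather use plain kubectl (and let arkade find the cluster in the next step), point KUBECONFIG at the file K3s writes by default:

# K3s drops its kubeconfig here by default
# (the file is root-owned; run as root or copy it if you hit permission errors)
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
kubectl get nodes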

2. Install OpenFaaS using arkade (the modern installer for 2020):

# Get arkade
curl -sLS https://dl.get-arkade.dev | sudo sh

# Install OpenFaaS
arkade install openfaas

This installs the core services: Gateway, NATS (messaging), and Prometheus (metrics). Note that NATS is lightweight and perfect for the async queueing required in serverless patterns.
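
Arkade prints the next steps after the install; they boil down to grabbing faas-cli, forwarding the gateway, and logging in with the generated admin password (the standard OpenFaaS flow):

# Get the CLI
arkade get faas-cli

# Forward the gateway to localhost
kubectl port-forward -n openfaas svc/gateway 8080:8080 &

# Retrieve the generated admin password and log in
PASSWORD=$(kubectl get secret -n openfaas basic-auth \
  -o jsonpath="{.data.basic-auth-password}" | base64 --decode)
echo -n $PASSWORD | faas-cli login --username admin --password-stdin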

The "War Story": Database Connections in Serverless

Here is where 90% of developers fail. I recently audited a setup for a local e-commerce site. They moved their catalog search to a serverless function. Every time a user searched, the function spun up, opened a connection to MySQL, ran the query, and died.

Under load (Black Friday), they hit the max_connections limit on their database in 3 minutes. The site went dark.

The Fix: When you control the infrastructure (CoolVDS), you can use connection pooling properly. We deployed ProxySQL alongside the database on a private network interface.
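
A minimal sketch of that setup (addresses and credentials here are placeholders; ProxySQL's client-facing port defaults to 6033). Functions connect to ProxySQL instead of MySQL, and ProxySQL multiplexes their short-lived connections onto a small, persistent backend pool:

# /etc/proxysql.cnf (sketch -- placeholder addresses and credentials)
mysql_variables =
{
    interfaces="10.0.0.3:6033"   # private interface the functions connect to
}

mysql_servers =
(
    # cap backend connections well below MySQL's max_connections
    { address="10.0.0.2", port=3306, hostgroup=0, max_connections=200 }
)

mysql_users =
(
    { username="faas_app", password="change-me", default_hostgroup=0 }
)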

If you are running a high-traffic MySQL backend for your functions, check your my.cnf config. The default innodb_buffer_pool_size is often set to a paltry 128MB. On a CoolVDS instance with 8GB RAM, we bump this significantly:

[mysqld]
# 70-80% of available RAM is standard for dedicated DB nodes
innodb_buffer_pool_size = 6G
max_connections = 1000

This allows the buffer pool to cache the hot data, reducing NVMe reads even further. Your functions get data faster, execution time drops, and your system can handle the concurrency.
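
The buffer pool size is read at startup (MySQL 5.7+ can also resize it online), so restart and verify:

sudo systemctl restart mysql
sudo mysql -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"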

Function Definition Example

With OpenFaaS, you define behavior in YAML. This is infrastructure-as-code compliant, perfect for GitOps workflows.

stack.yml:

version: 1.0
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080
functions:
  image-resizer:
    lang: python3
    handler: ./image-resizer
    image: registry.coolvds-client.no/image-resizer:latest
    labels:
      com.openfaas.scale.min: "2"
      com.openfaas.scale.max: "20"
    environment:
      write_timeout: 10s
      read_timeout: 10s

Notice com.openfaas.scale.min: "2". It keeps two replicas warm at all times, eliminating cold starts for the first few concurrent requests. On a hyperscaler you pay for that idle capacity whether you use it or not. On CoolVDS, you are already paying a flat monthly fee for the VM, so keeping containers warm costs you nothing extra.
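
Deploying from that file is a single command, assuming you are logged in to the gateway as described above:

# Build, push and deploy in one step
faas-cli up -f stack.yml

# Or deploy a pre-built image without rebuilding
faas-cli deploy -f stack.yml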

Security: The Schrems II Advantage

By hosting this stack on CoolVDS in our Oslo datacenter, you achieve:

  • Data Residency: The data never leaves Norway.
  • Encryption: Use WireGuard (available in Linux Kernel 5.6+, standard in Ubuntu 20.04) to create an encrypted mesh between your FaaS node and your database node.
  • DDoS Protection: Our network edge filters volumetric attacks before they hit your K3s ingress.

Pro Tip: Don't expose the OpenFaaS gateway directly to the internet. Put an Nginx reverse proxy in front of it with basic auth or mTLS, and use ufw to block all ports except 80, 443, and your WireGuard port.
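
A minimal ufw ruleset along those lines (this sketch assumes WireGuard on its default port 51820, and that you keep SSH reachable, either over WireGuard or by allowing 22/tcp, before enabling the firewall):

sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 80/tcp      # HTTP  -> Nginx
sudo ufw allow 443/tcp     # HTTPS -> Nginx
sudo ufw allow 51820/udp   # WireGuard (default port; adjust to yours)
sudo ufw allow 22/tcp      # drop this line if SSH rides over WireGuard
sudo ufw enable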

Conclusion

The flexibility of Serverless is undeniable. But in late 2020, the vendor lock-in and data-privacy risks are too high to ignore. You don't need AWS to build event-driven systems.

You need a solid kernel, fast NVMe storage, and a pipe to the internet that doesn't route through Virginia. By layering OpenFaaS on CoolVDS, you get the best of both worlds: the agility of FaaS and the stability of a dedicated virtual server.

Ready to build your sovereign cloud? Deploy a high-memory CoolVDS instance today and get K3s running in under 5 minutes.