Serverless Without the Vendor Lock-in: Building High-Performance Event Architectures in Norway
Let’s cut through the marketing noise. "Serverless" is a lie. There are always servers. The only difference is whether you control them, or whether you're renting them by the millisecond, at a steep markup, from a hyperscaler in Frankfurt. I’ve spent the last decade debugging distributed systems, and I’ve seen the bill shock that hits CTOs when they realize their "cheap" Lambda functions are suddenly costing more than a rack of dedicated metal.
If you are building for the Norwegian market, relying on public cloud functions introduces two massive headaches: latency and sovereignty. A round trip from a user in Trondheim to a data center in Ireland and back involves physics you can't optimize away. Furthermore, with the Norwegian Data Protection Authority (Datatilsynet) scrutinizing data transfers post-Schrems II, keeping your compute closer to home isn't just about speed; it's about survival.
Here is the battle-tested pattern I use: Self-Hosted Serverless using K3s and OpenFaaS on high-performance KVM instances. It gives you the developer experience of FaaS (Functions as a Service) with the predictability of a VPS.
The Architecture: Why KVM Matters
You cannot build a stable orchestration layer on top of garbage infrastructure. If you try to run Kubernetes (even the lightweight K3s) on an oversold OpenVZ container, you will hit walls immediately: container-based virtualization shares the host kernel, so the cgroup and namespace controls the orchestrator depends on are restricted or missing. You need hardware virtualization.
This is where I typically deploy CoolVDS instances. Their KVM stack ensures that when my orchestrator asks for CPU cycles, it actually gets them. No "noisy neighbors" stealing my I/O operations. When you are processing webhooks asynchronously, disk I/O wait times are the silent killer of throughput.
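Before installing anything, it’s worth a ten-second sanity check that you are actually on hardware virtualization and not a container pretending to be one. `systemd-detect-virt` ships with systemd on any modern distro:

# Confirm the virtualization type before building on top of it
systemd-detect-virt
# "kvm" is what you want to see here.
# "openvz" or "lxc" means K3s will fight the host kernel at every turn.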
Step 1: The Foundation (Lightweight Kubernetes)
We don't need the bloat of full K8s. We need K3s. It’s a certified Kubernetes distribution built for IoT and Edge computing, but it’s perfect for single-node VPS deployments in Oslo.
Log into your CoolVDS instance (I recommend at least 2 vCPUs and 4GB RAM for this pattern), bring the package index up to date, then install K3s:
# Prepare the environment
sudo apt-get update && sudo apt-get install -y curl
# Install K3s (to swap the bundled Traefik for Nginx, pass INSTALL_K3S_EXEC="--disable traefik"; for this we keep it)
curl -sfL https://get.k3s.io | sh -
# Verify the node is ready (usually takes about 20 seconds on NVMe storage)
sudo k3s kubectl get node
If you see your node status as Ready, you have a functional cluster. On a CoolVDS NVMe plan, this installation typically completes in under 30 seconds due to the high disk write speeds.
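One optional quality-of-life tweak before we go further: typing `sudo k3s kubectl` for every command gets old. K3s writes its kubeconfig to a fixed path, so you can copy it to your user and use plain `kubectl` (or keep the prefix; the rest of this article uses it for clarity):

# Optional: make the cluster reachable without the "sudo k3s" prefix
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown "$(id -u):$(id -g)" ~/.kube/config
export KUBECONFIG=~/.kube/config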
Step 2: Deploying the Function Engine
We will use OpenFaaS. It’s container-centric, meaning any Docker container can be a "function." This removes the runtime limitations you find on AWS or Azure. To install it, we use arkade, a CLI tool that simplifies installing Helm charts.
# Get arkade
curl -sLS https://get.arkade.dev | sudo sh
# Install OpenFaaS on K3s
arkade install openfaas
# Check the rollout status
sudo k3s kubectl rollout status -n openfaas deploy/gateway
Pro Tip: Always expose your gateway behind a secure reverse proxy. Do not open port 8080 directly to the world. Use the CoolVDS firewall or `ufw` to restrict access to the gateway administration endpoints to your management IP only.
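As a minimal sketch of that lockdown with `ufw` (the 443 rule assumes a TLS-terminating reverse proxy in front of the gateway; adjust ports to your own setup):

# Default-deny inbound, then allow only SSH and the public proxy port
sudo ufw default deny incoming
sudo ufw allow 22/tcp
sudo ufw allow 443/tcp
sudo ufw enable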
Step 3: Building a High-Performance Function
Let’s create a function that handles image resizing—a common task that usually costs a fortune in cloud egress fees. With a VPS in Norway, you often have generous bandwidth allowances (CoolVDS offers unmetered traffic on many plans), making this pattern economically superior.
First, install the CLI:
curl -sL https://cli.openfaas.com | sudo sh
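Before the CLI can deploy anything, it needs to authenticate against the gateway. arkade prints similar instructions at install time; the gist, using the basic-auth secret the chart generates, looks like this:

# Reach the gateway over a local port-forward rather than an exposed port
sudo k3s kubectl port-forward -n openfaas svc/gateway 8080:8080 &

# Fetch the auto-generated admin password and log the CLI in
PASSWORD=$(sudo k3s kubectl get secret -n openfaas basic-auth \
  -o jsonpath="{.data.basic-auth-password}" | base64 --decode)
echo -n "$PASSWORD" | faas-cli login --username admin --password-stdin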
Now, generate a Python function skeleton:
faas-cli new --lang python3-http image-resizer
This creates a handler.py inside the image-resizer folder. Let's make it do actual work. We'll use Pillow for image processing, so add `Pillow` to the function's `requirements.txt` so the build pulls it in.
import io

from PIL import Image


def handle(event, context):
    # The python3-http template passes an event (method, body, headers)
    # and a context object; both arguments are required.
    if event.method != "POST":
        return {"statusCode": 405, "body": "Method not allowed"}

    try:
        # The raw request body is the binary image data
        img = Image.open(io.BytesIO(event.body))

        # JPEG has no alpha channel, so normalize RGBA/PNG input first
        img = img.convert("RGB")
        img = img.resize((128, 128))

        # Serialize the thumbnail back to bytes
        buf = io.BytesIO()
        img.save(buf, format="JPEG")

        return {
            "statusCode": 200,
            "headers": {"Content-Type": "image/jpeg"},
            "body": buf.getvalue(),
        }
    except Exception as e:
        return {"statusCode": 500, "body": str(e)}
Step 4: Tuning for Throughput
The default configurations are rarely sufficient for production, so tune the autoscaler. Out of the box, OpenFaaS scales on request rate (Prometheus alerts firing through AlertManager), not CPU load. In your `stack.yml`, add labels to pin the replica range and control how aggressively it scales:
functions:
  image-resizer:
    lang: python3-http
    handler: ./image-resizer
    image: docker.io/myrepo/image-resizer:latest
    labels:
      com.openfaas.scale.min: "1"
      com.openfaas.scale.max: "15"
      com.openfaas.scale.factor: "20"
    annotations:
      com.openfaas.health.http.initialDelay: "5s"
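To confirm the labels actually bite, drive sustained load at the function and watch the replica count move. `hey` is one load generator that can POST a file body (any similar tool works; test.jpg is the same stand-in file as before):

# From one terminal: 30 seconds of sustained POST traffic
hey -z 30s -m POST -D test.jpg http://127.0.0.1:8080/function/image-resizer

# From another: watch replicas scale in the functions namespace
sudo k3s kubectl get deploy -n openfaas-fn -w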
The NVMe Advantage
When a function scales up from zero (a cold start), the container image must be pulled from the registry and extracted to disk. This is where standard HDD VPS hosting fails: extracting layers is I/O intensive. On CoolVDS NVMe storage, I’ve benchmarked container start times at 300-400ms, compared to 2-3 seconds on standard SSD or spinning rust. That difference is perceptible to your users.
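If you want a rough feel for your own disk, timing an image pull-and-extract with the crictl bundled in K3s is a reasonable proxy (the image choice here is arbitrary):

# Time a pull plus layer extraction on local storage
time sudo k3s crictl pull docker.io/library/python:3.11-slim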
Comparison: Managed Cloud vs. Self-Hosted on CoolVDS
| Feature | Hyperscaler (AWS/Azure) | Self-Hosted (CoolVDS) |
|---|---|---|
| Data Location | Frankfurt / Stockholm | Oslo (Local) |
| Cost Predictability | Variable (Pay per invocation) | Fixed (Monthly Flat Rate) |
| Execution Limit | Capped (e.g. 15 minutes on AWS Lambda) | Unlimited |
| Cold Start | Unpredictable | Tunable / Zero (if min=1) |
Security and Compliance (The Norwegian Context)
Running this architecture allows you to enforce strict firewall rules. Unlike a public API gateway where you rely on IAM roles, here you control the network interface. Use `iptables` to lock down the cluster traffic.
# Allow the K3s API (port 6443) only from your VPN or management IP
iptables -A INPUT -p tcp --dport 6443 -s 192.168.1.50 -j ACCEPT
iptables -A INPUT -p tcp --dport 6443 -j DROP
# Persist the rules across reboots (e.g. with the iptables-persistent package)
Furthermore, storing temporary data on ephemeral volumes within Norway simplifies your GDPR compliance posture. You aren't inadvertently replicating PII to a backup bucket in the US East region.
Conclusion
Serverless isn't about getting rid of servers; it's about abstracting them intelligently. By owning the infrastructure layer with a high-performance provider like CoolVDS, you gain the benefits of event-driven architecture without the latency penalties or legal grey areas of the public cloud.
Don't let slow I/O kill your application's responsiveness. Your code deserves better than a shared CPU slice.
Ready to build? Deploy a high-frequency NVMe instance on CoolVDS today and get your cluster running in under 55 seconds.