The "No-Ops" Lie and the Reality of European Data Sovereignty
Let’s clear the air immediately. There is no such thing as "Serverless." There are simply other people's servers. And when those servers belong to US-based hyperscalers, you inherit their latency, their opaque pricing models, and—most critically for us operating in Norway—their legal baggage regarding GDPR and Schrems II.
I have spent the last decade architecting distributed systems across the Nordics. The allure of AWS Lambda or Azure Functions is obvious: git push and go. But for serious engineering teams, the trade-offs are becoming unacceptable. Cold starts of 500ms+ kill user experience. Vendor lock-in turns migration into a six-month nightmare. And the Datatilsynet (Norwegian Data Protection Authority) is watching cross-border data transfers like a hawk.
The solution isn't to abandon the event-driven pattern. It's to own it. By deploying a Private FaaS (Function-as-a-Service) architecture on high-performance Virtual Dedicated Servers (VDS), we gain the developer velocity of serverless with the raw I/O performance of local NVMe storage.
The Stack: Kubernetes, OpenFaaS, and NATS
In late 2022, the most robust stack for a private serverless implementation relies on Kubernetes (specifically lighter distributions like K3s for VDS environments) paired with OpenFaaS. This setup gives you the "scale-to-zero" capability without sending your data to a data center in Frankfurt or Virginia.
But software implies hardware. Kubernetes control planes are notoriously sensitive to disk latency. If you run etcd on a standard HDD or shared SATA SSDs, your cluster will flap. I've seen it happen: Raft heartbeats time out, leader elections churn, and the API server becomes unreachable. This is why we benchmark CoolVDS instances against standard cloud VPS offerings—our NVMe arrays deliver the low-latency fsyncs required to keep etcd happy.
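You can sanity-check a node before trusting it with a control plane. The etcd team publishes a fio recipe that mimics etcd's write-ahead-log pattern; run it against the disk that will hold your datastore (the target directory below is illustrative):

# Small sequential writes with an fdatasync after each one, just like etcd's WAL.
# Watch the fdatasync percentiles in the output.
mkdir -p /var/lib/rancher/k3s-bench
fio --rw=write --ioengine=sync --fdatasync=1 \
    --directory=/var/lib/rancher/k3s-bench --size=22m --bs=2300 --name=etcd-check

The etcd guidance is a 99th-percentile fdatasync under 10ms; a healthy NVMe volume usually lands well under 1ms.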
Phase 1: The Infrastructure Layer
Don't install a bloated K8s distribution. Use K3s. It strips out in-tree cloud provider plugins and other legacy components, ships as a single binary, and idles at roughly 500MB of RAM. On a CoolVDS instance with 4 vCPUs, that leaves ample room for your actual functions.
# Install K3s without the Flannel backend and network-policy controller (we'll use Cilium) and without Traefik
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--flannel-backend=none --disable-network-policy --disable traefik" sh -
Once the node is ready, check your I/O wait times. A high iowait means your provider is stealing cycles or overselling storage.
# Check disk latency profile
ioping -c 10 -D /var/lib/rancher/k3s/agent/
If you see latency spiking above 5ms on a "high performance" VPS, move providers. Your functions won't scale if the underlying container runtime is waiting on disk.
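Because we told K3s not to ship Flannel, the cluster has no CNI yet and pods will sit in Pending until you add one. A minimal Cilium install via its Helm chart might look like this (chart version and values are illustrative; check the Cilium docs for your cluster):

helm repo add cilium https://helm.cilium.io/
helm repo update
# Single-node K3s: one operator replica is enough
helm install cilium cilium/cilium --namespace kube-system \
  --set operator.replicas=1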
The Event Bus: Replacing SQS with NATS JetStream
A serverless architecture is useless without a reliable event bus. In the cloud, you'd use SQS or EventBridge. On your own infrastructure, NATS JetStream is the superior choice. It is written in Go, compiles to a single binary, and handles millions of messages per second with minimal CPU overhead.
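To make the pattern concrete, here is a minimal sketch of publishing an event into JetStream with the nats-py client. The stream name, subject, and payload are illustrative:

import asyncio
import nats

async def main():
    nc = await nats.connect("nats://127.0.0.1:4222")
    js = nc.jetstream()
    # Create a stream that captures all image events
    await js.add_stream(name="IMAGES", subjects=["image.>"])
    # Publish an event; JetStream acks once it is persisted
    ack = await js.publish("image.created", b'{"key": "uploads/cat.jpg"}')
    print(f"persisted as sequence {ack.seq}")
    await nc.close()

asyncio.run(main())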
Here is a production-ready configuration for NATS that ensures persistence (writing to that fast NVMe layer) so you don't lose events if a pod crashes:
# nats-server.conf
server_name: nats-norway-01

jetstream {
  store_dir: /data/jetstream
  max_memory_store: 1G
  max_file_store: 10G
}

cluster {
  name: nats-cluster
  listen: 0.0.0.0:6222
  routes: [
    nats://10.10.10.2:6222
  ]
}
Pro Tip: Always map your store_dir to a dedicated volume or partition. If NATS fills up the root partition, it can crash the OS, taking your SSH access with it.
Deploying the Function Watchdog
OpenFaaS uses a component called the "Watchdog" to proxy requests to your functions. The classic watchdog forks your process per request and marshals the HTTP body over stdin/stdout (the newer of-watchdog speaks HTTP instead), allowing you to write functions in any language—even Bash. This effectively turns any binary into a scalable microservice.
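As a sketch of how little that takes, the multi-stage pattern below wraps an arbitrary binary with the classic watchdog. The image tag and fprocess command are placeholders; check the OpenFaaS docs for current versions:

FROM ghcr.io/openfaas/classic-watchdog:0.2.1 AS watchdog

FROM alpine:3.16
COPY --from=watchdog /fwatchdog /usr/bin/fwatchdog
# Any program that reads stdin and writes stdout becomes a function
ENV fprocess="wc -l"
EXPOSE 8080
CMD ["fwatchdog"]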
Here is how you define a function stack that processes image resizing events—a classic resource-heavy task that gets expensive on public cloud FaaS due to execution time billing.
version: 1.0
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080
functions:
  image-resizer:
    lang: python3-http
    handler: ./image-resizer
    image: registry.coolvds.com/image-resizer:0.4.2
    labels:
      com.openfaas.scale.min: 1
      com.openfaas.scale.max: 20
    annotations:
      topic: image.created
    environment:
      write_debug: true
      read_timeout: 30s
      write_timeout: 30s
Note the com.openfaas.scale.min: 1 label. Unlike AWS Lambda, where keeping a function "warm" costs extra (Provisioned Concurrency), on your own VDS, the marginal cost of a sleeping container is near zero. This eliminates the dreaded "cold start" latency for your critical paths.
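For completeness, a handler for the python3-http template might look like the sketch below. Pillow is an assumed dependency here; add pillow to the function's requirements.txt:

# ./image-resizer/handler.py
import io

from PIL import Image  # assumed dependency, declared in requirements.txt

def handle(event, context):
    # event.body carries the raw image bytes from the HTTP request
    img = Image.open(io.BytesIO(event.body))
    img.thumbnail((800, 800))  # shrink in place, preserving aspect ratio
    out = io.BytesIO()
    img.save(out, format="JPEG", quality=85)
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "image/jpeg"},
        "body": out.getvalue(),
    }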
The Norwegian Context: Latency and Law
Why go through this trouble? Two reasons: Physics and Lawyers.
1. The Speed of Light
If your users are in Oslo, Bergen, or Trondheim, routing traffic to a data center in Ireland or Stockholm adds unnecessary milliseconds. Round-trip time (RTT) matters. By hosting on CoolVDS infrastructure located directly in Norway, peering via NIX (Norwegian Internet Exchange), you are physically closer to your customers. Your API feels snappier because it is snappier.
2. Schrems II and GDPR
Since the Schrems II ruling, transferring personal data to US-controlled clouds (even their EU regions) is legally fraught. The CLOUD Act allows US agencies to subpoena data stored by US companies anywhere in the world. Hosting on a Norwegian VDS provider like CoolVDS, which falls strictly under Norwegian and EEA jurisdiction, simplifies your compliance architecture significantly. You know exactly where the bits live.
Performance Tuning: The Kernel Level
Running high-density containers requires kernel tuning. Default Linux settings are conservative. For a FaaS environment, you need to handle bursty network traffic.
Add this to your /etc/sysctl.conf:
# Allow more connections to complete
net.core.somaxconn = 4096
# Reuse connections in TIME_WAIT state
net.ipv4.tcp_tw_reuse = 1
# Increase range of local ports for high throughput
net.ipv4.ip_local_port_range = 1024 65535
# Increase max open files for high concurrency
fs.file-max = 2097152
Apply with sysctl -p. These settings allow your NATS bus and OpenFaaS gateway to handle thousands of concurrent invocations without hitting file descriptor limits.
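One caveat: fs.file-max is the system-wide ceiling, while the per-process limit for the K3s service itself comes from systemd. If you still see "too many open files" under load, raise it with a drop-in (the value below is a generous starting point, not gospel):

# /etc/systemd/system/k3s.service.d/limits.conf
[Service]
LimitNOFILE=1048576

Then run systemctl daemon-reload && systemctl restart k3s to pick it up.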
Conclusion: Take Back Control
Serverless is powerful. But "Serverless" should describe your developer experience, not your lack of control over infrastructure. By combining the OpenFaaS pattern with the raw power of NVMe-backed CoolVDS instances, you get the best of both worlds: the agility of event-driven code and the stability of dedicated hardware.
Stop renting execution time by the millisecond. Build a platform that you own.
Ready to build your private cloud? Deploy a high-performance NVMe VDS in Norway today and start your K3s cluster in under 60 seconds with CoolVDS.