Serverless Without the Lock-in: Architecture Patterns for the Pragmatic Norwegian CTO
Let's clear the air. "Serverless" is the most misleading term in our industry since "The Cloud." There are always servers. The only variable is who manages them, how much they charge you when your code goes into an infinite loop, and, crucially for us operating in the EEA, who has legal access to the physical drives.
It is October 2022. The hype cycle for FaaS (Functions as a Service) has settled. We know the benefits: granular scaling and reduced operational overhead. We also know the pain points: cold starts, vendor lock-in, and the unpredictable billing model that can turn a $50 project into a $5,000 nightmare overnight.
For a Norwegian business, there is a third, darker problem: Schrems II. If you are piping personal data through a US-managed hyper-scaler's black-box function, are you compliant? Datatilsynet (The Norwegian Data Protection Authority) has been increasingly vocal about data transfers.
This article isn't about how to write a "Hello World" function. It is about architectural patterns that work in production, and how to implement them without surrendering your infrastructure autonomy. Sometimes, the best serverless platform is the one you host yourself, on hardware with bare-metal performance.
Pattern 1: The Asynchronous Decoupler (Queue-Based Leveling)
The most common mistake I see is treating FaaS as a direct replacement for HTTP endpoints in a synchronous flow. You have a frontend waiting for a backend, which triggers a function, which queries a database. If that function hits a cold start (often 200ms - 2s), your user bounces.
The Fix: Decouple the ingestion from the processing.
In this pattern, your API Gateway accepts the request, pushes it to a queue (like NATS or RabbitMQ), and immediately returns a 202 Accepted. The function worker picks up the job asynchronously.
Implementation Strategy
If you are running on CoolVDS, you don't need expensive managed queues. You can deploy a lightweight NATS Streaming server. It consumes minimal resources but handles thousands of messages per second.
# Deploying NATS on a standard Docker host
docker run -d --name nats-streaming \
-p 4222:4222 -p 8222:8222 \
nats-streaming:0.24.6 \
-store file -dir /datastore -m 8222
By controlling the queue infrastructure on a VPS with low-latency NVMe storage, you eliminate the network hop latency often found in distributed managed cloud services.
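To make the flow concrete, here is a minimal sketch of the ingest-and-return-202 pattern using the nats-py client (pip install nats-py). The subject name and payload are illustrative, and plain NATS pub/sub is shown for brevity; if you need NATS Streaming's at-least-once delivery guarantees, use its dedicated stan client instead.
import asyncio
import json

import nats  # pip install nats-py


async def ingest(payload: dict) -> int:
    """API-layer side: enqueue the job and return immediately."""
    nc = await nats.connect("nats://127.0.0.1:4222")
    await nc.publish("orders.ingest", json.dumps(payload).encode())
    await nc.drain()
    return 202  # the HTTP layer sends "202 Accepted" to the client


async def worker():
    """Function-worker side: consume jobs off the request path."""
    nc = await nats.connect("nats://127.0.0.1:4222")

    async def handle(msg):
        job = json.loads(msg.data)
        # ...heavy processing happens here, invisible to the user...
        print(f"processed: {job}")

    await nc.subscribe("orders.ingest", cb=handle)
    await asyncio.Event().wait()  # keep the worker alive


if __name__ == "__main__":
    asyncio.run(worker())
The user never waits on the heavy work; the only synchronous cost is a single publish to a queue sitting on the same low-latency network.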
Pattern 2: The "Strangler Fig" for Monolith Migration
You have a legacy PHP or Java monolith. It's slow, it's heavy, but it makes money. Rewriting it entirely is suicide. Instead, use the Strangler Fig pattern: place an API Gateway in front of the monolith and gradually peel individual routes off to serverless functions.
The War Story: In early 2022, we helped a logistics firm in Oslo migrate their tracking system. The core ERP remained on their legacy stack, but the "Track Package" endpoint, which received 90% of the traffic, was moved to a function.
We used Nginx as the traffic director. This is a configuration many forget is possible. You don't need a cloud load balancer; you need a solid nginx.conf.
http {
    upstream legacy_backend {
        server 10.0.0.5:8080;
    }

    upstream faas_gateway {
        server 127.0.0.1:8080;  # OpenFaaS Gateway
    }

    server {
        listen 80;
        server_name api.logistics-norway.no;

        # Route old traffic to monolith
        location / {
            proxy_pass http://legacy_backend;
        }

        # Strangler pattern: intercept tracking requests
        location /api/v1/track {
            proxy_pass http://faas_gateway/function/track-package;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}
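Once the hard cut-over works, you can strangle more gradually. The sketch below uses nginx's split_clients module (placed at the http level) to send a deterministic 10% slice of client IPs to the function and the rest to the monolith. The percentages and addresses are illustrative, and proxy_pass with a variable target has its own quirks, so verify the behaviour against your nginx version before relying on it.
# Hypothetical canary for the same route: 10% of clients hit the function
split_clients "${remote_addr}" $track_target {
    10%     "http://127.0.0.1:8080/function/track-package";
    *       "http://10.0.0.5:8080/api/v1/track";
}

server {
    location = /api/v1/track {
        proxy_pass $track_target;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
Because the split is keyed on $remote_addr, a given client always lands on the same backend, which keeps debugging sane during the migration.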
Pattern 3: The Sovereign FaaS (Self-Hosted OpenFaaS)
This is where the "Pragmatic CTO" shines. Why pay a premium for AWS Lambda or Azure Functions when you can run OpenFaaS on top of Kubernetes, or even a single containerd host?
Running OpenFaaS on CoolVDS infrastructure offers three distinct advantages:
- Cost Predictability: You pay for the VPS resources (CPU/RAM). You don't pay per million invocations. If your function gets hit by a DDoS, your wallet doesn't bleed.
- Performance: Our KVM instances use local NVMe storage. This drastically reduces the I/O latency associated with pulling container images, minimizing cold start times compared to network-attached block storage used by many public clouds.
- Data Residency: You can legally assert that the data processing occurs in Norway (or your chosen EU location), on a server you control.
Pro Tip: For high-throughput functions, tweak the read_timeout and write_timeout settings in the OpenFaaS gateway to avoid premature disconnects during heavy processing tasks like image resizing.
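With faasd (installed below), those gateway settings live as plain environment variables in /var/lib/faasd/docker-compose.yaml. The values here are illustrative, not recommendations:
# Excerpt from /var/lib/faasd/docker-compose.yaml (example values)
services:
  gateway:
    environment:
      - read_timeout=120s      # raise for slow, long-running functions
      - write_timeout=120s
      - upstream_timeout=115s  # keep slightly below read/write timeouts
Restart the service (systemctl restart faasd) after editing for the change to take effect.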
Deploying the Stack
Here is a reproducible setup for a robust FaaS platform on a fresh CoolVDS instance using faasd (a lightweight version of OpenFaaS for those who don't want the complexity of Kubernetes).
# 1. Install containerd (Standard for 2022)
curl -sLSf https://github.com/containerd/containerd/releases/download/v1.6.8/containerd-1.6.8-linux-amd64.tar.gz > /tmp/containerd.tar.gz
tar -xvf /tmp/containerd.tar.gz -C /usr/local/  # the archive unpacks into bin/, so binaries land in /usr/local/bin/
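# Assumption: your distro doesn't ship a containerd systemd unit; the project provides one
curl -sLSf https://raw.githubusercontent.com/containerd/containerd/main/containerd.service \
  -o /etc/systemd/system/containerd.service
systemctl enable --now containerd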
# 2. Install CNI plugins
mkdir -p /opt/cni/bin
curl -sLSf https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz | tar -xz -C /opt/cni/bin
# 3. Install faasd
curl -sfL https://github.com/openfaas/faasd/releases/download/0.1.4/faasd -o /usr/local/bin/faasd
chmod +x /usr/local/bin/faasd
# 4. Initialize
/usr/local/bin/faasd install
# 5. Check status
journalctl -u faasd -f
Once installed, you have a fully functional serverless platform. You can deploy functions using the CLI just like you would with a public cloud, but the "cloud" is your own dedicated slice of hardware.
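For example, a first deployment with faas-cli looks roughly like this. The function name is illustrative; faasd writes the generated admin password to /var/lib/faasd/secrets/basic-auth-password:
# Log in to your own gateway, scaffold a function, and ship it
sudo cat /var/lib/faasd/secrets/basic-auth-password | faas-cli login -u admin --password-stdin
faas-cli new track-package --lang python3
faas-cli up -f track-package.yml   # or stack.yml, depending on your faas-cli version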
The Function Code (Python 3)
Let's look at a simple handler that interacts with a local database. Notice how we use environment variables for secrets; never hardcode credentials.
import os
import json

import psycopg2


def handle(req):
    """Handle a request to the function.

    Args:
        req (str): request body
    """
    # Efficient connection handling is crucial in FaaS
    conn = None
    try:
        conn = psycopg2.connect(
            host=os.getenv("postgres_host", "10.0.0.2"),
            database="orders",
            user=os.getenv("postgres_user"),
            password=os.getenv("postgres_password"),
        )
        cur = conn.cursor()
        # processing logic here...
        payload = json.loads(req)
        cur.close()
        return json.dumps({"status": "success", "processed": True})
    except Exception as e:
        return json.dumps({"status": "error", "message": str(e)})
    finally:
        # Always release the connection, even when processing fails
        if conn is not None:
            conn.close()
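Wiring those variables up happens in the function's stack file. A hypothetical excerpt, assuming the function is named order-handler; for the password itself, an OpenFaaS secret (faas-cli secret create) is a better home than a plain environment entry:
functions:
  order-handler:
    lang: python3
    handler: ./order-handler
    image: registry.example.com/order-handler:latest
    environment:
      postgres_host: "10.0.0.2"
      postgres_user: "orders_app"
A quick smoke test from the shell, since faas-cli invoke reads the request body from stdin:
echo '{"order_id": 42}' | faas-cli invoke order-handler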
The Hardware Reality Check
Software patterns fail without hardware support. Serverless relies heavily on rapid container creation and destruction. This is I/O intensive. If your underlying VPS is running on spinning rust (HDD) or shared SATA SSDs with noisy neighbors, your "serverless" architecture will feel sluggish.
We engineered CoolVDS specifically for these high-churn workloads. By utilizing enterprise-grade NVMe drives and strict KVM isolation, we ensure that when your function needs CPU cycles or disk I/O, the resources are there. No steal time. No waiting.
| Feature | Public Cloud FaaS | Self-Hosted on CoolVDS |
|---|---|---|
| Cold Start Latency | Variable (200ms - 2s) | Consistent (optimized by you) |
| Execution Time Limit | Strict (usually 15 min) | Unlimited |
| Data Location | Opaque (Region based) | Specific Data Center (e.g., Oslo) |
| Cost Model | Per Request (Unpredictable) | Flat Monthly Rate |
Conclusion
Serverless is powerful, but it shouldn't mean powerless. By adopting patterns like Queue-Based Leveling and the Strangler Fig, you build resilience. By hosting your FaaS infrastructure on CoolVDS, you regain control over costs and compliance.
Don't let the cloud giants dictate your architecture. Build a platform that serves your business, not their billing department.
Ready to take control? Deploy a high-performance NVMe KVM instance on CoolVDS today and build your own sovereign serverless stack in minutes.