Serverless Without the Chains: Self-Hosted FaaS Patterns for Nordic Ops
The promise was intoxicating: "Just write code. We handle the rest." We all bought into the serverless dream around 2017. But here we are in August 2024, and the hangover has set in. I recently audited a fintech setup in Oslo where their AWS Lambda bill for image processing was rivalling their payroll. Worse, the cold start latency—spiking up to 2 seconds for Java runtimes—was unacceptable for a real-time payment gateway.
The issue isn't the pattern of serverless (event-driven, ephemeral compute); it's the implementation. Public cloud FaaS implies vendor lock-in, opaque pricing models, and data sovereignty headaches under GDPR and Schrems II. If you are serving Norwegian customers, routing traffic through Frankfurt or Stockholm adds milliseconds you can't afford.
There is a better way. By decoupling the architecture from the provider, we can run FaaS (Functions as a Service) on high-performance VPS Norway infrastructure. You get the developer experience of serverless with the raw I/O performance of local NVMe storage.
The Architecture: The "Iron-FaaS" Hybrid
In a standard public cloud model, you have zero control over the underlying hardware. In the "Iron-FaaS" model, we use a lightweight Kubernetes distribution (like K3s) atop high-frequency compute instances to orchestrate our functions. This gives us predictable costs and, crucially, data locality.
Pattern 1: The Async Worker Pool
This is the bread and butter of backend operations. You have a heavy task (generating PDFs, resizing images, processing CSV uploads) that shouldn't block the main thread. Instead of spinning up a permanent worker server that sits idle 80% of the time, or paying per-millisecond premiums to a hyperscaler, we deploy an event listener.
We use NATS for the message queue and OpenFaaS for the execution. Why OpenFaaS? Because it's container-centric and runs beautifully on standard Linux VPS instances.
Here is how you set up the queue connector on a CoolVDS instance running K3s:
# Create the openfaas and openfaas-fn namespaces first
kubectl apply -f https://raw.githubusercontent.com/openfaas/faas-netes/master/namespaces.yml
helm repo add openfaas https://openfaas.github.io/faas-netes/
helm repo update
helm upgrade openfaas openfaas/openfaas --install \
  --namespace openfaas \
  --set functionNamespace=openfaas-fn \
  --set generateBasicAuth=true \
  --set queueWorker.ackWait=60s
This configuration sets the acknowledgment wait time to 60 seconds, preventing long-running tasks from timing out prematurely—a common pain point with default managed serverless settings.
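To sanity-check the NATS-backed queue, invoke a function through the gateway's async route. The gateway answers 202 Accepted immediately and the queue-worker drains the job in the background. The function name resize-image below is a placeholder; expose the gateway locally with a port-forward first:
kubectl port-forward -n openfaas svc/gateway 8080:8080 &
# Synchronous: blocks until the function finishes
curl -d '{"image": "invoice.png"}' http://127.0.0.1:8080/function/resize-image
# Asynchronous: returns 202 immediately, work is queued on NATS
curl -d '{"image": "invoice.png"}' http://127.0.0.1:8080/async-function/resize-image
# Optional: have the queue-worker POST the result to a callback URL when done
curl -d '{"image": "invoice.png"}' \
  -H "X-Callback-Url: http://collector.internal/hooks/resize" \
  http://127.0.0.1:8080/async-function/resize-image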
Pattern 2: The "Sidecar" Proxy
Sometimes you need to transform requests before they hit your legacy monolith. Implementing this in a monolithic Nginx config is a nightmare. Using a serverless function as a sidecar proxy is cleaner.
In this pattern, the request hits your load balancer, which forwards specific paths (`/api/v2/transform`) to your function cluster. The function sanitizes the payload and forwards it to the legacy backend.
Pro Tip: Latency is the enemy here. Public cloud FaaS introduces a network hop that can add 50-100ms. By running this on CoolVDS instances within the same local network or even the same hypervisor node, you reduce that internal latency to near-zero.
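Here is a minimal sketch of such a transform function using the OpenFaaS python3-http template; the legacy backend URL and the header being stripped are assumptions for illustration:
import os
import requests  # declare in the function's requirements.txt

# Hypothetical legacy endpoint; inject the real one via the environment
LEGACY_URL = os.getenv("LEGACY_URL", "http://legacy-monolith.internal/api/v1/orders")

def handle(event, context):
    """Sanitize the inbound payload, then forward it to the legacy backend."""
    body = event.body.decode("utf-8") if isinstance(event.body, bytes) else event.body
    # Example sanitization: drop a header the monolith must never receive
    headers = {k: v for k, v in event.headers.items() if k.lower() != "x-internal-token"}
    resp = requests.post(LEGACY_URL, data=body, headers=headers, timeout=5)
    # Relay the backend's response to the original caller
    return {"statusCode": resp.status_code, "body": resp.text}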
The Technical Implementation: Building the Stack
Let's build a robust FaaS node. We aren't using heavy OpenShift here; we want efficiency. We will use K3s because it strips out the bloat of standard Kubernetes, making it perfect for VPS environments with 4GB to 8GB RAM.
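For reference, the K3s install itself is a single command; run it once the OS tuning below is in place:
# Installs K3s as a systemd service: API server, scheduler, and containerd in one binary
curl -sfL https://get.k3s.io | sh -
# Confirm the node reports Ready before deploying OpenFaaS
sudo k3s kubectl get nodes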
Step 1: The Base OS Tuning
Before installing anything, we must optimize the Linux kernel for high-throughput networking. Default sysctl settings are too conservative for event-driven architectures.
Edit /etc/sysctl.conf:
# Increase connection tracking for heavy concurrent function invocations
net.netfilter.nf_conntrack_max = 131072
# Optimize for low latency (note: a no-op on kernels 4.14+, harmless to keep)
net.ipv4.tcp_low_latency = 1
# Allow more local port range for outbound connections from functions
net.ipv4.ip_local_port_range = 1024 65535
# Boost TCP buffer sizes for faster data transfer
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
Apply with sysctl -p.
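It is worth confirming the new limits actually took effect, and keeping an eye on conntrack usage once functions start handling real traffic:
sysctl net.netfilter.nf_conntrack_max    # should report 131072
# Live usage vs. ceiling; if count creeps toward max, raise it further
cat /proc/sys/net/netfilter/nf_conntrack_count
cat /proc/sys/net/netfilter/nf_conntrack_max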
Step 2: Deploying the Function Store
We need a place to store our function code. MinIO is the standard S3-compatible object storage for self-hosted setups, and it reads and writes directly against the local NVMe volume.
docker run -d -p 9000:9000 -p 9001:9001 \
--name minio \
-v /mnt/nvme/data:/data \
-e "MINIO_ROOT_USER=cooladmin" \
-e "MINIO_ROOT_PASSWORD=SuperSecretKey2024!" \
quay.io/minio/minio server /data --console-address ":9001"
Note the volume mount /mnt/nvme/data. If you are not using NVMe storage, don't bother. Rotating rust (HDD) cannot handle the random I/O spikes generated by hundreds of containers starting and stopping simultaneously. This is why CoolVDS enforces NVMe on all high-performance tiers.
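With MinIO running, create a bucket for your function artifacts using the mc client (the alias and bucket names below are arbitrary):
# Quote the password: '!' would otherwise trigger shell history expansion
mc alias set local http://127.0.0.1:9000 cooladmin 'SuperSecretKey2024!'
mc mb local/functions        # bucket for packaged function code
mc ls local                  # verify it exists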
Step 3: The Function Definition
Let's look at a stack.yml for a Python function that handles GDPR deletion requests—a critical requirement for Norwegian businesses.
version: 1.0
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080
functions:
  gdpr-cleanup:
    lang: python3-http
    handler: ./gdpr-cleanup
    image: registry.coolvds-user.no/gdpr-cleanup:latest
    environment:
      write_debug: true
      combine_output: false
    secrets:
      - db-password
    labels:
      com.openfaas.scale.min: "1"
      com.openfaas.scale.max: "20"
Note that OpenFaaS reads the scaling bounds from labels, not annotations. The label com.openfaas.scale.min: 1 is crucial. It keeps one instance "warm" at all times. This eliminates cold starts entirely, something that costs a fortune on AWS Lambda (Provisioned Concurrency) but costs next to nothing on your own VPS.
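For completeness, here is a sketch of the matching handler in the python3-http template. OpenFaaS mounts secrets as files under /var/openfaas/secrets/; the request field and the deletion logic are placeholders:
# gdpr-cleanup/handler.py
import json

# OpenFaaS mounts each secret as a file named after the secret
with open("/var/openfaas/secrets/db-password") as f:
    DB_PASSWORD = f.read().strip()

def handle(event, context):
    """Purge all personal data tied to the subject ID in the request."""
    body = json.loads(event.body)
    subject_id = body["subject_id"]  # hypothetical request field
    # ... connect to the database with DB_PASSWORD and delete the rows here ...
    return {
        "statusCode": 200,
        "body": json.dumps({"deleted": True, "subject": subject_id}),
    }
Build, push, and deploy the whole stack in one step with faas-cli up -f stack.yml.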
Why Infrastructure Matters: The CoolVDS Factor
You might ask, "Why not just use DigitalOcean or Linode?" It comes down to CPU steal time and noisy neighbors. In a shared cloud environment, your "vCPU" is often a sliver of a thread competing with fifty other tenants. When your function needs to wake up instantly to process a webhook, CPU wait time is unacceptable.
CoolVDS architectures are designed for low latency and high stability. We utilize KVM virtualization, which provides stricter isolation than container-based virtualization (like OpenVZ or LXC). When you allocate 4 cores on a CoolVDS instance, those cycles are yours. This consistency is mandatory for FaaS, where execution time is the metric of success.
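You can verify this on any Linux guest: the "st" column in vmstat reports the percentage of time the hypervisor withheld CPU from your VM. On a properly provisioned host it should sit at zero:
vmstat 1 5    # last column "st" = steal time; non-zero means a noisy host
# Alternatively, watch the %st field on the Cpu(s) line in top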
Data Sovereignty & The Norwegian Context
Since the Schrems II ruling, transferring personal data to US-owned cloud providers has become a legal minefield. Even if the server is in Oslo, if the company is US-based (like Amazon or Microsoft), they fall under the CLOUD Act. Hosting your serverless architecture on a European provider like CoolVDS, with servers physically located in Norway and owned by a European entity, drastically simplifies your GDPR compliance stance.
Conclusion
Serverless is a powerful architectural pattern, but it shouldn't hold your budget or your data hostage. By shifting FaaS workloads to a self-hosted Kubernetes environment on robust VPS infrastructure, you reclaim control. You get the sub-millisecond internal latency needed for modern apps, the DDoS protection inherent in our network edge, and the peace of mind that comes with data sovereignty.
Stop renting execution time at a markup. Build your own execution engine.
Ready to build your Iron-FaaS cluster? Deploy a high-frequency NVMe instance on CoolVDS in under 55 seconds and start scaling on your own terms.