Serverless Without the Lock-in: Architecture Patterns for 2020
Let’s be honest for a moment. "Serverless" is a marketing term that has done a fantastic job of confusing executives and frustrating engineers. It implies that infrastructure doesn't matter. But as we step into 2020, seasoned CTOs and DevOps leads know the truth: Serverless is just someone else's servers, usually managed by a US giant, often with opaque pricing models and variable latency.
I recently audited a setup for a logistics firm in Oslo. They went "all-in" on AWS Lambda, expecting their operational costs to vanish. Instead, they faced the infamous "cold start" problem—adding 2-3 seconds of latency to critical barcode scans—and a monthly bill that was difficult to forecast. Their data was routed through Frankfurt, raising eyebrows with their legal team over the strict interpretation of Norwegian data handling norms.
This guide isn't about rejecting the serverless paradigm. It's about adopting the patterns—event-driven architecture, ephemeral compute, auto-scaling—without necessarily chaining yourself to a proprietary public cloud ecosystem. We will look at how to implement these patterns on robust, self-hosted infrastructure like KVM-based VPS, specifically tailored for the Nordic market.
The Architecture: Self-Hosted FaaS
In 2020, the most robust pattern for European businesses is not pure public cloud FaaS, but self-hosted FaaS. Tools like OpenFaaS (which runs on Kubernetes or Docker Swarm) or Knative (Kubernetes-only) let you define functions without heavy OS management, while you retain control over the hardware, the network, and data residency.
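To make this concrete, here is a minimal bootstrap sketch for OpenFaaS on a single VPS using Docker Swarm, assuming Docker 19.03 is already installed (the `deploy_stack.sh` script ships with the openfaas/faas repository):

```bash
# Initialise a single-node Swarm and deploy the OpenFaaS stack
docker swarm init

git clone https://github.com/openfaas/faas
cd faas && ./deploy_stack.sh

# Install the CLI used to build and deploy functions
curl -sSL https://cli.openfaas.com | sudo sh
```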
Why this matters for Norway
If your users are in Oslo or Bergen, routing traffic to a hyperscaler's data center in Ireland or Germany adds unavoidable RTT (Round Trip Time). By deploying an OpenFaaS cluster on a provider like CoolVDS in Norway, you cut network latency to single-digit milliseconds. Furthermore, keeping personal data within national borders simplifies compliance with Datatilsynet (the Norwegian Data Protection Authority), a concern that is only growing amid ongoing privacy litigation.
Implementation: The "Gateway Aggregation" Pattern
One of the most effective serverless patterns is using a robust API Gateway to aggregate calls to multiple micro-functions. Instead of a monolithic API, you break endpoints into discrete functions.
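As a sketch, the aggregation layer can be as simple as an Nginx server in front of the OpenFaaS gateway, which exposes every function at `/function/<name>`. The hostname and function names below are illustrative, and TLS configuration is omitted for brevity:

```nginx
# Hypothetical public API that fans out to discrete functions
upstream openfaas_gateway {
    server 127.0.0.1:8080;
}

server {
    listen 80;
    server_name api.example.com;

    # Each public endpoint maps to one function behind the gateway
    location /v1/orders {
        proxy_pass http://openfaas_gateway/function/order-processor;
    }

    location /v1/invoices {
        proxy_pass http://openfaas_gateway/function/invoice-generator;
    }
}
```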
Here is a practical configuration for deploying a high-performance function using OpenFaaS on a standard KVM VPS. We assume you have Docker installed (v19.03 is the current stable release).
1. The Function Definition
Instead of a proprietary AWS SAM template, we use the vendor-neutral `stack.yml`. This defines our function, its image, and its autoscaling labels.
```yaml
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080

functions:
  order-processor:
    lang: node12
    handler: ./order-processor
    image: registry.example.com/order-processor:0.2.1
    labels:
      com.openfaas.scale.factor: 20
      com.openfaas.scale.min: 2
      com.openfaas.scale.max: 15
    environment:
      write_debug: true
      read_timeout: 10s
      write_timeout: 10s
```
Pro Tip: Notice the `com.openfaas.scale.min: 2` label. Unlike public cloud FaaS, running your own infrastructure allows you to keep "warm" replicas running 24/7 without incurring massive extra costs. This eliminates the cold-start latency that plagues generic serverless platforms.
2. The Handler (Node.js 12)
The code remains clean. You focus on logic, not the HTTP server boilerplate.
"use strict"
module.exports = async (event, context) => {
const result = {
status: "received",
timestamp: new Date().toISOString()
};
// Simulate strict DB processing time
// In production, this connects to a local NVMe-backed Redis
await new Promise(resolve => setTimeout(resolve, 50));
return context
.status(200)
.headers({ "Content-Type": "application/json" })
.succeed(result);
}
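With the `faas-cli` installed and pointed at your gateway, the build-and-deploy cycle is a single command (the payload below is purely illustrative):

```bash
# Build the image, push it to the registry, and deploy via the gateway
faas-cli up -f stack.yml

# Smoke-test the function through the gateway
curl -s -d '{"barcode": "7038010001642"}' \
  http://127.0.0.1:8080/function/order-processor
```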
Optimizing the Underlying Iron
The software layer is only half the battle. If your virtualization layer suffers from "noisy neighbors" or I/O wait, your functions will hang regardless of how efficient your code is. This is where the choice of VPS becomes architectural, not just procurement.
In a containerized environment, disk I/O is usually the bottleneck. When 50 functions spin up simultaneously, they all pull images and write logs at once. If you are on spinning rust (HDD) or shared SATA SSDs, your system load spikes and invocation latency follows.
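Don't take a provider's word for it; measure. A quick random-write benchmark with `fio` (the job parameters here are a reasonable starting point, not gospel) will expose a weak disk immediately:

```bash
# 4k random-write test; local NVMe should sustain tens of thousands of IOPS
fio --name=randwrite --ioengine=libaio --direct=1 --rw=randwrite \
    --bs=4k --size=1G --numjobs=4 --runtime=60 --time_based --group_reporting
```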
Kernel Tuning for FaaS Workloads
On your CoolVDS instance (CentOS 7 or Ubuntu 18.04 LTS), you must tune `sysctl.conf` to handle the rapid creation and destruction of network connections typical in serverless architectures.
```ini
# /etc/sysctl.conf optimizations for high-concurrency FaaS

# Increase the range of ephemeral ports
net.ipv4.ip_local_port_range = 1024 65535

# Reuse TCP connections stuck in TIME_WAIT for new outbound connections
net.ipv4.tcp_tw_reuse = 1

# Increase max open files for high concurrency
fs.file-max = 2097152

# Improve virtual memory handling for Redis/FaaS buffers
vm.overcommit_memory = 1
vm.swappiness = 10
```
Apply these changes with `sysctl -p`. These settings are crucial when you are pushing thousands of events per second through an Nginx ingress controller in front of your function provider.
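Note that `fs.file-max` is only the system-wide ceiling; the per-process limit for the Docker daemon is set separately. A minimal sketch, assuming a systemd-managed Docker on CentOS 7 or Ubuntu 18.04:

```bash
# Raise the Docker daemon's open-file limit via a systemd drop-in
sudo mkdir -p /etc/systemd/system/docker.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/docker.service.d/limits.conf
[Service]
LimitNOFILE=1048576
EOF
sudo systemctl daemon-reload && sudo systemctl restart docker
```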
Data Persistence Patterns
Stateless functions are great, but state has to go somewhere. The common mistake is letting every function replica open its own connection to a relational database (like MySQL); under a burst of invocations, this exhausts the database's connection limit almost instantly.
The Fix: connection pooling (reusing clients across invocations), or Redis.
For a robust 2020 stack, deploy a Redis instance on the same private network (VLAN) as your FaaS cluster. CoolVDS offers private networking which isolates this traffic from the public internet, adding a layer of security essential for GDPR compliance.
```bash
# Example: starting a persistent Redis container with NVMe backing
docker run -d --name redis-cache \
  -v /var/lib/redis:/data \
  --network=private_vlan \
  --sysctl net.core.somaxconn=1024 \
  redis:5.0-alpine redis-server --appendonly yes
```
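On the function side, the trick is to create the client once, outside the handler, so each warm replica reuses a single connection instead of opening a new one per invocation. A sketch assuming the `ioredis` npm package and the container name above:

```javascript
"use strict"
const Redis = require("ioredis");

// Created once per replica; reused across all invocations
const redis = new Redis({ host: "redis-cache", port: 6379 });

module.exports = async (event, context) => {
  // Cache the raw order payload, expiring after one hour
  await redis.set(`order:${Date.now()}`, JSON.stringify(event.body), "EX", 3600);
  return context.status(200).succeed({ status: "cached" });
};
```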
The Economic Argument: TCO
Let's run the numbers. A typical heavy workload on AWS Lambda (128MB memory, 30 million executions/month) can easily run into hundreds of dollars once you factor in API Gateway fees and Data Transfer Out costs.
| Cost Factor | Public Cloud FaaS | Self-Hosted (CoolVDS) |
|---|---|---|
| Compute | Pay per 100ms (Expensive at scale) | Flat monthly rate (Predictable) |
| Data Egress | High $/GB markup | Generous TB allowances |
| Latency (to Norway) | 30-50ms (Frankfurt) | <5ms (Local) |
| Cold Starts | Unavoidable without paying extra | Zero (Control your replicas) |
By leveraging a high-performance VPS, you cap your downside. You pay for the resources (CPU Cores, RAM, NVMe), not the invocations. If a script goes rogue and loops infinitely, you don't wake up to a bankruptcy-level bill; you just hit 100% CPU on a capped instance until you kill the process.
Conclusion
Serverless is an architectural pattern, not a billing model. You do not need to surrender control of your infrastructure to benefit from event-driven design. By running OpenFaaS or similar tools on CoolVDS, you gain the best of both worlds: the developer velocity of FaaS and the raw performance, cost-control, and data sovereignty of bare-metal virtualization.
If you are building for the Nordic market, latency and legality are not optional features. They are the foundation. Don't let your architecture be dictated by the defaults of a US cloud provider.
Ready to build a compliant, high-speed serverless cluster? Deploy a KVM NVMe instance on CoolVDS today and get full root access in under 55 seconds.