Serverless Architectures in 2020: Escaping the Public Cloud Lock-in Trap
There is a dangerous misconception circulating in CTO circles from Oslo to Trondheim: that "Serverless" equals "AWS Lambda" or "Azure Functions." This fallacy is expensive. It costs you in data sovereignty, it costs you in "cold start" latency, and eventually, it costs you in a monthly bill that scales far faster than your revenue.
As a Systems Architect who has migrated three major platforms off public cloud FaaS (Function-as-a-Service) this year alone, I’m here to tell you that Serverless is an operational model, not a product you buy. It is about abstracting infrastructure, not abandoning control over it.
For Norwegian businesses dealing with sensitive customer data and the looming uncertainty regarding the Privacy Shield framework (the Schrems II hearings are making everyone nervous), relying entirely on US-owned cloud providers is a strategic risk. The pragmatic alternative? Self-hosted Serverless on high-performance KVM infrastructure.
The Latency & Compliance Reality Check
Let's talk physics. If your users are in Norway, and your "Serverless" functions are firing up in an AWS data center in Frankfurt or Stockholm, you are fighting the speed of light. You are also fighting the "cold start" penalty—the time it takes for the provider to provision a container for your code. On public clouds, you have zero control over the underlying hardware. You might get a noisy neighbor, or a slow spinning disk.
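You can put numbers on this yourself. A rough sketch using curl's timing variables; the endpoint is a placeholder for your own function URL:

# First call hits the cold start; run it twice and compare.
$ curl -s -o /dev/null \
    -w "DNS: %{time_namelookup}s  Connect: %{time_connect}s  TTFB: %{time_starttransfer}s  Total: %{time_total}s\n" \
    https://functions.example.com/api/resize
# Run from an office in Oslo, the gap between a local box and Frankfurt shows up immediately in TTFB.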
When we deploy onto a provider like CoolVDS, we control the stack. We use KVM (Kernel-based Virtual Machine) virtualization, which guarantees that our CPU cycles are ours alone. More importantly, we get direct access to local NVMe storage. In a serverless architecture where containers are created and destroyed in milliseconds, disk I/O is the bottleneck nobody talks about.
Pro Tip: NVMe storage isn't just a luxury; for FaaS, it's a requirement. The difference between initializing a Node.js runtime on a SATA SSD versus NVMe can be the difference between a 200ms response and a 50ms response. CoolVDS NVMe instances effectively eliminate I/O wait times during high-concurrency function scaling.
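Don't take the datasheet's word for it. A quick fio run tells the truth (a sketch; point --directory at the filesystem Docker actually uses on your instance):

# 4k random reads approximate a runtime pulling in thousands of small files at cold start
$ fio --name=coldstart-sim --directory=/var/lib/docker --ioengine=libaio \
    --rw=randread --bs=4k --size=1G --numjobs=4 --iodepth=32 \
    --runtime=30 --time_based --group_reporting
# Healthy NVMe posts six-figure IOPS here; a throttled cloud volume will not.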
Architecture Pattern: The "Private FaaS" with OpenFaaS
Instead of locking your business logic into proprietary `template.yaml` files (AWS SAM, which compiles down to CloudFormation), use OpenFaaS. It allows you to package any binary or container as a serverless function on top of Docker Swarm or Kubernetes. It runs anywhere, including a cost-effective VPS hosted in Norway.
Here is a production-ready setup we deployed last month for an image processing pipeline. It uses Docker Swarm for orchestration because, frankly, Kubernetes is often overkill for teams smaller than 20 people.
1. The Infrastructure Layer
We provisioned three CoolVDS instances (Ubuntu 18.04 LTS) connected via a private network. This ensures internal traffic between the gateway and the workers doesn't hit the public interface.
2. The OpenFaaS Deployment
Deploying the stack is trivial. First, initialize the Swarm on your manager node:
$ docker swarm init --advertise-addr <PRIVATE_IP>
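Next, grab the worker join token and run the command it prints on the other two instances (the token and IP below are placeholders):

$ docker swarm join-token worker
# Execute the printed command on each worker node:
$ docker swarm join --token SWMTKN-1-... 10.0.0.10:2377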
Then, deploy the OpenFaaS stack. Note the placement constraints: the gateway and its Swarm provider stay on the manager, while the async queue worker (and your heavy compute functions) land on the workers.
version: '3.7'
services:
  gateway:
    image: openfaas/gateway:0.18.17
    ports:
      - "8080:8080"
    networks:
      - functions
    environment:
      # The gateway delegates deployments to the Swarm provider below
      functions_provider_url: "http://faas-swarm:8080/"
      faas_nats_address: "nats"
      faas_nats_port: 4222
    deploy:
      placement:
        constraints: [node.role == manager]
      resources:
        limits:
          memory: 200M
        reservations:
          memory: 100M

  # The provider turns function deployments into Swarm services;
  # it needs the Docker socket, so it lives on the manager
  faas-swarm:
    image: openfaas/faas-swarm:0.9.0
    networks:
      - functions
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    deploy:
      placement:
        constraints: [node.role == manager]

  # NATS Streaming backs the asynchronous invocation queue
  nats:
    image: nats-streaming:0.17.0
    command: "--store memory --cluster_id faas-cluster"
    networks:
      - functions
    deploy:
      placement:
        constraints: [node.role == manager]

  # The queue worker handles asynchronous processing
  queue-worker:
    image: openfaas/queue-worker:0.9.0
    networks:
      - functions
    environment:
      faas_nats_address: "nats"
      faas_nats_port: 4222
    deploy:
      placement:
        constraints: [node.role == worker]

networks:
  functions:
    driver: overlay
    attachable: true
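Save the file as `docker-compose.yml` (any name works) and push it to the Swarm. Deploying a first function through faas-cli then looks roughly like this; the function name and template are illustrative, and `faas-cli up` assumes a registry your nodes can pull from:

$ docker stack deploy -c docker-compose.yml func
$ export OPENFAAS_URL=http://127.0.0.1:8080
# Scaffold a function from any container image or binary
$ faas-cli new resize-image --lang dockerfile
# Build, push, and deploy in one step
$ faas-cli up -f resize-image.yml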
3. Optimizing the Nginx Gateway
If you are exposing this to the web, you need Nginx in front. A common mistake is leaving the default buffer sizes, which causes failures when functions return large JSON payloads or images. On your CoolVDS ingress node, tweak `nginx.conf` specifically for FaaS workloads:
http {
    # ... basic settings ...

    # Essential for handling larger payloads typical in batch processing
    client_max_body_size 50M;
    client_body_buffer_size 50M;

    # Keepalives reduce the TCP handshake overhead for frequent API calls
    upstream openfaas_upstream {
        server 127.0.0.1:8080;
        keepalive 32;
    }

    server {
        listen 80;
        server_name functions.your-domain.no;

        location / {
            proxy_pass http://openfaas_upstream;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;

            # Larger proxy buffers stop nginx from choking on big function responses
            proxy_buffer_size 64k;
            proxy_buffers 16 64k;

            # Aggressive timeouts. If a function hangs, kill it fast.
            proxy_read_timeout 60s;
            proxy_connect_timeout 5s;
        }
    }
}
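Always validate before reloading; a typo here takes down your only ingress:

$ nginx -t && systemctl reload nginx
# Sanity check through the proxy: the gateway lists deployed functions
$ curl -i http://functions.your-domain.no/system/functions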
The Economic Argument: TCO Analysis
Let's run the numbers. A typical AWS Lambda setup charges you per 100ms of execution and per request. This is fine for idle apps. But for a sustained workload—say, an e-commerce backend processing order events—the costs explode.
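Some illustrative back-of-envelope math, using Lambda's 2020 list prices of roughly $0.20 per million requests and $0.0000166667 per GB-second: a 1 GB function averaging 500ms across 50 million monthly events consumes 25 million GB-seconds, which is about $417 in compute plus $10 in request fees, before API Gateway or egress charges even enter the picture. That is every month, for a single workload.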
| Cost Factor | Public Cloud FaaS | Private FaaS on CoolVDS |
|---|---|---|
| Compute | Variable (Expensive at scale) | Fixed (Predictable) |
| Data Egress | High markup | Included / Low cost |
| Storage I/O | Throttled by tier | Dedicated NVMe speeds |
| Compliance | US Cloud Act Risk | Norwegian Sovereignty |
With a fixed-cost Virtual Dedicated Server, you know exactly what your invoice will be at the end of the month. You can run your functions 24/7 at 100% CPU utilization without penalty. CoolVDS offers plans where the resource-to-price ratio simply makes public cloud math look absurd for steady-state workloads.
Handling State in a Stateless World
The biggest challenge in serverless isn't code; it's state. Functions are ephemeral. Where do you put the data? In 2020, the answer is Redis, but it must be close to the compute.
Running a Redis instance on the same LAN as your function workers is critical. We use `sysctl` tuning to ensure the Linux kernel handles the high rate of TCP connections that FaaS architectures generate.
# Add to /etc/sysctl.conf on your Database Node
# Allow more connections to be queued
net.core.somaxconn = 65535
# Reuse sockets in TIME_WAIT state for new connections
net.ipv4.tcp_tw_reuse = 1
# Increase ephemeral port range for heavy outgoing connections
net.ipv4.ip_local_port_range = 1024 65000
Apply these with `sysctl -p`. These settings prevent connection exhaustion when thousands of functions try to hit your database simultaneously.
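Two quick sanity checks afterwards: confirm the kernel accepted the values, then measure Redis round-trip time from a worker node (the private IP is an example):

$ sysctl net.core.somaxconn net.ipv4.tcp_tw_reuse
# Sample latency to Redis over the private LAN
$ redis-cli -h 10.0.0.20 --latency
# Expect sub-millisecond averages; anything higher means traffic is leaving the rack.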
Conclusion: Own Your Architecture
The allure of "NoOps" is a myth. Someone is always managing the server; the question is whether you are paying them a premium to hide it from you. By architecting your own serverless platform using tools like OpenFaaS on robust, local infrastructure, you gain three things: Speed (thanks to local peering and NVMe), Privacy (keeping data within Norwegian borders), and Cost Control.
Don't let the cloud giants dictate your architecture. Build something that lasts.
Ready to build a private FaaS cluster that screams? Deploy a high-frequency KVM instance on CoolVDS today and see what unthrottled NVMe can do for your cold starts.