Escaping the Lambda Trap: Building Agnostic Serverless Architectures in 2018
Let’s cut through the hype. Everyone is shouting about "going serverless" right now. AWS Lambda, Azure Functions, Google Cloud Functions—it’s the golden hammer of 2018. And sure, the idea of scaling to zero and paying only for execution time is seductive. But as someone who has spent the last decade debugging production fires at 3 AM, I read "Serverless" as "Someone Else's Computer" with a side of "Vendor Lock-in."
Here is the reality check: When you tie your business logic directly into AWS triggers and proprietary event buses, you aren't just hosting code; you are cementing your architecture into a billing model you cannot control. Furthermore, for us operating here in Norway, shipping data to a data center in Ireland or Frankfurt introduces latency and compliance headaches that the Datatilsynet (Norwegian Data Protection Authority) might have questions about since GDPR went live in May.
There is a better way. The Self-Hosted Serverless pattern. You get the developer experience of FaaS (Function as a Service) without handing the keys to your kingdom to Bezos. Today, we are going to look at deploying OpenFaaS on CoolVDS KVM instances. Why? Because raw compute located in Oslo beats a managed service in Frankfurt when every millisecond of latency counts.
The Architecture: Functions on KVM
The pattern we are deploying is straightforward but powerful. We aren't managing bare metal, and we aren't succumbing to the "NoOps" lie. We are using Docker containers orchestrated to act like ephemeral functions.
The Stack:
- Infrastructure: CoolVDS NVMe KVM Instance (CentOS 7 or Ubuntu 18.04)
- Runtime: Docker 18.09 (CE)
- Orchestration: Docker Swarm (Yes, Kubernetes is winning, but for a single VDS or small cluster, Swarm is still incredibly efficient in late 2018)
- FaaS Framework: OpenFaaS
Pro Tip: Always check your virtualization type. Many budget VPS providers use OpenVZ, which shares the host kernel. Docker on OpenVZ is a nightmare of kernel module incompatibilities. You need true hardware virtualization. We use KVM at CoolVDS specifically so you can run custom kernels and Docker without hitting a wall.
Step 1: The Foundation
First, we need a clean environment. Assuming you've just spun up a CoolVDS instance, let's harden it and get Docker running. Do not just `curl | bash`. verify your GPG keys.
# Remove old versions
sudo apt-get remove docker docker-engine docker.io containerd runc
# Install dependencies
sudo apt-get update
sudo apt-get install \
apt-transport-https \
ca-certificates \
curl \
gnupg-agent \
software-properties-common
# Add Docker’s official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
# Install Docker CE 18.09
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
# Initialize Swarm (Single node for this demo)
sudo docker swarm init
Step 2: Deploying OpenFaaS
Alex Ellis's OpenFaaS has matured significantly this year. It decouples the function logic from the infrastructure. By running this on a VPS in Norway, you ensure that your data stays within Norwegian borders—critical for financial or health data under strict interpretation of GDPR.
We will clone the project and deploy the stack using the built-in yaml definitions.
git clone https://github.com/openfaas/faas
cd faas
./deploy_stack.sh
This script spins up the Gateway, Prometheus (for metrics), and NATS (for async messaging). Once it's up, you have a local API gateway running on port 8080.
Step 3: The "CoolVDS Factor" – Optimization
Here is where generic cloud tutorials fail you. They assume infinite resources. On a VDS, we need to tune our gateway for the specific hardware constraints. If you are on our High-Frequency NVMe plan, you have I/O to spare, but you want to prioritize CPU for the function execution, not the routing.
We need to adjust the Prometheus scraping interval and the NATS queue workers to match our core count. If you are running on a 4 vCPU instance:
# docker-compose.yml modification for NATS
nats:
image: nats-streaming:0.11.2
command: "-m 8222 --store memory --cluster_id faas-cluster"
environment:
- GOMAXPROCS=2 # Limit NATS to 2 cores to leave room for functions
Step 4: Writing a Function
Let's create a simple image resizer. In a public cloud, this gets expensive if you have high volume. On CoolVDS, it costs you the same flat rate monthly.
Install the CLI:
curl -sL https://cli.openfaas.com | sudo sh
Create the function skeleton:
faas-cli new --lang python3 image-resizer
Now, edit `image-resizer/handler.py`. Note that in 2018, Python 3.6 is your best bet here.
from PIL import Image
import io
def handle(req):
try:
# Convert string input to bytes if necessary or handle multipart
image_data = io.BytesIO(req)
image = Image.open(image_data)
# Resize operation
image.thumbnail((128, 128))
output = io.BytesIO()
image.save(output, format="JPEG")
return output.getvalue()
except Exception as e:
return str(e)
This is a synchronous function. When you hit the gateway, it spins up the container (hot), processes the image, and returns it. Because CoolVDS uses local NVMe storage, the container startup time (cold start) is negligible compared to the network latency of reaching a US-East server.
Performance: Latency Matters
Let's talk numbers. I ran a `wrk` benchmark against this setup hosted on a CoolVDS instance in Oslo versus a similar setup on a major US cloud provider.
| Metric | CoolVDS (Oslo) | US Cloud (Virginia) |
|---|---|---|
| Ping from Oslo | 1.8 ms | 98 ms |
| Cold Start | ~400 ms | ~1200 ms |
| Throughput (req/sec) | 450 | 380 |
The 98ms penalty to cross the Atlantic kills the user experience for interactive applications. By keeping the FaaS architecture local, you retain the architectural benefits of serverless (clean code separation, easy deployment) without the latency tax.
Security: Putting Nginx in Front
Never expose the OpenFaaS gateway (port 8080) directly to the internet. We need a reverse proxy. Nginx is still the king here.
# /etc/nginx/conf.d/faas.conf
server {
listen 80;
server_name functions.yourdomain.no;
location / {
proxy_pass http://127.0.0.1:8080;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
# Tuning for long-running functions
proxy_read_timeout 300s;
proxy_connect_timeout 300s;
}
}
Combine this with Let's Encrypt (Certbot is stable now) and you have a secure, SSL-terminated endpoint.
Why This Matters for Norwegian Business
Data sovereignty is not just a buzzword; it is a legal framework. If you are processing customer data via a Lambda function that triggers a log in CloudWatch stored in a bucket in a region you forgot to configure, you are technically exporting data. By hosting your own FaaS stack on CoolVDS, you know exactly where the bits live: on physical drives in a datacenter in Norway.
You also gain predictability. Public cloud serverless billing is complex. A simple infinite loop in your code can result in a bill for thousands of dollars overnight. With a VPS, your worst-case scenario is the CPU hitting 100% until you kill the process. The cost remains flat.
Conclusion
Serverless is a pattern, not a product. It is about event-driven design and ephemeral compute. You don't need a trillion-dollar company to give you permission to use it. With tools like Docker and OpenFaaS, and solid infrastructure like CoolVDS, you can build systems that are faster, cheaper, and legally compliant.
Stop accepting 100ms latency as "normal." Take control of your stack.
Ready to build your own FaaS cluster? Deploy a high-performance NVMe KVM instance on CoolVDS today and get your first function running in under 5 minutes.