Beyond the Hype: Practical Serverless Patterns for 2019
Let’s clear the air immediately: Serverless does not mean there are no servers. It means you are renting someone else's servers by the millisecond, often at a premium markup once you scale beyond the free tier. If you are a CTO or a Lead Architect in Oslo right now, you are probably being bombarded with slide decks about how moving everything to Lambda or Azure Functions will magically solve your operational headaches. It won't.
The "NoOps" dream is a lie. You still have to monitor execution times, manage cold starts, and worry about the sheer chaos of distributed debugging. However, the architectural patterns introduced by the serverless movement—event-driven architectures, ephemeral compute, and immutable infrastructure—are brilliant. The trick is implementing them without selling your soul (and your budget) to a hyperscale cloud provider.
The Latency Trap in the Nordics
If your users are sitting in Norway, routing traffic to a public cloud region in Frankfurt or Ireland introduces unnecessary latency. We are talking about physics. Light can only travel so fast. When you rely on a public cloud FaaS (Function as a Service) provider, you also contend with "noisy neighbors" on a massive scale. Their load balancers are black boxes.
For Norwegian businesses, data sovereignty is also critical. Under GDPR (and the watchful eye of Datatilsynet), knowing exactly where your data is processed is non-negotiable. Running your own FaaS platform on local infrastructure solves the latency issue and the compliance headache simultaneously.
Pro Tip: Network latency within Norway via NIX (Norwegian Internet Exchange) is typically under 10ms. Routing to central Europe can triple that. If your application handles real-time data, that difference is palpable.
Pattern: The Hybrid FaaS (Self-Hosted)
In 2019, the most robust pattern for serious engineering teams is Self-Hosted Serverless. We use a container orchestrator (Docker Swarm or Kubernetes) running a framework like OpenFaaS. This gives you the "git push to deploy" experience but runs on hardware you control.
Why do this on a VPS? Predictable costs and raw I/O.
When you run OpenFaaS on a CoolVDS NVMe instance, you aren't fighting for disk IOPS. You get dedicated resources. Below is a battle-tested configuration we used recently to migrate a legacy image processing pipeline from a monolith to functions.
Step 1: The Foundation
We start with a clean KVM instance. Avoid OpenVZ for this; we need proper kernel isolation for Docker. Assuming you are running Ubuntu 18.04 LTS:
# Install Docker CE (Standard 2019 procedure)
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt-get update && apt-get install -y docker-ce docker-ce-cli containerd.io
# Initialize Swarm (Simpler than K8s for small-medium teams)
docker swarm init --advertise-addr $(hostname -i)
Step 2: Deploying the Serverless Framework
We prefer OpenFaaS because it's language-agnostic and fits perfectly into existing Docker workflows. We don't need complex IAM roles just to write a "Hello World".
# Install the CLI
curl -sL https://cli.openfaas.com | sudo sh
# Clone the stack
git clone https://github.com/openfaas/faas
cd faas && ./deploy_stack.sh
Once deployed, your gateway is exposed. But here is where the "Battle-Hardened" part comes in. You strictly need to tune the gateway for high throughput if you expect bursts of traffic. The default settings are too conservative for production.
Step 3: Tuning Nginx for Function Gateways
The OpenFaaS gateway uses Nginx under the hood. If you run this on CoolVDS, you have the CPU cycles to handle high concurrency, so don't let the software throttle you. We often inject a custom configuration:
user nginx;
worker_processes auto; # Let it use all vCPUs provided by CoolVDS
events {
worker_connections 10240;
use epoll;
}
http {
# ... standard includes ...
# OPTIMIZATION: Keepalive connections to upstream functions
upstream gateway {
server 127.0.0.1:8080;
keepalive 64;
}
server {
location / {
proxy_pass http://gateway;
proxy_http_version 1.1;
proxy_set_header Connection "";
# CRITICAL: Increase timeouts for long-running batch jobs
proxy_read_timeout 300s;
proxy_send_timeout 300s;
}
}
}
The "Async" Pattern for Heavy Lifting
A common mistake is treating functions like standard HTTP endpoints. If a user uploads a file and you try to process it synchronously, you will hit timeouts. The better pattern is Async Processing with NATS (built into OpenFaaS).
When you POST to /async-function/my-process, the system acknowledges immediately (HTTP 202), and the work is queued. This is where disk I/O becomes paramount. The queue needs fast storage to persist messages.
On standard spinning rust (HDD) VPS, the queue depth increases, and latency spikes. On CoolVDS NVMe storage, the read/write speeds allow the queue to drain almost as fast as it fills. We've seen a 400% improvement in queue drain times simply by switching from standard SSD cloud instances to high-performance NVMe KVM slices.
Security: The Norway Advantage
By hosting this architecture in Norway, you simplify your GDPR compliance map. You aren't navigating the Privacy Shield minefield. Your customer data stays in Oslo. Furthermore, CoolVDS offers DDoS protection at the network edge. Even if your functions scale up, a volumetric attack won't saturate your uplink.
Is this right for you?
If you just need to host a static blog, this is overkill. But if you are building an event-driven microservices architecture and you refuse to pay the "AWS Tax" or suffer the latency penalties of routing traffic to Frankfurt, this pattern is the answer.
You get the developer experience of serverless with the raw power and cost-efficiency of a VPS. It requires a bit more initial setup, but the long-term stability and performance gains are undeniable.
Don't let cold starts and high latency kill your application's user experience. Spin up a CoolVDS instance today, install Docker, and take back control of your infrastructure.