The Serverless Myth: Implementing High-Performance FaaS Patterns on Bare Metal in Norway
Let’s get one thing straight immediately: "Serverless" is a lie. There are always servers. The only question is whether you control them or rent execution time from a giant conglomerate in Frankfurt or Virginia at a 400% markup.
I recently audited the stack of an e-commerce client based here in Oslo. They bought the hype and moved their entire image-processing pipeline to a managed function-as-a-service (FaaS) provider. The result? A monthly bill that looked like a mortgage payment, plus random 500ms latency spikes whenever their functions hit "cold starts."
In the Nordics, where we pride ourselves on engineering efficiency and data sovereignty, we can do better. We can build Serverless patterns without the Serverless tax.
By leveraging container orchestration on high-performance VPS infrastructure, specifically utilizing the low-latency peering available in Norway, we gain control, compliance (hello, Datatilsynet), and raw speed.
The Architecture: Self-Hosted FaaS
As of late 2019, the ecosystem has matured. We don't need to manually stitch scripts together. We have OpenFaaS. It allows us to containerize functions and deploy them on top of Docker Swarm or Kubernetes. This gives us the event-driven architecture developers love, with the fixed-cost predictability CFOs demand.
Why run this on a VPS instead of a dedicated server? Scalability. You can spin up a CoolVDS KVM instance in seconds to add worker nodes, rather than waiting days for hardware provisioning.
The Stack
- Infrastructure: CoolVDS NVMe Instances (CentOS 7 or Ubuntu 18.04 LTS)
- Orchestrator: Docker Swarm (or K3s if you're feeling adventurous with lightweight Kubernetes)
- FaaS Framework: OpenFaaS
- Gateway: Nginx
Step 1: The Foundation
Latency kills conversion. If your servers are in Ireland and your customers are in Trondheim, you are fighting physics. You will lose. Hosting on local infrastructure ensures your ping remains single-digit.
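To put numbers on the physics, here is a back-of-envelope calculation. The distances and the ~200,000 km/s figure for light in optical fiber are rough assumptions for illustration, not measurements:

```python
# Back-of-envelope: the minimum round-trip time physics allows over fiber.
# Distances are rough great-circle estimates (assumptions, not measurements).
FIBER_KM_PER_S = 200_000  # light in fiber travels at roughly 2/3 of c

def min_rtt_ms(distance_km: float) -> float:
    """Theoretical floor for round-trip latency over a fiber path."""
    return 2 * distance_km / FIBER_KM_PER_S * 1000

print(f"Oslo -> Dublin:    {min_rtt_ms(1280):.1f} ms")  # ~12.8 ms before any routing
print(f"Oslo -> Trondheim: {min_rtt_ms(390):.1f} ms")   # ~3.9 ms
```

Real-world routing, queuing, and TLS handshakes only add to these floors, which is why hosting close to your users matters so much.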
First, we initialize the swarm on our manager node. I recommend a CoolVDS instance with at least 4GB RAM to handle the orchestration overhead smoothly.
# On the Manager Node (CoolVDS Instance 1)
$ docker swarm init --advertise-addr $(hostname -i)
# Output will provide a join token for workers
Swarm initialized: current node (dxn1...) is now a manager.
We use KVM virtualization here. This is non-negotiable. Containers on OpenVZ or LXC share the host kernel too intimately. For security isolation and proper resource allocation (avoiding the "noisy neighbor" effect common in cheap hosting), KVM is the industry standard.
Step 2: Deploying the Serverless Framework
We will deploy OpenFaaS using `arkade` or plain `git` cloning. For transparency, let's look at the raw stack deployment.
$ git clone https://github.com/openfaas/faas
$ cd faas
$ ./deploy_stack.sh
This creates a gateway, a Prometheus instance for monitoring metrics, and the NATS queue system for asynchronous processing. It’s a complete event-driven ecosystem.
Pro Tip: Monitor your `innodb_buffer_pool_size` if you attach a SQL database to this stack. Even in a microservices world, default MySQL settings are garbage. Set it to roughly 70% of available RAM on a dedicated database node.
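A trivial helper makes the arithmetic explicit. The 70% figure is the rule of thumb from the tip above and assumes MySQL is the only major process on the node:

```python
# Rough helper for sizing innodb_buffer_pool_size at ~70% of RAM on a
# dedicated database node. Pure arithmetic; adjust the ratio to taste.
def buffer_pool_size(total_ram_gb: float, ratio: float = 0.70) -> str:
    """Return a my.cnf-style value, rounded down to whole megabytes."""
    mb = int(total_ram_gb * 1024 * ratio)
    return f"innodb_buffer_pool_size = {mb}M"

print(buffer_pool_size(4))  # e.g. a 4GB CoolVDS instance
```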
Step 3: The Function Pattern
Let's write a Python 3 function that resizes images—a classic CPU-bound task that gets expensive on public cloud FaaS.
# handler.py
import io

from PIL import Image


def handle(req):
    try:
        # req arrives from the gateway as binary image data
        image = Image.open(io.BytesIO(req))
        fmt = image.format or "PNG"  # format can be None; fall back safely
        image.thumbnail((128, 128))
        img_byte_arr = io.BytesIO()
        image.save(img_byte_arr, format=fmt)
        return img_byte_arr.getvalue()
    except Exception as e:
        # In production, log the traceback and return a proper error response
        return str(e)
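Before pushing this anywhere, it is worth a local smoke test. This standalone harness inlines a copy of `handle()` so the file runs on its own, and assumes Pillow is installed (`pip install Pillow`):

```python
# Standalone smoke test for the resize handler. handle() is copied from
# handler.py so this script is self-contained; assumes Pillow is installed.
import io

from PIL import Image


def handle(req):
    try:
        image = Image.open(io.BytesIO(req))
        fmt = image.format or "PNG"  # format can be None; fall back safely
        image.thumbnail((128, 128))
        out = io.BytesIO()
        image.save(out, format=fmt)
        return out.getvalue()
    except Exception as e:
        return str(e)


# Build a synthetic 512x512 PNG entirely in memory.
buf = io.BytesIO()
Image.new("RGB", (512, 512), color=(30, 90, 200)).save(buf, format="PNG")

result = handle(buf.getvalue())
thumb = Image.open(io.BytesIO(result))
print(thumb.size)  # -> (128, 128)
```

Feeding it garbage bytes exercises the error path: `handle(b"not an image")` returns the exception text as a string rather than crashing the container.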
Now, the configuration file `stack.yml` defines how this behaves on our infrastructure.
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080

functions:
  img-resize:
    lang: python3
    handler: ./img-resize
    image: myrepo/img-resize:latest
    environment:
      write_debug: true
    # Controlling resource limits is crucial on a VPS
    limits:
      memory: 128m
      cpu: 0.2
The NVMe Factor: Why Disk I/O Matters
This is where most implementations fail. When you deploy a function, the container runtime must pull the image, unpack layers, and mount the filesystem.
On a standard HDD or SATA SSD, this creates an I/O bottleneck known as "IO wait." Your CPU sits idle, waiting for the disk to wake up. This manifests as latency.
CoolVDS uses NVMe storage. The throughput difference is not trivial: think 3000+ MB/s sequential reads versus roughly 500 MB/s on SATA, a six-fold gap. When you are spinning up 50 containers simultaneously to handle a traffic spike, that storage speed is the difference between a 200 OK and a 504 Gateway Timeout.
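If you want to verify what a node can actually sustain, a crude probe like this gives a ballpark figure. For serious benchmarking use `fio` with direct I/O, since the page cache can flatter results like these:

```python
# Crude sequential-write probe: not a substitute for fio, but enough to
# spot an I/O-starved node. Writes N megabytes with fsync, reports MB/s.
import os
import tempfile
import time


def write_throughput_mb_s(size_mb: int = 64) -> float:
    chunk = b"\0" * (1024 * 1024)  # 1 MB of zeroes
    fd, path = tempfile.mkstemp()
    try:
        start = time.perf_counter()
        with os.fdopen(fd, "wb") as f:
            for _ in range(size_mb):
                f.write(chunk)
            f.flush()
            os.fsync(f.fileno())  # force data to disk, not the page cache
        return size_mb / (time.perf_counter() - start)
    finally:
        os.remove(path)


print(f"{write_throughput_mb_s():.0f} MB/s")
```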
| Feature | Managed Public Cloud FaaS | Self-Hosted on CoolVDS (KVM) |
|---|---|---|
| Cost Model | Per invocation (Unpredictable) | Flat monthly rate (Predictable) |
| Data Location | Usually Frankfurt/Dublin | Norway (GDPR Compliant) |
| Cold Start | Variable (100ms - 2s) | Tunable (Keep containers warm) |
| Execution Time Limit | Usually 5-15 mins | Unlimited |
Security and Compliance in 2019
With GDPR in full swing, where your data lives is a legal liability. The US CLOUD Act has made many European CTOs nervous about hosting sensitive customer data on US-owned platforms, even if the datacenter is in Europe.
By utilizing a Norwegian VPS provider, you add a layer of sovereignty. You know exactly where the physical drive sits. You can harden the firewall using `iptables` or `ufw` to only accept traffic from your load balancers.
# Hardening the worker node
$ ufw default deny incoming
$ ufw allow 22/tcp
$ ufw allow 2377/tcp # Docker Swarm cluster management
$ ufw allow 7946/tcp # Container network discovery
$ ufw allow 7946/udp # Container network discovery
$ ufw allow 4789/udp # Overlay network (VXLAN) traffic
$ ufw enable
Conclusion
Serverless is a powerful architectural pattern, but it shouldn't cost you your budget or your control. By combining the orchestration capabilities of OpenFaaS with the raw, low-latency power of CoolVDS NVMe instances, you build a platform that is robust, legally safer, and incredibly fast.
Don't let slow I/O kill your application's performance. Spin up a KVM instance in Oslo today and take back control of your infrastructure.
Deploy your private FaaS cluster on CoolVDS in under 55 seconds.