The "Serverless" Lie: Why You Should Own Your Functions
Let’s clear the air immediately: Serverless is just a marketing term for "someone else's servers." And usually, those servers belong to a massive US conglomerate that charges you a premium for abstraction while holding your data hostage under the US CLOUD Act. If you are operating here in Norway or anywhere in the EEA, the recent implementation of GDPR (May 2018) should have you sweating every time you pipe user data through a black-box function hosted in `us-east-1`.
I've spent the last decade debugging distributed systems, and the current hype cycle around FaaS (Functions as a Service) ignores two critical realities: Latency and Cost Predictability. Sure, Lambda is cheap when you have zero users. But when you hit scale, the billing curve goes vertical. Furthermore, the "cold start" problem—where your function sleeps and takes 2-3 seconds to wake up—is unacceptable for real-time interactions.
The solution isn't to abandon the architectural pattern. Event-driven code is brilliant. The solution is to host it yourself. Today, I'm going to show you how to build a production-ready FaaS platform using OpenFaaS on a battle-tested CoolVDS instance. We get the developer velocity of serverless with the raw I/O performance of local NVMe storage.
The Architecture: OpenFaaS on Docker Swarm
While Kubernetes is eating the world (we are currently looking at v1.11), for a lean, mean FaaS implementation, Docker Swarm is still incredibly efficient and easier to manage for small-to-medium teams. OpenFaaS sits on top of this, routing traffic to Docker containers that spin up and down on demand.
Why run this on a VPS instead of bare metal? Isolation and snapshotting. If you mess up your Swarm config, you want to roll back the entire machine state in seconds. CoolVDS uses KVM virtualization, which means we get near-native CPU performance without the "noisy neighbor" issues inherent in container-based hosting (LXC/OpenVZ).
Step 1: The Environment Preparation
We are assuming a clean install of Ubuntu 18.04 LTS (Bionic Beaver). Don't use non-LTS releases for infrastructure; you want stability.
First, we need to ensure our I/O scheduler is optimized for the NVMe drives provided by CoolVDS. Standard spinning rust optimizations will actually slow us down here. Check your scheduler:
cat /sys/block/vda/queue/scheduler
# Output should be [none] or [mq-deadline] for NVMe virtualization
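If your scheduler shows something else (e.g. `cfq`), you can switch it at runtime. A minimal sketch, assuming the virtio disk appears as `/dev/vda` and your kernel lists `none` as an option; note this does not persist across reboots (use a udev rule for that):

```shell
# Switch the I/O scheduler to "none" (pass requests straight through to the device)
echo none | sudo tee /sys/block/vda/queue/scheduler
# Confirm the change took effect
cat /sys/block/vda/queue/scheduler
```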
Next, install Docker CE. Do not use the old `docker.io` package from the apt repo; it's ancient. Use the official Docker repo.
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
sudo apt-get update
sudo apt-get install -y docker-ce
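Before going further, it's worth confirming the daemon actually came up:

```shell
# Check that client and daemon are both responding
sudo docker version
# Smoke test: pulls a minimal image, runs it, removes the container on exit
sudo docker run --rm hello-world
```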
Step 2: Initialize the Swarm
Turn your VPS into a Swarm manager. This is where having a static IP from a provider with robust routing (like the NIX connection CoolVDS utilizes) is vital. Dynamic IPs will break your cluster quorum.
# Replace with your CoolVDS public IP
export PUBLIC_IP="192.0.2.10"
docker swarm init --advertise-addr $PUBLIC_IP
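Verify the manager is healthy before deploying anything on top of it. If you later add worker nodes, the join token is printed with the second command:

```shell
# Your node should appear as a manager with STATUS "Ready"
docker node ls
# Print the exact command a worker VPS would run to join this cluster
docker swarm join-token worker
```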
Step 3: Deploying OpenFaaS
We will use the `faas-swarm` provider (`faas-netes` is its Kubernetes counterpart). Clone the official repository. We are sticking to the 2018 stable releases.
git clone https://github.com/openfaas/faas
cd faas
./deploy_stack.sh
This script deploys the Gateway, Prometheus (for metrics), and the AlertManager. Prometheus is critical here—it watches the traffic usage and tells the system when to auto-scale your functions.
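You can poke at both services to confirm the stack is alive. The ports below are this stack's defaults (gateway on 8080, Prometheus on 9090), and the metric name is what the OpenFaaS gateway exports; adjust if you've changed the compose file:

```shell
# List deployed functions via the gateway's REST API
curl -s http://127.0.0.1:8080/system/functions
# Query Prometheus for the gateway's invocation counter
curl -s 'http://127.0.0.1:9090/api/v1/query?query=gateway_function_invocation_total'
```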
Pro Tip: By default, Docker logs can fill up your disk space rapidly if you have a verbose function. Configure your Docker daemon to rotate logs by creating /etc/docker/daemon.json:

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}

Restart Docker after applying this.
Writing a Python 3 Function
Let's create a function that actually does something useful, like image resizing—a task that is notoriously slow on cloud providers due to network I/O throttling. On a VPS with NVMe, the read/write speeds are blazing fast.
First, install the CLI:
curl -sL https://cli.openfaas.com | sudo sh
Now, scaffold a Python function:
faas-cli new --lang python3 image-resizer
This creates a directory structure. Edit `image-resizer/handler.py`:
def handle(req):
    """Handle an incoming request from the OpenFaaS watchdog."""
    # Simulate processing
    return "Image processed successfully on a localized instance."
And deploy it to your local stack:
faas-cli build -f image-resizer.yml
faas-cli deploy -f image-resizer.yml
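Once deployed, you can exercise the function either through the CLI or with plain HTTP against the gateway:

```shell
# Invoke via the OpenFaaS CLI, piping the request body on stdin
echo -n "test payload" | faas-cli invoke image-resizer
# Or hit the gateway route directly
curl -s -d "test payload" http://127.0.0.1:8080/function/image-resizer
```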
The Latency Argument: Oslo vs. Frankfurt
Physical distance matters. If your user base is in Norway, routing traffic to AWS Frankfurt (eu-central-1) adds roughly 20-30ms of round-trip time (RTT). That sounds negligible, but in a microservices architecture where one request triggers five internal function calls, that latency compounds.
By hosting on CoolVDS in a local datacenter, you slash that RTT to single digits (often <5ms within Oslo). For e-commerce or financial trading applications, this is the difference between a conversion and a bounce.
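The compounding effect is simple arithmetic. With hypothetical numbers (25 ms RTT per hop, five sequential internal calls per user request):

```shell
RTT_MS=25   # assumed round-trip time per internal call, in milliseconds
CALLS=5     # sequential function-to-function calls triggered by one request
echo "$((RTT_MS * CALLS)) ms of added latency per request"   # prints 125 ms
```

At <5 ms local RTT, the same five hops cost under 25 ms total.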
GDPR and Data Sovereignty
With Datatilsynet (The Norwegian Data Protection Authority) ramping up enforcement this year, knowing exactly where your data sits is not just a technical detail—it's a legal requirement. When you use public cloud FaaS, you are often agreeing to vague terms regarding data replication.
Running your own OpenFaaS stack on a Norwegian VPS ensures that:
- Data never leaves the jurisdiction without your explicit command.
- You have root access to the underlying storage (encrypted via LUKS if you're smart).
- You are not subject to the whims of US foreign intelligence surveillance (FISA courts) to the same degree as using US-owned infrastructure.
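For the LUKS point above, a minimal sketch for encrypting an attached data volume, assuming it appears as `/dev/vdb` (the mount point `/var/lib/faas` is an arbitrary choice) — note luksFormat destroys anything already on the device:

```shell
sudo apt-get install -y cryptsetup
# Initialize the LUKS container (prompts for a passphrase; WIPES the device)
sudo cryptsetup luksFormat /dev/vdb
# Unlock it as /dev/mapper/faas_data
sudo cryptsetup open /dev/vdb faas_data
# Create a filesystem and mount it
sudo mkfs.ext4 /dev/mapper/faas_data
sudo mkdir -p /var/lib/faas && sudo mount /dev/mapper/faas_data /var/lib/faas
```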
Performance Tuning for High Load
If you expect high concurrency, standard Linux kernel settings are too conservative. You need to allow more open files and optimize the TCP stack in `sysctl.conf`.
# /etc/sysctl.conf optimizations for high-throughput FaaS
# Increase system file descriptor limit
fs.file-max = 2097152
# Increase the read-buffer space allocatable
net.ipv4.tcp_rmem = 4096 87380 6291456
net.ipv4.tcp_wmem = 4096 16384 4194304
# Enable TCP Fast Open (TFO) for lower latency
net.ipv4.tcp_fastopen = 3
Apply these with `sysctl -p`. These settings allow the Docker swarm to handle thousands of simultaneous function invocations without choking on file descriptors.
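Note that `fs.file-max` only raises the system-wide ceiling; the Docker service itself also needs a generous per-process limit. One way to set it is a systemd drop-in (the value here is a reasonable default, not a magic number):

```shell
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/limits.conf <<'EOF'
[Service]
LimitNOFILE=1048576
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
```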
Conclusion
Serverless is a powerful paradigm, but it shouldn't cost you your autonomy. By combining the orchestration of Docker and OpenFaaS with the raw power and sovereignty of CoolVDS infrastructure, you build a system that is legally compliant, cost-effective, and brutally fast.
Don't let your architecture be dictated by a credit card form. Take control of your stack.
Ready to build? Deploy a high-performance NVMe instance on CoolVDS today and get your private FaaS cluster running in under 5 minutes.