Serverless Patterns in 2018: Escaping Vendor Lock-in with Self-Hosted FaaS
Let’s clear the air immediately: "Serverless" is a lie. There are always servers. The only variable is whether you control them or rent execution time by the millisecond from a giant US conglomerate that holds your source code hostage.
I’ve spent the last six months migrating a client off a major public cloud provider. Why? Because their "cheap" lambda functions started costing more than a rack of dedicated hardware once their traffic hit 500 requests per second. Add to that the nightmare of Cold Starts—where a function takes 3 seconds to wake up because the provider spun down the container—and the latency becomes unacceptable for real-time applications.
With GDPR officially enforceable as of last month (May 2018), the rules have changed. Sending user data to a black-box function running in us-east-1 isn't just bad architecture; it's a compliance liability. For Norwegian businesses, data sovereignty is no longer optional.
The Alternative: Self-Hosted FaaS
The pragmatic move for 2018 is Functions as a Service (FaaS) on top of your own infrastructure. You get the developer velocity of serverless (git push -> deploy) without the vendor lock-in or latency penalties. You control the hardware, the network, and the location.
We are going to look at OpenFaaS. It’s container-native, runs brilliantly on Kubernetes, and lets you define functions in Docker containers. But to make this work, you cannot use noisy-neighbor shared hosting. You need KVM virtualization with strict resource isolation.
The Architecture: K8s + OpenFaaS on CoolVDS
In a recent deployment for a media processing startup in Oslo, we utilized a cluster of CoolVDS instances running Kubernetes 1.10. The goal was to process image uploads. Public cloud functions were timing out on large images. Here is the stack we built:
- Infrastructure: 3x CoolVDS NVMe instances (4 vCPU, 8GB RAM each) connected via private networking.
- Orchestration: Kubernetes (kubeadm).
- FaaS Framework: OpenFaaS.
- Ingress: Nginx with aggressive caching tuning.
Why NVMe? Because FaaS relies on spinning up containers instantly. Disk I/O is the bottleneck for container start times. On standard SATA SSDs, we saw start times of 1.2s. On CoolVDS NVMe drives, that dropped to 300ms.
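You can sanity-check this on your own nodes. A crude but telling benchmark (alpine:3.7 here is just a small throwaway image) is to time a container launch once the image is already cached locally, so you measure start-up rather than network pull:
# Warm the local image cache so we time container start, not the pull
docker pull alpine:3.7
# Time a full create/start/exit cycle of a minimal container
time docker run --rm alpine:3.7 true
Run it a few times and take the median; the first run after a reboot will always be slower.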
Step 1: The Foundation
Assuming you have provisioned your KVM instances, the first step is kernel tuning. Standard Linux distributions are not tuned for the high churn of container networking.
Add these to your /etc/sysctl.conf to handle the burst traffic typical of serverless workloads:
# Allow more connections
net.core.somaxconn = 4096
net.ipv4.tcp_max_syn_backlog = 4096
# Expand the ephemeral port range for high-concurrency outbound connections
net.ipv4.ip_local_port_range = 1024 65000
# Reuse TIME_WAIT sockets for new outbound connections (safe, unlike tcp_tw_recycle)
net.ipv4.tcp_tw_reuse = 1
Apply with sysctl -p.
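To confirm the values are live, read them back:
# Read back the tuned values
sysctl net.core.somaxconn net.ipv4.tcp_max_syn_backlog net.ipv4.tcp_tw_reuse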
Step 2: Deploying OpenFaaS
We use helm to deploy OpenFaaS. It’s cleaner than applying raw manifests. Ensure you have RBAC enabled on your cluster.
# Create namespaces
kubectl apply -f https://raw.githubusercontent.com/openfaas/faas-netes/master/namespaces.yml
# Add the OpenFaaS helm repo
helm repo add openfaas https://openfaas.github.io/faas-netes/
# Deploy
helm upgrade openfaas --install openfaas/openfaas \
--namespace openfaas \
--set functionNamespace=openfaas-fn \
--set serviceType=NodePort
Once deployed, your gateway is exposed. This is where the magic happens. You push code, OpenFaaS packages it into a Docker container, and Kubernetes schedules it.
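Before deploying anything, verify the rollout. Assuming the faas-netes chart defaults, the NodePort service is called gateway-external and maps to port 31112, which is why the stack file in the next step points at http://127.0.0.1:31112:
# Core components should all be Running
kubectl get pods -n openfaas
# The externally exposed gateway (NodePort 31112 by default)
kubectl get svc -n openfaas gateway-external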
Pro Tip: OpenFaaS can scale functions down to zero replicas (via the faas-idler add-on), which saves resources but reintroduces cold starts. On your CoolVDS instances, since you own the resources, set com.openfaas.scale.min to "1" in your function labels. This keeps one "warm" container ready to accept requests instantly, and latency drops from seconds to milliseconds.
Step 3: The Function Definition
Here is a real-world example of an image resizing function configuration (stack.yml). Note the environment variables used to tune the underlying process.
provider:
  name: faas
  gateway: http://127.0.0.1:31112

functions:
  image-resizer:
    lang: python3
    handler: ./image-resizer
    image: my-registry/image-resizer:0.1.2
    labels:
      com.openfaas.scale.min: "1"
      com.openfaas.scale.max: "15"
      com.openfaas.scale.factor: "20"
    environment:
      write_timeout: "15s"
      read_timeout: "15s"
      combine_output: false
    limits:
      memory: 128Mi
      cpu: 100m    # 10% of a vCPU
The cpu: 100m limit is critical. On a shared VPS, "1 vCPU" is often a burstable metric. If your noisy neighbor is compiling a kernel, your function hangs. With CoolVDS, 1 vCPU is a dedicated thread on the hypervisor. 100m is genuinely 10% of a physical core.
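From here, the push-to-deploy loop is three faas-cli commands. This sketch assumes faas-cli is installed locally, my-registry is reachable from your nodes, and test.jpg is any sample image you have on hand:
# Build the function image from the python3 template
faas-cli build -f stack.yml
# Push it to the registry referenced in stack.yml
faas-cli push -f stack.yml
# Deploy (or redeploy) through the gateway
faas-cli deploy -f stack.yml --gateway http://127.0.0.1:31112
# Smoke test: every function is a plain HTTP endpoint under /function/<name>
curl --data-binary @test.jpg http://127.0.0.1:31112/function/image-resizer -o resized.jpg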
Handling State: The "Serverless" Achilles Heel
Functions are stateless. But your app isn't. You need a database. Do not run your database inside Kubernetes/FaaS if you value your sanity. Run it on a dedicated node.
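Your functions still need a safe way to reach that dedicated node. One approach (the secret name, host, and password below are placeholders) is to keep credentials in a Kubernetes secret in the function namespace and reference it from the secrets: list in stack.yml so faas-netes mounts it into the function's container:
# Store DB credentials in the namespace where functions run
kubectl create secret generic db-credentials \
  -n openfaas-fn \
  --from-literal=db-host=10.10.0.5 \
  --from-literal=db-password='change-me'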
For this architecture, we peer a separate CoolVDS instance running MariaDB 10.2. To ensure the database can keep up with hundreds of concurrent functions, we tune the InnoDB engine specifically for flash storage (NVMe):
[mysqld]
# Optimize for NVMe IOPS
innodb_io_capacity = 2000
innodb_io_capacity_max = 4000
innodb_flush_neighbors = 0
# Memory allocation (Assuming 8GB RAM node)
innodb_buffer_pool_size = 6G
innodb_log_file_size = 512M
Setting innodb_flush_neighbors = 0 tells MySQL, "I trust my storage device to handle random writes; don't try to be clever and group them." This is only safe on high-end SSD/NVMe storage.
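After restarting mysqld, confirm the settings took from the client (adjust credentials to your setup):
# Sanity-check that the flash-storage tuning is active
mysql -u root -p -e "SHOW VARIABLES LIKE 'innodb_io_capacity%'; SHOW VARIABLES LIKE 'innodb_flush_neighbors';"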
Latency and Sovereignty
Why bother with all this configuration?
- Latency: If your users are in Oslo or Bergen, routing traffic to Frankfurt or Ireland adds 30-50ms of round-trip time. Hosting on CoolVDS in Norway keeps local latency under 5ms via NIX (see the timing check after this list).
- Control: You can SSH into the node. You can run tcpdump. Try doing that on AWS Lambda when your API is returning 502s.
- Cost: A fixed monthly price for a cluster of KVM instances is predictable. An auto-scaling cloud bill is not.
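Don't take the latency numbers on faith; curl can break a request down for you (replace <node-ip> with one of your CoolVDS nodes and test.jpg with any sample image):
# Where does the time go? TCP connect vs. total round trip to the gateway
curl -o /dev/null -s -w 'connect: %{time_connect}s  total: %{time_total}s\n' \
  --data-binary @test.jpg http://<node-ip>:31112/function/image-resizer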
Serverless is a powerful pattern, but it shouldn't mean surrendering your infrastructure. By building your own FaaS platform on robust KVM virtualization, you get the best of both worlds: the developer experience of 2018's modern tools and the raw iron performance of traditional hosting.
Ready to build your private FaaS cluster? Deploy a high-performance KVM instance on CoolVDS today and keep your data within Norwegian borders.