Serverless Without the Lock-in: Building High-Performance FaaS on Bare-Metal VPS
It is late 2018, and the industry is screaming "Serverless!" from every rooftop. The promise is seductive: no infrastructure to manage, infinite scaling, and pay-per-execution. But as any battle-hardened systems architect knows, the bill always comes due. When you commit fully to AWS Lambda or Azure Functions, you aren't just deploying code; you are signing a blood pact with vendor lock-in, unpredictable "cold start" latency, and a billing model that becomes extortionate at scale.
There is a better way. You can have the developer velocity of Function-as-a-Service (FaaS) without handing the keys to your kingdom to a US megacorp. By leveraging OpenFaaS on top of high-performance KVM virtualization, you gain total control over costs, latency, and data sovereignty.
This is not a theoretical exercise. In Norway, where data privacy (GDPR) and latency to the NIX (Norwegian Internet Exchange) are paramount, running your own FaaS on local infrastructure is often the only legally and technically sound choice.
The "Cold Start" Problem and the NVMe Fix
The dirty secret of public cloud serverless is the cold start. If your function hasn't run in a few minutes, the provider spins down the container. The next request waits for the environment to boot. In a high-frequency trading environment or a real-time e-commerce checkout, 500ms of latency is unacceptable.
When you run your own FaaS infrastructure on CoolVDS, you control the "keep-alive" duration. More importantly, the underlying storage makes or breaks container hydration speeds. We use pure NVMe storage in our Oslo data center. The difference between spinning rust (or even standard SSDs) and NVMe for Docker image extraction is night and day.
Pro Tip: Docker's heavy lifting is I/O-bound. If your VPS provider is throttling your IOPS (Input/Output Operations Per Second), your functions will hang during scale-out events. Always check `iostat -x 1` during load testing. If `%util` hits 100% while wait times (`await`) spike, your storage is the bottleneck.
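If you want a quick sanity check beyond `iostat`, a few lines of Python can measure synchronous write latency directly. This is a rough sketch, not a proper benchmark (use `fio` for that): it times small write-plus-fsync rounds, which are similar in spirit to the many small synchronous writes Docker performs while extracting image layers.

```python
import os
import tempfile
import time


def fsync_latency_ms(path_dir=None, payload_size=4096, rounds=50):
    """Average write+fsync latency in milliseconds.

    A crude proxy for storage responsiveness: each round writes a
    small block and forces it to disk. On NVMe this should be well
    under a millisecond; on throttled or spinning storage it won't be.
    """
    payload = os.urandom(payload_size)
    with tempfile.NamedTemporaryFile(dir=path_dir, delete=True) as f:
        start = time.perf_counter()
        for _ in range(rounds):
            f.write(payload)
            f.flush()
            os.fsync(f.fileno())
        elapsed = time.perf_counter() - start
    return (elapsed / rounds) * 1000.0


if __name__ == "__main__":
    print("avg write+fsync latency: %.2f ms" % fsync_latency_ms())
```

Run it in the directory where Docker stores its data (usually `/var/lib/docker`) to test the volume that actually matters.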
Architecture: OpenFaaS on Docker Swarm
While Kubernetes is rapidly becoming the standard (version 1.13 just dropped), for many teams, Docker Swarm remains the pragmatic choice in 2018 for its simplicity and lower overhead. We will deploy OpenFaaS on a cluster of CoolVDS instances running Ubuntu 18.04 LTS.
Step 1: The Foundation
First, we prepare the host. We need a clean Ubuntu 18.04 install. Avoid "burstable" instances for the manager node; CPU steal will cause timeouts in the orchestration logic.
# Update and install dependencies
sudo apt-get update && sudo apt-get upgrade -y
sudo apt-get install -y curl git apt-transport-https ca-certificates software-properties-common
# Install Docker CE (Official 2018 method)
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install -y docker-ce
Step 2: Initialize the Swarm
On your primary CoolVDS node (the Manager), initialize the Swarm. Note the private IP usage—essential for security within our private networking options.
# Initialize Swarm on the Manager Node
sudo docker swarm init --advertise-addr 10.0.0.5
# You will see an output command to join workers.
# Run that command on your secondary CoolVDS instances.
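Once the workers have joined, confirm that every node reports Ready/Active before deploying anything. Here is a small sketch that parses the output of `docker node ls` using its Go-template `--format` flag; the parsing is factored into plain functions so it can be exercised without a live Swarm (the live command is shown in a comment).

```python
def parse_node_ls(output):
    """Parse `docker node ls --format '{{.Hostname}} {{.Status}} {{.Availability}}'`
    output into (hostname, status, availability) tuples."""
    nodes = []
    for line in output.strip().splitlines():
        parts = line.split()
        if len(parts) == 3:
            nodes.append(tuple(parts))
    return nodes


def unready_nodes(output):
    """Return hostnames of nodes that are not Ready and Active."""
    return [host for host, status, avail in parse_node_ls(output)
            if status != "Ready" or avail != "Active"]


if __name__ == "__main__":
    # On a real manager you would feed this function the output of:
    #   docker node ls --format '{{.Hostname}} {{.Status}} {{.Availability}}'
    sample = "manager1 Ready Active\nworker1 Ready Active\nworker2 Down Active\n"
    print("not ready:", unready_nodes(sample))
```

Wire this into your provisioning scripts and fail fast: deploying OpenFaaS onto a Swarm with a down worker just moves the failure later.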
Step 3: Deploy OpenFaaS
Now we deploy the OpenFaaS stack. This includes the API Gateway, the Watchdog, and Prometheus for metrics. The beauty of OpenFaaS is its agnosticism; it doesn't care if you are on AWS or a rack in Oslo.
# Clone the OpenFaaS repository
git clone https://github.com/openfaas/faas
cd faas
# Deploy the stack using the built-in shell script
./deploy_stack.sh
Once deployed, verify the services are running. You should see the gateway, NATS streaming, and Prometheus alerting services.
sudo docker service ls
# ID NAME MODE REPLICAS IMAGE
# m2x1... func_gateway replicated 1/1 openfaas/gateway:0.9.14
# ...
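Services can take a few seconds to converge after `deploy_stack.sh` finishes, so a blind `curl` right away may fail even on a healthy cluster. A minimal readiness poller with exponential backoff looks like this; the endpoint and port are assumptions (8080 is the OpenFaaS gateway default, and `/healthz` may vary by gateway version, so substitute whatever your deployment exposes).

```python
import time
import urllib.request


def backoff_schedule(attempts=5, base=0.5, cap=8.0):
    """Exponential backoff delays in seconds: 0.5, 1.0, 2.0, ... capped."""
    return [min(base * (2 ** i), cap) for i in range(attempts)]


def wait_for_gateway(url="http://127.0.0.1:8080/healthz", attempts=5):
    """Poll the gateway until it answers 200, backing off between tries."""
    for delay in backoff_schedule(attempts):
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if resp.getcode() == 200:
                    return True
        except OSError:
            pass  # connection refused / timeout: gateway not up yet
        time.sleep(delay)
    return False
```

The same pattern belongs in any CI pipeline that deploys functions immediately after standing up the stack.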
Writing a Function: The Python Example
Let's create a function that performs image resizing—a classic serverless use case that requires decent CPU cycles. We will use the OpenFaaS CLI.
# Install CLI
curl -sL https://cli.openfaas.com | sudo sh
# Create a new function skeleton
faas-cli new --lang python3 image-resizer
Edit `image-resizer/handler.py`. Note that in 2018, Python 3.6 is the standard for these environments.
import base64
import io

from PIL import Image


def handle(req):
    """
    req: base64-encoded image payload.

    The classic watchdog hands the request body to the handler as a
    string, so raw binary would be mangled; base64 keeps it intact.
    """
    try:
        # Decode the base64 payload into raw image bytes
        input_stream = io.BytesIO(base64.b64decode(req))
        image = Image.open(input_stream)

        # Resize in place, preserving aspect ratio
        image.thumbnail((128, 128))

        # Write the result to an in-memory buffer
        output_stream = io.BytesIO()
        image.save(output_stream, format=image.format or "PNG")

        # Base64-encode the resized image for the text response
        return base64.b64encode(output_stream.getvalue()).decode("ascii")
    except Exception as e:
        return str(e)
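On the client side, the same rule applies: binary payloads travel most safely through the watchdog's text interface as base64. A tiny helper pair for building the request body and decoding the response might look like this (the function name `image-resizer` and the gateway port 8080 come from the steps above; the invocation itself is shown as a `curl` comment rather than hard-coded).

```python
import base64


def build_payload(image_bytes):
    """Base64-encode raw image bytes so they survive the gateway's
    text-based request handling."""
    return base64.b64encode(image_bytes).decode("ascii")


def decode_response(body):
    """Decode a base64 function response back into raw bytes."""
    return base64.b64decode(body)


if __name__ == "__main__":
    # Hypothetical invocation against a local gateway:
    #   curl --data @payload.txt http://127.0.0.1:8080/function/image-resizer
    sample = b"\x89PNG\r\n\x1a\n"  # the first bytes of a PNG header
    payload = build_payload(sample)
    assert decode_response(payload) == sample
    print(payload)
```

The round trip is lossless, which is exactly what `req.encode('utf-8')` on raw bytes is not.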
Performance Tuning: Nginx and Sysctl
The default Docker networking stack can be a bottleneck under heavy load. If you are serving thousands of requests per second (which a CoolVDS NVMe instance can easily handle), you need to tune the kernel.
Edit `/etc/sysctl.conf` to raise the open-file limit and speed up turnover of TIME_WAIT sockets:
# /etc/sysctl.conf
fs.file-max = 2097152
net.core.somaxconn = 65535
net.ipv4.tcp_max_tw_buckets = 1440000
net.ipv4.ip_local_port_range = 1024 65000
net.ipv4.tcp_fin_timeout = 15
Apply these changes with `sudo sysctl -p`. Without this, your high-throughput serverless cluster will hit TIME_WAIT exhaustion before the CPU even breaks a sweat.
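It is worth verifying that the settings actually took effect, since a typo in `sysctl.conf` fails silently on some setups. On Linux, every sysctl is readable under `/proc/sys`, so a short audit script is enough; the reader function is injectable purely so the comparison logic can be tested off-box.

```python
def read_sysctl(name):
    """Read a live sysctl value via /proc (Linux only)."""
    path = "/proc/sys/" + name.replace(".", "/")
    with open(path) as f:
        return f.read().strip()


def check_tuning(expected, reader=read_sysctl):
    """Compare live kernel settings against the intended values.

    Returns a dict of {sysctl_name: actual_value} for every setting
    that does not match. Values are compared token-wise because
    /proc separates multi-value settings (like ip_local_port_range)
    with tabs rather than spaces.
    """
    mismatches = {}
    for name, want in expected.items():
        have = reader(name)
        if have.split() != want.split():
            mismatches[name] = have
    return mismatches


# The values from /etc/sysctl.conf above
EXPECTED = {
    "net.core.somaxconn": "65535",
    "net.ipv4.tcp_fin_timeout": "15",
    "net.ipv4.ip_local_port_range": "1024 65000",
}
```

Run `check_tuning(EXPECTED)` on each node after provisioning; an empty dict means the kernel is configured as intended.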
Data Sovereignty and GDPR
Since May 2018, GDPR has changed the landscape. Using US-based cloud providers (AWS, Google, Azure) creates complexity regarding data transfer agreements. Even with Privacy Shield (which is under constant legal attack), keeping personal data of Norwegian citizens within the borders is the safest legal strategy.
| Feature | Public Cloud FaaS | Self-Hosted (CoolVDS) |
|---|---|---|
| Data Location | Opaque (EU Region != Specific Country) | Oslo, Norway (Guaranteed) |
| Execution Time Limit | Strict (usually 5-15 mins) | Unlimited |
| Hardware Access | Abstracted vCPU | Kernel Access / KVM |
| Billing | Per-request (Unpredictable) | Flat Monthly Rate |
Why CoolVDS?
We don't offer "serverless" as a managed product because we believe you should own the stack. We offer the infrastructure that makes serverless possible. Our KVM virtualization ensures that your Docker containers are completely isolated from other tenants. We don't oversubscribe RAM, and our storage backend is exclusively NVMe.
When you deploy OpenFaaS on CoolVDS, you aren't just saving money compared to Lambda. You are building a portable, high-performance asset that belongs to your company, not a cloud vendor. Whether you are processing sensor data from the North Sea or handling traffic for a disruptive Oslo fintech, the underlying metal matters.
Ready to build? Don't let slow I/O kill your function performance. Deploy a high-frequency CoolVDS instance today and get your private serverless cluster running in under 5 minutes.