Serverless Without the Lock-in: Implementing FaaS Patterns on High-Performance VDS in 2018

Serverless is a Behavior, Not a Provider

It is late 2018, and if I hear one more developer pitch "Serverless" as a magic bullet that solves all infrastructure woes, I might just `rm -rf /` my own workstation. The hype cycle around AWS Lambda and Azure Functions is deafening. They promise you never have to manage a server again. They lie.

Serverless is an architecture pattern, not a credit card transaction.

When you rely entirely on public cloud FaaS (Functions as a Service), you are trading management time for latency—specifically, the dreaded "Cold Start." I recently audited a client's "highly scalable" API processing image uploads. They were seeing 2-second delays because their Lambda containers were spinning down. For a user in Oslo connecting to a datacenter in Frankfurt, that latency is unacceptable.

Furthermore, with GDPR having come into full force this past May, data sovereignty is no longer optional. Sending Norwegian user data to US-controlled buckets requires legal gymnastics that most CTOs want to avoid. The solution? Build the Serverless pattern yourself, on infrastructure you control, right here in Norway.

The Architecture: OpenFaaS on Docker Swarm

You do not need the complexity of Kubernetes (unless you enjoy managing etcd clusters at 3 AM). For most teams in 2018, Docker Swarm combined with OpenFaaS is the sweet spot. It gives you the event-driven behavior of Lambda with the raw I/O performance of a dedicated VPS.

Here is the architecture we are deploying:

  • Compute: CoolVDS Instances (KVM-based, not OpenVZ container trash).
  • Orchestrator: Docker Swarm (Native clustering).
  • Framework: OpenFaaS (Serverless framework for Docker).
  • Gateway: NGINX (as a reverse proxy).

Step 1: The Foundation

Serverless relies on rapidly spinning up and tearing down containers. This kills mechanical hard drives. If your VPS provider is not giving you NVMe storage, you are dead in the water before you start. Disk I/O is the bottleneck of serverless.

First, initialize your Swarm on your primary node. We assume you are running Ubuntu 18.04 LTS.

# Initialize Swarm on the manager node
# Note: on Ubuntu, `hostname -i` can resolve to 127.0.1.1 via /etc/hosts;
# if it does, pass your node's private IP explicitly instead
docker swarm init --advertise-addr $(hostname -i)

# You will get a join token like this:
# docker swarm join --token SWMTKN-1-49nj1cmql0jkz5s954yi3oex3nedyz0fb0xx14ie39trti4wxv-8vxv8rssmk743ojnwacrr2e7c 192.168.99.100:2377

Join your worker nodes using the token. Ideally, keep your latency between nodes under 1ms. If you are using CoolVDS, the internal network typically handles this without jitter.
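On each worker, paste the join command from the manager's output, then verify from the manager that the cluster formed. A minimal sketch (substitute your own token and manager IP):

# On each worker node: substitute the token and IP from your own output
docker swarm join --token <YOUR_SWMTKN> <MANAGER_IP>:2377

# Back on the manager: every node should show STATUS Ready
docker node ls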

Step 2: Deploying the FaaS Stack

We will use the OpenFaaS stack. Clone the repository and deploy the stack using Docker Compose syntax (compatible with Swarm).

git clone https://github.com/openfaas/faas
cd faas
./deploy_stack.sh

Wait. What just happened? You just deployed a Gateway, Prometheus (for metrics), AlertManager, and the NATS streaming queue. That is a production-ready, asynchronous event pipeline in one command.
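Give it a minute, then sanity-check the deployment. In the 2018 repo, `deploy_stack.sh` deploys everything as a stack named `func`, so the gateway lands as `func_gateway` (adjust if your checkout differs):

# All services should report REPLICAS 1/1
docker service ls

# Tail the gateway logs if anything is stuck at 0/1
docker service logs func_gateway --tail 50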

Pro Tip: By default, Docker logs can fill up your disk space rapidly in a FaaS environment. Configure your daemon.json to rotate logs before you deploy.

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
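The file lives at `/etc/docker/daemon.json` on every node; restart the daemon afterwards so new containers pick up the rotation settings. A sketch for Ubuntu 18.04 with systemd:

# Place the JSON above at /etc/docker/daemon.json, then restart Docker
sudo systemctl restart docker

# Confirm the logging driver is active
docker info --format '{{.LoggingDriver}}'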

Step 3: The Cold Start Killer

The problem with AWS Lambda is that you cannot control when the container dies. With your own infrastructure, you can set the `scale_from_zero` policy. But for critical paths, you keep one replica warm. It costs you RAM, but RAM is cheap on CoolVDS compared to lost customers.

Let's write a simple Python 3 function to hash data (a CPU-bound task). We use the `faas-cli`.

# Create the function structure
faas-cli new --lang python3 hasher

Edit `hasher/handler.py`:

import hashlib

def handle(req):
    """handle a request to the function
    Args:
        req (str): request body
    """
    if not req:
        return "Empty body"
    
    m = hashlib.sha256()
    m.update(req.encode('utf-8'))
    return m.hexdigest()

Now for the critical part: the configuration. In `stack.yml`, we define the scaling constraints.

provider:
  name: faas
  gateway: http://127.0.0.1:8080

functions:
  hasher:
    lang: python3
    handler: ./hasher
    image: hasher:latest  # for multi-node swarms, push to a registry all nodes can reach
    environment:
      read_timeout: 10
      write_timeout: 10
    # This is where we beat the cloud:
    labels:
      com.openfaas.scale.min: "1"  # Always keep one warm
      com.openfaas.scale.max: "20"
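With the stack file in place, the build-deploy-invoke loop is three commands (assuming `faas-cli` is on your PATH and the gateway is listening on 127.0.0.1:8080 as configured above):

# Build the Docker image and deploy the function to the Swarm
faas-cli build -f stack.yml
faas-cli deploy -f stack.yml

# Smoke test: prints the SHA-256 hex digest of the payload
echo -n "coolvds" | faas-cli invoke hasher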

The Network Layer: Latency is the Enemy

If your users are in Norway, routing traffic through a US load balancer adds 100ms+ round trip time (RTT). By hosting this stack on a VPS in Oslo (or nearby), you drop that RTT to <15ms.
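Measure it rather than trusting the brochure. curl's timing variables give you connect and total time per function call (replace the gateway address with your own):

# Connect time approximates RTT; total includes function execution
curl -o /dev/null -s \
  -w 'connect: %{time_connect}s  total: %{time_total}s\n' \
  -d "ping" http://<GATEWAY_IP>:8080/function/hasher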

However, your internal network setup matters. You need to tune NGINX to handle the ephemeral nature of these connections. The default NGINX config is too polite. We need it to be aggressive.

Here is a snippet for your `nginx.conf` designed for high-concurrency API gateways:

worker_processes auto;
events {
    worker_connections 4096;
    use epoll;
    multi_accept on;
}

http {
    # Disable Nagle's algorithm for instant packet sending
    tcp_nodelay on;
    tcp_nopush on;

    # Keep connections alive to upstream containers
    keepalive_timeout 30;
    keepalive_requests 100000;

    upstream gateway {
        server 127.0.0.1:8080;
        keepalive 64;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://gateway;
            # HTTP/1.1 with a cleared Connection header is required,
            # or the upstream keepalive pool is silently ignored
            proxy_http_version 1.1;
            proxy_set_header Connection "";
        }
    }
}
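Always test before you reload; a typo in the edge proxy config takes every function offline at once:

# Validate syntax, then reload without dropping in-flight connections
sudo nginx -t && sudo nginx -s reload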

Performance Comparison: Managed vs. Self-Hosted

Why go through this trouble? Control and Cost.

| Feature              | Public Cloud FaaS         | CoolVDS + OpenFaaS           |
|----------------------|---------------------------|------------------------------|
| Cold start latency   | 300 ms - 2000 ms          | < 10 ms (with warm replicas) |
| Execution time limit | Typically 5-15 mins       | Unlimited                    |
| Data sovereignty     | Murky (CLOUD Act vs GDPR) | 100% Norway/EU               |
| Storage I/O          | Network attached (slow)   | Local NVMe (fast)            |

Why Bare Metal Performance Matters in Virtualization

Many providers oversell their CPU cores. In a serverless architecture, a "steal time" of 5% can cascade into a 50% performance degradation across your swarm. Because functions are short-lived, any CPU wait time is amplified.
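You do not have to take a provider's word for it; steal time is visible from inside the guest with standard Linux tools:

# Sample CPU counters every second for 10 seconds;
# the "st" column is the share of time the hypervisor stole from this VM
vmstat 1 10

# Or read the cumulative figure from top's CPU summary line
top -bn1 | grep "Cpu(s)"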

We built CoolVDS on KVM because it offers strict isolation. When you execute a Docker build or a heavy Python hash function, you are hitting the physical cores, not fighting 500 other neighbors for a slice of the processor. For databases that back these functions (like MongoDB or Postgres), the difference is night and day.

The Verdict

Serverless is powerful. But "Serverless" on someone else's computer is a trap of variable latency and vendor lock-in. By deploying OpenFaaS on high-performance infrastructure, you own the stack. You comply with Datatilsynet requirements. You keep your latency low for Nordic users.

Don't let your architecture be dictated by a billing model. Build what works.

Ready to build a swarm that doesn't sleep? Deploy a CoolVDS NVMe instance in Oslo today and get your FaaS gateway running in under 5 minutes.