Serverless Without the Lock-in: Deploying Private FaaS Architectures in Norway (2018 Edition)

Let's clear the air immediately: "Serverless" is a lie. There are always servers. The only difference is whether you control the iron, the kernel, and the bill—or if you simply hand your credit card to Jeff Bezos and hope for the best. In late 2018, the industry is obsessed with AWS Lambda and Azure Functions, screaming "NoOps" from the rooftops. But any battle-hardened systems architect who has actually run high-throughput workloads knows the ugly truth: Public Cloud FaaS (Functions as a Service) introduces unpredictable latency, nightmares with vendor lock-in, and murky GDPR compliance issues here in Europe.

I have seen production deployments in Oslo grind to a halt because a US-east region had a hiccup, or because the "cold start" time of a Java function exceeded the timeout of the calling microservice. If you are building for the Nordic market, relying solely on public cloud serverless is a risk. The smarter pattern emerging this year is Private FaaS—running your own serverless framework on top of high-performance, predictable infrastructure.

The Architecture: Private FaaS on Bare-Metal Performance

Why would you run Serverless on a VPS? Three reasons: Cost consistency, Data Sovereignty, and I/O Performance. When you deploy a function on AWS Lambda, you pay per invocation and duration. If you get DDoS'd or have a runaway loop, your bill explodes. With a fixed-cost NVMe VPS from a provider like CoolVDS, your costs are capped.

For this architecture, we are looking at OpenFaaS. It’s container-centric, runs on Docker Swarm or Kubernetes, and allows you to run any binary as a function. This is critical for legacy integration.

The Stack

  • Infrastructure: CoolVDS KVM Instance (4 vCPU, 8GB RAM, NVMe Storage).
  • OS: Ubuntu 18.04 LTS (Bionic Beaver).
  • Orchestrator: Docker Swarm (Simpler than K8s for small-to-medium clusters).
  • FaaS Framework: OpenFaaS.

Step 1: The Foundation & Kernel Tuning

Before we install Docker, we need to prep the kernel. Standard Linux distros are tuned for general usage, not for the high churn of containers creating and destroying virtual network interfaces. On a CoolVDS instance, you have full root access, so we tune the network stack to handle the bridge traffic.

Open /etc/sysctl.conf and verify these settings are present to handle high connection rates without exhausting file descriptors or connection-tracking table space (note that the net.bridge.* keys require the br_netfilter kernel module to be loaded):

# /etc/sysctl.conf tuning for high-density container environments
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.neigh.default.gc_thresh1 = 1024
net.ipv4.neigh.default.gc_thresh2 = 2048
net.ipv4.neigh.default.gc_thresh3 = 4096
fs.inotify.max_user_watches = 524288
fs.file-max = 100000

Apply these changes immediately:

sysctl -p

If you skip the fs.inotify setting, your functions will fail silently when the system runs out of file watchers—a classic debugging nightmare I've spent too many nights fixing.
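A related gotcha worth automating: the net.bridge.* keys only exist while the br_netfilter module is loaded, so sysctl -p will error on them after a reboot unless the module loads at boot. A minimal sketch for Ubuntu 18.04, run as root (the file path follows the standard systemd modules-load convention):

```shell
# Load the bridge netfilter module now, and on every boot via systemd
modprobe br_netfilter
echo 'br_netfilter' > /etc/modules-load.d/br_netfilter.conf

# Re-apply sysctl settings and spot-check the critical keys
sysctl -p
sysctl net.bridge.bridge-nf-call-iptables fs.inotify.max_user_watches
```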

Step 2: Deploying the Serverless Framework

We will use Docker Swarm for this example because it is lightweight and production-ready in 2018. Initialize the swarm on your primary node (on Ubuntu, hostname -i can return the 127.0.1.1 loopback entry from /etc/hosts; substitute your server's real IP manually if it does):

docker swarm init --advertise-addr $(hostname -i)

Now, we clone the OpenFaaS repository and deploy the stack. This deploys the Gateway, the NATS queue, and Prometheus for metrics (the watchdog ships inside each function image). Using NVMe storage here is non-negotiable; the heavy I/O from pulling images and writing logs will choke a standard SATA SSD VPS.

git clone https://github.com/openfaas/faas
cd faas
./deploy_stack.sh

Once deployed, verify the services are running. You should see the gateway, NATS streaming queue, and the alert manager:

docker service ls

Step 3: Creating a GDPR-Compliant Image Resizer

Let's solve a real problem. You have a Norwegian e-commerce client who needs to resize user uploads. You cannot send these images to a US-based cloud function due to strict interpretations of the Datatilsynet guidelines regarding personal data processing (especially if faces are visible). We will build a Node.js function that runs locally on our CoolVDS instance.

First, install the CLI:

curl -sL https://cli.openfaas.com | sudo sh

Now, scaffold a new function using the Node.js 8 template (an active LTS release):

faas-cli new --lang node8 image-resizer

This creates a handler.js file. Here is the implementation using the sharp library, which is vastly faster than ImageMagick because it uses libvips.

"use strict"

const sharp = require('sharp');

module.exports = (context, callback) => {
    // Context is the input buffer
    if (!context) {
        return callback(undefined, { status: "No image provided" });
    }

    const image = sharp(context);
    
    image
        .resize(300, 300)
        .toBuffer()
        .then(data => {
            // Return the binary data encoded as base64 for transport
            callback(undefined, data.toString('base64'));
        })
        .catch(err => {
            callback(err, undefined);
        });
}
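You can exercise the handler contract locally before ever touching the gateway. The sketch below stubs out the resize step (so it runs without sharp installed) purely to show the (context, callback) shape and the base64 transport encoding the watchdog expects; the stub is illustrative, not part of OpenFaaS:

```javascript
"use strict";

// Stand-in handler: same (context, callback) contract as the real function,
// but the image-processing step is stubbed so no sharp dependency is needed.
const handler = (context, callback) => {
    if (!context) {
        return callback(undefined, { status: "No image provided" });
    }
    // Pretend-resize: in the real function this is sharp(context).resize(...)
    const processed = Buffer.from(context);
    callback(undefined, processed.toString('base64'));
};

// Drive it the way the watchdog would: pass the request body as a Buffer
handler(Buffer.from([0xff, 0xd8, 0xff, 0xe0]), (err, result) => {
    if (err) throw err;
    console.log(result); // base64-encoded payload
});
```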

Your image-resizer.yml file defines how this function scales. This is where the magic happens. We can define labels that tell the OpenFaaS watchdog how to auto-scale based on Prometheus metrics.

provider:
  name: faas
  gateway: http://127.0.0.1:8080

functions:
  image-resizer:
    lang: node8
    handler: ./image-resizer
    image: my-docker-user/image-resizer:latest
    labels:
      com.openfaas.scale.factor: 20
      com.openfaas.scale.min: 1
      com.openfaas.scale.max: 15
    environment:
      write_debug: true

Pro Tip: The com.openfaas.scale.factor label controls how aggressively the function scales, not when. It is the percentage of com.openfaas.scale.max added on each scaling event, so 20 here means 20% of 15, i.e. 3 new replicas per step; the requests-per-second threshold that actually fires the alert lives in the Prometheus alert rule shipped with the stack. On slower spinning-disk VPS providers you want conservative steps, because each new replica has to pull layers and warm up before it helps. On CoolVDS NVMe instances, replicas come online fast enough that aggressive scaling is safe, because the I/O wait is virtually zero.

Step 4: Building and Deploying

Build the container, push it to your registry (or a private local registry if you want total isolation), and deploy:

faas-cli up -f image-resizer.yml

You now have a scalable serverless endpoint at http://YOUR_IP:8080/function/image-resizer.
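The endpoint takes the raw image bytes as the POST body and returns base64, so a typical client pipes the response through base64 -d. The host, port, and file names below are placeholders for your own deployment:

```shell
# Hypothetical invocation of the deployed function (adjust host and input file):
#   curl -s --data-binary @photo.jpg http://127.0.0.1:8080/function/image-resizer \
#     | base64 -d > thumb-300x300.jpg

# The decode step mirrors the handler's encoding; a quick local sanity check
# of the round trip without a running gateway:
printf 'raw-bytes' | base64 | base64 -d
```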

The Latency Argument: Norway vs. Frankfurt

Physical distance matters. Light travels fast, but network routing is slow. If your users are in Oslo or Bergen, a round trip to AWS Frankfurt (eu-central-1) or Dublin (eu-west-1) adds 30-50ms of pure network latency, not counting the processing time.

By hosting your FaaS architecture on a CoolVDS server located in a Norwegian datacenter (or in close proximity), you cut that latency to under 5ms via NIX (the Norwegian Internet eXchange). For real-time applications or high-frequency trading bots, that difference is the entire game.

Cost Comparison (Monthly Estimate)

Metric         | Public Cloud FaaS (AWS/Azure)   | Private FaaS (CoolVDS)
Execution Cost | $0.20 per 1M requests (variable) | $0 (included in flat rate)
Compute        | Throttled by tier                | Dedicated KVM cores
Data Egress    | Expensive ($0.09/GB)             | Generous / unmetered
Cold Start     | 100ms - 2s (unpredictable)       | <50ms (tunable)
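The "variable vs. flat" argument in the table is easy to quantify. Here is a back-of-the-envelope model using AWS Lambda's late-2018 list prices ($0.20 per million requests plus $0.0000166667 per GB-second of compute); the workload figures and the flat VPS rate are assumptions for illustration, not benchmarks:

```javascript
// Rough monthly Lambda bill for a given workload (late-2018 list prices)
function lambdaMonthlyCost(requests, durationSec, memoryGB) {
    const requestCost = (requests / 1e6) * 0.20;          // $0.20 per 1M requests
    const gbSeconds = requests * durationSec * memoryGB;  // billed compute
    const computeCost = gbSeconds * 0.0000166667;         // $ per GB-second
    return requestCost + computeCost;
}

// Example workload: 50M resizes/month, 200ms each, on 256MB functions
const publicCloud = lambdaMonthlyCost(50e6, 0.2, 0.25);
const flatRateVps = 20; // hypothetical fixed monthly price for the VPS

console.log(publicCloud.toFixed(2));    // ~51.67
console.log(publicCloud > flatRateVps); // true for this workload
```

The crossover point depends entirely on your traffic; the structural difference is that the left column grows without bound (or with a DDoS), while the right column does not.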

Security: The "Noisy Neighbor" Problem

In a public cloud FaaS environment, your code runs on shared hardware alongside thousands of other tenants. While hypervisor escapes are rare, side-channel attacks (like the recent Spectre/Meltdown vulnerabilities revealed earlier in 2018) are a real concern for sensitive data.

We mitigate this by using KVM virtualization. Unlike OpenVZ or LXC containers often sold as "VPS" by budget providers, which share the host's kernel, KVM gives each guest its own kernel. When you run Docker on top of KVM, you have two layers of defense.

Conclusion

Serverless is a powerful pattern, but it shouldn't cost you your autonomy. By 2018 standards, the tools are mature enough to roll your own. With OpenFaaS and Docker Swarm, you get the developer experience of AWS Lambda with the cost structure and performance profile of bare metal.

If you are serious about low latency in Norway and keeping the Datatilsynet happy, stop shipping your data to Frankfurt. Build your Private Cloud on infrastructure that respects your need for speed.

Ready to build? Deploy a high-performance NVMe KVM instance on CoolVDS today and get your private serverless cluster running in under 5 minutes.