
The "No-Cloud" Serverless Pattern: Building High-Performance FaaS Architectures on VPS in Norway

The "No-Cloud" Serverless Pattern: Building High-Performance FaaS Architectures on VPS in Norway

Let’s get one thing straight immediately: Serverless is a lie. It is a marketing term designed to abstract you away from the hardware you pay for, while introducing the "Black Box" problem. When your AWS Lambda function hangs for 3 seconds on a cold start, you can't SSH in to check the load average. You just pay the bill.

For those of us managing high-throughput systems in the Nordics, reliance on public cloud FaaS (Function as a Service) introduces two critical weaknesses: latency and data sovereignty. If your users are in Oslo, routing every API trigger to Frankfurt (eu-central-1) or Ireland (eu-west-1) adds unnecessary milliseconds. In the world of high-frequency trading or real-time bidding, those milliseconds are expensive.

The superior architectural pattern in 2019 isn't abandoning servers—it's abstracting them correctly on your own infrastructure. By deploying a self-hosted FaaS layer on high-performance NVMe VPS instances, we gain the developer velocity of serverless with the raw I/O performance of bare metal, all while keeping data strictly within Norwegian borders.

The Architecture: The "Iron Functions" Pattern

We are seeing a shift away from the "pure" public cloud serverless model toward the "Iron Functions" pattern. This involves running a container orchestrator (Kubernetes or Docker Swarm) on a Virtual Dedicated Server (VDS), with a FaaS framework layered on top.

Why do this? Predictability.

Public cloud FaaS creates billing spikes. A DDoS attack on a public endpoint can bankrupt a startup overnight. On a CoolVDS instance, your cost is capped. If you hit 100% CPU, you throttle; you don't go broke. Furthermore, by utilizing KVM virtualization, we avoid the "noisy neighbor" issues plaguing shared container platforms.

Pro Tip: When running FaaS on a VPS, the bottleneck is rarely CPU—it's I/O wait. The constant pulling of Docker images and the overlayfs operations require fast disk access. Always choose NVMe storage over standard SSD for FaaS workloads. Standard SSDs will choke under the concurrency of 500+ function invocations.
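
To put a number on that, a quick fio run approximates the 4k random-read pattern of concurrent image pulls and overlayfs lookups. This is an illustrative benchmark, not a rigorous audit; the test file path and sizes are assumptions you should adapt:

# Illustrative 4k random-read benchmark at queue depth 64 (adjust path/size to taste)
apt-get install -y fio
fio --name=faas-iops --filename=/var/lib/docker/fio-test --rw=randread \
    --bs=4k --size=1G --ioengine=libaio --iodepth=64 --direct=1 \
    --runtime=60 --time_based
rm -f /var/lib/docker/fio-test

On NVMe this should sustain tens of thousands of IOPS; if it sits in the low thousands, the disk becomes your ceiling long before the CPU does.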

Implementation: Deploying OpenFaaS on Ubuntu 18.04

For this architecture, we will use OpenFaaS. It is currently the most mature, Docker-native serverless framework available. We will deploy this on a CoolVDS instance running Ubuntu 18.04 LTS.

1. Kernel Tuning for High Concurrency

Serverless workloads generate thousands of short-lived TCP connections. The default Linux networking stack is tuned conservatively. Before installing Docker, we must tune the kernel for rapid connection recycling and raise the file descriptor ceiling, or you will exhaust ephemeral ports and descriptors under load.

Edit /etc/sysctl.conf and add the following parameters used in high-load production environments:

# /etc/sysctl.conf
# Increase system file descriptor limit
fs.file-max = 2097152

# Allow for more PIDs (essential for high container density)
kernel.pid_max = 4194303

# Tune network stack for short-lived connections
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 15
net.core.somaxconn = 4096
net.core.netdev_max_backlog = 4096

Apply these changes with sysctl -p. If you skip this, your API Gateway will start throwing 502 errors under load, regardless of how much RAM you have.
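
One companion step: fs.file-max raises the system-wide ceiling, but the Docker daemon also carries a per-process limit. A minimal sketch for systemd on Ubuntu 18.04, using a drop-in override (run it after installing Docker in step 2, and size the value to your workload):

# Raise the Docker daemon's open-file limit via a systemd drop-in
mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/limits.conf <<'EOF'
[Service]
LimitNOFILE=1048576
EOF
systemctl daemon-reload && systemctl restart docker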

2. The Container Layer

We will use Docker Swarm for this example as it has lower overhead than Kubernetes (v1.13) for a single or dual-node cluster, which fits the "Pragmatic Ops" methodology perfectly.

# Install Docker CE (standard 2019 procedure, run as root)
apt-get update && apt-get install -y apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt-get update && apt-get install -y docker-ce docker-ce-cli containerd.io

# Initialize Swarm (on a VPS with multiple interfaces, pass --advertise-addr <public-ip>)
docker swarm init
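
A quick sanity check before layering FaaS on top: confirm this node is an active Swarm manager.

docker info --format '{{.Swarm.LocalNodeState}}'   # should print: active
docker node ls                                     # this host listed as Leader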

3. Deploying the FaaS Stack

Clone the OpenFaaS repository. The default stack deploys both invocation paths: the Gateway serves synchronous calls and returns responses immediately, while NATS Streaming queues asynchronous invocations for deferred processing.

git clone https://github.com/openfaas/faas
cd faas && ./deploy_stack.sh
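
Give the services half a minute to converge, then smoke-test the deployment. Newer versions of deploy_stack.sh generate basic-auth credentials and print them during deployment; the placeholder password below is yours to substitute:

docker service ls                                 # gateway, NATS, Prometheus, sample functions
curl -s -u admin:<generated-password> http://127.0.0.1:8080/system/functions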

Once deployed, your VDS is now a function processor. The `gateway` service is your entry point. Unlike AWS API Gateway, you control the timeout values. If you have a long-running report generation task that takes 45 seconds, you simply adjust the `read_timeout` in the stack configuration.
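
Concretely, the knobs are environment variables on the `gateway` service in the stack's docker-compose.yml. This excerpt is a sketch; the key names match the OpenFaaS gateway, but check the file shipped with your release before editing:

# docker-compose.yml (excerpt): gateway timeouts sized for a 45s report task
gateway:
    environment:
        read_timeout: "65s"       # reading the client request
        write_timeout: "65s"      # writing the response back
        upstream_timeout: "60s"   # must exceed your slowest function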

The Data Sovereignty Factor: Norway vs. The World

This is where the architecture becomes a legal asset. With GDPR in force since last year (2018) and the Norwegian Datatilsynet watching closely, knowing exactly where your data is processed is non-negotiable.

When you use a US-based cloud provider's serverless offering, you are often subject to opaque replication policies. By hosting your FaaS infrastructure on a Norwegian VPS, you ensure that the temporary data created during function execution—which often includes PII (Personally Identifiable Information)—never leaves the jurisdiction. It resides on physical drives in Oslo.

Feature              | Public Cloud FaaS                  | Self-Hosted (CoolVDS)
---------------------|------------------------------------|-----------------------------
Cold Start Latency   | 200ms - 3000ms                     | < 50ms (tunable)
Execution Timeout    | Strict limits (e.g., 5-15 min)     | Unlimited
Data Location        | Region-based (replication opaque)  | Single DC (Oslo)
Cost Model           | Per request (unpredictable)        | Fixed monthly (predictable)

The "Strangler Pattern" for Legacy Migration

How do you move a monolithic PHP application hosted on a legacy server to this new architecture? You use the Strangler Pattern. You don't rewrite the whole app. You identify the specific, high-load endpoints (image resizing, PDF generation, webhook processing) and move them, one at a time, to your OpenFaaS layer on CoolVDS.

Here is an Nginx configuration snippet to route specific traffic from your main web server to your new FaaS backend, utilizing the `proxy_pass` directive effectively:

upstream faas_gateway {
    server 127.0.0.1:8080;
    keepalive 32;
}

server {
    listen 80;
    server_name api.yourdomain.no;

    location /function/ {
        proxy_pass http://faas_gateway;
        proxy_http_version 1.1;
        proxy_set_header Connection "";  # required for the upstream keepalive pool to work
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;

        # Crucial for long-running FaaS functions
        proxy_read_timeout 300s;
        proxy_connect_timeout 300s;
    }
}
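
With the vhost live, an end-to-end test routes through Nginx into the gateway. The function name echoit here is one of the OpenFaaS sample functions; substitute whatever you have deployed:

curl -s -H "Host: api.yourdomain.no" http://127.0.0.1/function/echoit -d "Hello from Oslo"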

Why Infrastructure Choice Dictates Success

This architecture relies heavily on the underlying virtualization technology. We specifically use KVM (Kernel-based Virtual Machine) at CoolVDS because it provides strict resource isolation. Container-based virtualization (like OpenVZ) shares the host kernel. If you try to run Docker inside OpenVZ, you will hit module limitations and stability issues.
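
Unsure what your current provider actually runs underneath? From inside the guest on any systemd distribution:

systemd-detect-virt   # prints "kvm" on a KVM guest, "openvz" inside an OpenVZ container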

Furthermore, running a database alongside your FaaS stack requires low latency. Our benchmarks show that NVMe storage provides the random IOPS necessary to handle the state management of thousands of concurrent functions without the "I/O wait" spikes seen on standard SATA SSD VPS providers.

Final Thoughts

Serverless is not about getting rid of servers. It is about architectural discipline. It is about decoupling your logic into small, maintainable units. But you shouldn't have to sacrifice performance or data privacy to get there.

If you are building the next generation of Norwegian tech, build it on ground you control. Stop renting black boxes.

Ready to own your infrastructure? Deploy a KVM NVMe instance on CoolVDS today and experience single-digit latency to NIX.