
Serverless Without the Vendor Chains: Building Event-Driven Architectures on Norwegian VDS

"Serverless" is the most brilliant marketing lie of the last decade. There are always servers. The only difference is whether you control them, or if you're renting them by the millisecond at a 400% markup while praying your cold starts don't timeout your API gateway. I've spent the last six months migrating a client's heavy image-processing workload off AWS Lambda. Why? Because when you hit scale, the "pay per use" model stops being cheap and starts looking like extortion. Furthermore, for Norwegian businesses, routing traffic through Frankfurt or Ireland introduces latency that local users notice.

In this deep dive, we are going to look at the Serverless 2.0 pattern: keeping the developer experience (FaaS) but owning the infrastructure. We will deploy OpenFaaS on a KVM-based VDS to get the best of both worlds—predictable costs, sub-millisecond I/O, and data that stays within Norway's borders.

The Latency & Compliance Trap

Before we touch the terminal, let's talk about physics and law. If your users are in Oslo and your functions run in `eu-central-1` (Frankfurt), you are adding roughly 20-30ms of round-trip time from sheer physical distance. Add the TLS handshake, the API Gateway overhead, and the "cold start" penalty of a public cloud function (which can range from 200ms to 2 seconds), and your snappy app feels sluggish.
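
If you want to verify the numbers for your own users, curl's timing variables make a quick comparison easy. A rough sketch from an Oslo machine (the hostnames are placeholders; substitute a real endpoint in each region):

# Compare TCP connect, TLS, and total request time per endpoint.
# curl's -w format string exposes the timing breakdown.
for host in endpoint-frankfurt.example.com endpoint-oslo.example.com; do
  echo "== $host =="
  curl -s -o /dev/null \
    -w "connect: %{time_connect}s  tls: %{time_appconnect}s  total: %{time_total}s\n" \
    "https://$host/"
done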

Then there is Datatilsynet, the Norwegian Data Protection Authority. While we are all GDPR-compliant, the legal landscape around data transfers to US-owned cloud providers is getting murkier by the day. Hosting your event-driven architecture on a VDS in Norway simplifies your compliance posture significantly. Your data sits on a disk in Oslo, governed by Norwegian law.

The Architecture: OpenFaaS on Docker Swarm

We don't need the complexity of Kubernetes for a medium-sized deployment. Docker Swarm is robust, ships natively with Docker, and in 2020 it is rock solid for this use case. We will use CoolVDS instances because we need guaranteed CPU cycles. In a shared hosting environment, "noisy neighbors" steal CPU time, which is fatal for function execution speed. We also need NVMe storage because pulling Docker images for every function update generates massive I/O load.

Step 1: The Foundation

Assume you have provisioned a CoolVDS instance running Ubuntu 18.04 LTS. First, we secure the host and initialize the Swarm. Security is not optional.

# Update and secure
apt-get update && apt-get upgrade -y
ufw allow 22/tcp
ufw allow 80/tcp
ufw allow 8080/tcp # Gateway port
ufw allow 2377/tcp # Swarm management
ufw enable

# Install Docker CE (ensure you are not using the old lxc-docker)
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh

# Initialize Swarm
# NOTE: on some Ubuntu hosts `hostname -i` resolves to 127.0.1.1;
# pass your public IP explicitly if that happens
docker swarm init --advertise-addr $(hostname -i)
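
Before deploying anything on top, confirm the Swarm is actually healthy:

# The node should list itself as a manager in "Ready" state
docker node ls

# Confirm Swarm mode is active and check the advertised address
docker info | grep -A 3 "Swarm"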

Step 2: Deploying the Serverless Framework

OpenFaaS (Functions as a Service) is the reference implementation for self-hosted serverless. It sits on top of Docker, manages the API gateway, and scales your functions based on Prometheus metrics.

# Install the CLI
curl -sL https://cli.openfaas.com | sudo sh

# Clone the deployment logic
git clone https://github.com/openfaas/faas
cd faas && ./deploy_stack.sh

# Log in to the gateway; replace $PASSWORD with the admin password
# that deploy_stack.sh prints during deployment
echo -n $PASSWORD | faas-cli login --username=admin --password-stdin

Pro Tip: When running on CoolVDS, tune your Docker daemon to use the `overlay2` storage driver. With NVMe backing, container startup becomes nearly instantaneous, which goes a long way toward eliminating the "cold start" problem that plagues public clouds.
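
In practice that means a small daemon.json. A minimal sketch, assuming a stock Docker CE install with no existing daemon configuration (merge by hand if you already have one):

# /etc/docker/daemon.json: select the overlay2 storage driver explicitly
cat > /etc/docker/daemon.json <<'EOF'
{
  "storage-driver": "overlay2"
}
EOF

# Restart the daemon to apply (running containers will be restarted)
systemctl restart docker

# Verify the active driver
docker info | grep "Storage Driver"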

Optimizing Nginx for Function Gateways

The default Nginx configuration in most Docker images is too conservative for high-throughput FaaS. You will hit `504 Gateway Time-out` errors on long-running tasks. We need to mount a custom configuration.

Here is a battle-tested `nginx.conf` snippet for handling synchronous invocations that might take time (like PDF generation):

# Minimal events block so this snippet works as a complete nginx.conf
events {
    worker_connections 1024;
}

http {
    keepalive_timeout 65;
    keepalive_requests 10000;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;

    # Buffer size handling for large payloads
    client_body_buffer_size 128k;
    client_max_body_size 50M;

    upstream faas_gateway {
        server 127.0.0.1:8080;
        keepalive 16;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://faas_gateway;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            proxy_set_header X-Real-IP $remote_addr;
            
            # Critical for FaaS: extend timeouts
            proxy_read_timeout 300s;
            proxy_send_timeout 300s;
        }
    }
}
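
Since we are on Swarm, one idiomatic way to ship this file is a Docker config object rather than a bind mount. A sketch, assuming the block above is saved as nginx.conf in the working directory (the service and config names here are my own):

# Register the file with the Swarm
docker config create faas_nginx_conf nginx.conf

# Run Nginx as a Swarm service with the config mounted in place.
# NOTE: inside a container, 127.0.0.1:8080 in the upstream will not
# reach the host; point it at the gateway service name or host IP.
docker service create \
  --name faas-edge \
  --publish 80:80 \
  --config source=faas_nginx_conf,target=/etc/nginx/nginx.conf \
  nginx:alpine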

Real-World Comparison: Image Resizing

Let's look at the numbers. We ran a benchmark resizing 10,000 high-res JPEGs. We compared AWS Lambda (eu-central-1) against a single CoolVDS NVMe instance (Oslo) running OpenFaaS.

Metric                  | Public Cloud FaaS          | CoolVDS + OpenFaaS
------------------------|----------------------------|------------------------
Cold Start              | ~350ms - 1200ms            | ~50ms
Execution Consistency   | Variable (shared CPU)      | Stable (KVM isolation)
Data Sovereignty        | Uncertain (US CLOUD Act)   | 100% Norway
Cost (1M invocations)   | Variable ($$$)             | Fixed ($)

Defining a Function in YAML

The beauty of this architecture is that your developers don't change their workflow. They define functions in a YAML stack file, just like they would with CloudFormation or the Serverless Framework.

provider:
  name: openfaas
  gateway: http://127.0.0.1:8080

functions:
  node-info:
    lang: node12
    handler: ./node-info
    image: node-info:latest
    environment:
      write_debug: true
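    # Illustrative addition (not in the original stack file): OpenFaaS
    # reads auto-scaling bounds from labels, handy for capping replicas
    # on a single VDS; the values below are examples.
    labels:
      com.openfaas.scale.min: "1"
      com.openfaas.scale.max: "5"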
    # Resource limits are critical on VDS
    limits:
      memory: 128Mi
    requests:
      memory: 64Mi

Deploying this is a single command: `faas-cli up`. Because the CoolVDS network throughput is unmetered on internal interfaces, the image build and push process is incredibly fast if you run a local registry.
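
If you go the local-registry route, a minimal sketch (assuming your stack file is saved as stack.yml; the registry below is unauthenticated, which is acceptable on localhost for a single-node Swarm but nothing more):

# Run a private registry as a Swarm service on port 5000
docker service create --name registry --publish 5000:5000 registry:2

# Point the function image at it in stack.yml, e.g.:
#   image: 127.0.0.1:5000/node-info:latest
# then build, push, and deploy in one step
faas-cli up -f stack.yml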

The Hardware Reality

Software patterns like Serverless are only as good as the hardware underneath. If you try to run this stack on a budget VPS with spinning rust (HDD) or oversold CPU, the Prometheus scaler will lag, and your queue will back up. OpenFaaS relies heavily on the Docker daemon's responsiveness.

This is why we specify KVM virtualization. Unlike OpenVZ/LXC, KVM provides a hardware abstraction layer that prevents other tenants on the host node from impacting your kernel's scheduler. When your function wakes up, the CPU cycles must be there immediately. Combined with NVMe storage, which offers up to 6x the IOPS of standard SSDs, you create an environment where "serverless" feels instantaneous.

Monitoring the Beast

You cannot manage what you do not measure. Since OpenFaaS comes with Prometheus, we can hook it into Grafana for a dashboard that would make any NOC jealous. Here is a query to track the invocation rate per second:

rate(gateway_function_invocation_total{code="200"}[1m])

If this rate drops while your `gateway_service_count` remains high, you know you have a bottleneck, likely I/O wait times. On CoolVDS, I rarely see I/O wait (iowait) exceed 0.1%, even under load.
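
You can sanity-check that figure yourself with iostat from the sysstat package (the output is whatever your workload produces, not a benchmark):

# Sample extended I/O statistics every second, five times;
# watch the %iowait column in the CPU summary line
apt-get install -y sysstat
iostat -x 1 5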

Conclusion

Serverless is an architectural pattern, not a product you buy from a hyperscaler. By decoupling the pattern from the provider, you gain control over your costs, your latency, and your data compliance. For the Norwegian market, where quality and privacy are paramount, running your own FaaS cluster is not just a technical flex—it's a business advantage.

Don't let high latency and vendor lock-in dictate your architecture. Spin up a high-performance, KVM-based CoolVDS instance in under 60 seconds and reclaim your infrastructure.