
Serverless Without the Lock-in: Building High-Performance FaaS Architectures in Norway

Let's cut through the marketing noise for a second. Everyone is talking about "Serverless" this year. If you listen to the giants in Seattle or Redmond, they’ll tell you that managing servers is a relic of the past and you should upload your logic to their black boxes immediately.

But as a systems architect who has actually debugged production outages at 3 AM, I know better. "Serverless" is just a lie we tell stakeholders. There are still servers. The difference is that you don't control them, you can't tune them, and when the "Noisy Neighbor" effect hits your Lambda function in a crowded multi-tenant zone, you have zero recourse.

And then there is the elephant in the room: GDPR. As of May 25th, 2018, the rules changed. Relying on US-controlled cloud providers to process Norwegian user data adds a layer of legal complexity that keeps CTOs awake at night. The solution isn't to abandon the Function-as-a-Service (FaaS) pattern, which is brilliant for event-driven architectures. The solution is to own the stack.

The Architecture: Self-Hosted FaaS

In 2018, we have mature tools to run serverless workloads on our own infrastructure. By deploying an open-source FaaS framework on a high-performance VPS, we gain three massive advantages:

  1. Zero Cold Starts: We control the container keep-alive settings.
  2. Data Sovereignty: Data never leaves the CoolVDS datacenter in Norway.
  3. Cost Predictability: No surprise bills for API gateway invocations.

We are going to use OpenFaaS. It's container-native, runs on Docker Swarm (or Kubernetes if you enjoy pain), and is incredibly fast.
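
As a preview of advantage #1 (we install the CLI in Step 3): keeping a replica warm is a one-flag affair. A minimal sketch, assuming the faas-cli --label flag and the com.openfaas.scale.min label as documented for current OpenFaaS releases; figlet is just a stock sample function:

# Keep at least one replica warm so the first request never waits on a container
# (label names per the OpenFaaS docs; verify against your installed version)
faas-cli deploy --image functions/figlet --name figlet \
    --label com.openfaas.scale.min=1 \
    --label com.openfaas.scale.max=5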

Step 1: The Foundation

Serverless relies entirely on container spin-up speed. If your underlying disk I/O is slow, your functions will lag. This is why I refuse to deploy FaaS on standard SSDs, let alone spinning rust. We need NVMe. On a CoolVDS NVMe instance, the I/O wait is practically negligible.

First, let's prep a clean instance. CentOS 7 and Ubuntu 16.04 both work; the commands below assume CentOS 7, so substitute the apt equivalents on Ubuntu. We need Docker CE (Community Edition) installed.

# Remove old versions if present
sudo yum remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-selinux docker-engine-selinux docker-engine

# Install utilities
sudo yum install -y yum-utils device-mapper-persistent-data lvm2

# Add the repo
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

# Install Docker CE (pinned to the stable 18.06 release)
sudo yum install -y docker-ce-18.06.1.ce
sudo systemctl start docker
sudo systemctl enable docker
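
Before moving on, verify that the daemon is healthy and that you landed on the overlay2 storage driver (what you want on any modern kernel):

# Confirm version and storage driver
sudo docker version
sudo docker info | grep -i 'storage driver'

# Smoke test the runtime
sudo docker run --rm hello-world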

Step 2: Swarm Initialization

While Kubernetes is winning the orchestration war, for a pure FaaS deployment in 2018, Docker Swarm is still lighter and faster to configure for small-to-medium clusters.

# Initialize Swarm on the primary node
docker swarm init --advertise-addr $(hostname -i)

Pro Tip: If you are running this on a private network within CoolVDS, ensure your advertise address is your private LAN IP to avoid latency from external routing.
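
For example, with a private interface on 10.0.0.0/24 (an address chosen purely for illustration):

# Advertise the private LAN IP instead of the public interface
docker swarm init --advertise-addr 10.0.0.5

# Confirm the node registered as a manager
docker node ls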

Step 3: Deploying OpenFaaS

We will clone the OpenFaaS repository and deploy the stack. This sets up the API Gateway, the watchdog, and Prometheus for metrics.

git clone https://github.com/openfaas/faas
cd faas
./deploy_stack.sh
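
Give the stack a few seconds to converge, then confirm the gateway and its supporting services report full replica counts. In the default Swarm deployment the services carry a func_ prefix, but verify against your checkout:

# List OpenFaaS services and their replica counts
docker service ls

# Tail the gateway logs if anything is stuck at 0/1
docker service logs -f func_gateway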

Once deployed, you need the CLI tool to interact with your new serverless cluster.

curl -sL https://cli.openfaas.com | sudo sh
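
Sanity-check the whole pipeline by deploying a stock function and invoking it through the gateway. A minimal sketch; figlet is just a convenient sample image:

# Point the CLI at the local gateway
export OPENFAAS_URL=http://127.0.0.1:8080

# If your deploy_stack.sh enabled basic auth, log in first with the
# credentials it printed, e.g.: faas-cli login --username admin --password <generated>

# Deploy and invoke
faas-cli deploy --image functions/figlet --name figlet
echo "CoolVDS" | faas-cli invoke figlet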

Optimizing for Latency: The Norway Factor

Here is where the geography matters. If your users are in Oslo, Bergen, or Trondheim, routing traffic to a datacenter in Frankfurt adds 20-30ms of round-trip time (RTT). For a static site, that's fine. For a chain of microservices triggering each other?

It piles up.

If Function A calls Function B, which in turn queries Database C, that 30ms penalty is paid on every hop: three hops is roughly 90ms of pure network wait before any real work happens. By hosting on a VPS in Norway, your RTT drops to 2-5ms.
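
Don't take my word for it; measure from where your users actually sit. curl's timing variables give you connect and total times per request (this assumes the figlet function from earlier and the faas.yourdomain.no gateway we configure in the next section):

# Measure connect and total time against your gateway
curl -o /dev/null -s \
    -w 'connect: %{time_connect}s  total: %{time_total}s\n' \
    -d 'ping' https://faas.yourdomain.no/function/figlet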

Configuring Nginx as a Reverse Proxy

You shouldn't expose the OpenFaaS gateway (port 8080) directly to the world. Put Nginx in front of it to handle TLS termination and caching, and to give yourself a place to enforce security headers. The block below is the bare HTTP proxy; we'll add the certificate right after.

server {
    listen 80;
    server_name faas.yourdomain.no;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        
        # Performance Tuning for FaaS
        proxy_buffering off;
        proxy_read_timeout 300s;
    }
}

Architect's Note: The proxy_buffering off; directive is crucial here. You want the function's output streamed back to the client immediately, especially for long-running processes or log streaming. Don't let Nginx hold the bytes.
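
For the TLS side, Let's Encrypt's certbot will patch that server block for you. Assuming the certbot nginx plugin is installed and DNS for faas.yourdomain.no already points at this box:

# Obtain a certificate and let certbot rewrite the Nginx config for port 443
sudo certbot --nginx -d faas.yourdomain.no

# Confirm automated renewal works
sudo certbot renew --dry-run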

The Storage Bottleneck

The most common mistake I see when teams move to self-hosted Docker environments is underestimating IOPS (Input/Output Operations Per Second). When you deploy a new function, the system has to pull the image, extract layers, and mount the overlay filesystem.

On a standard SATA SSD, this might take 2-4 seconds. On CoolVDS NVMe storage, we clock this consistently under 600ms. In the world of microservices, that difference is the gap between a snappy UI and a frustrated user.

Storage Type  | Random Read IOPS | Container Start Latency (Est.)
------------- | ---------------- | ------------------------------
7.2k HDD      | ~80-120          | 5s+ (Painful)
SATA SSD      | ~5,000-10,000    | 1.5s (Acceptable)
CoolVDS NVMe  | ~200,000+        | <0.5s (Instant)
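
These are estimates; benchmark your own instance before trusting anyone's table, including mine. fio gives a quick 4k random-read figure, the access pattern that dominates image layer extraction:

# 4k random-read benchmark (writes a 1G scratch file in the current directory)
# (on CentOS 7 you may need EPEL first: sudo yum install -y epel-release)
sudo yum install -y fio
fio --name=randread --ioengine=libaio --direct=1 \
    --rw=randread --bs=4k --size=1G --numjobs=4 \
    --iodepth=32 --runtime=30 --time_based --group_reporting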

Compliance and the "Patriot Act" Fear

We cannot ignore the legal landscape. Since the Datatilsynet (the Norwegian Data Protection Authority) ramped up enforcement, clients are asking hard questions about where their data physically resides. While Privacy Shield is technically still in place, many legal experts in the EU are skeptical about its longevity given US surveillance laws.

By running your FaaS architecture on a Norwegian VPS, you bypass this headache entirely. You know exactly which physical drive your data sits on. You aren't sharing a kernel with a competitor, and you aren't routing traffic through a third-party API gateway that logs your payloads.

Final Thoughts

Serverless is a pattern, not a product. It allows developers to focus on code rather than plumbing. But abdicating control of your infrastructure to a hyperscaler comes with costs—both financial and technical.

Building your own FaaS cluster on CoolVDS gives you the best of both worlds: the developer velocity of serverless, with the raw power and compliance of dedicated Norwegian iron. Start small, measure your latencies, and never settle for slow I/O.

Ready to build your cluster? Spin up a High-Frequency NVMe instance in Oslo now and see the difference real hardware makes.