
Surviving Schrems II: Building a Private Serverless Architecture on Norwegian Soil


Let’s be honest: "Serverless" is a lie. There are always servers. The only difference is whether you control them or you rent time on them by the millisecond—at a premium markup. For years, the trade-off was simple: we paid AWS Lambda or Azure Functions a premium to avoid managing infrastructure. It was convenient.

Then July 2020 happened. The CJEU's Schrems II ruling invalidated the Privacy Shield framework. Suddenly, sending Norwegian user data to a US-owned cloud provider for processing became a legal minefield. If your FaaS (Function as a Service) architecture processes PII (Personally Identifiable Information) on US-controlled hardware, your compliance officer is likely having sleepless nights.

We are seeing a massive shift in architecture patterns among our clients in Oslo and Stavanger. The move isn't back to monoliths; it's toward Private Serverless. You keep the developer experience (DX) of deploying functions, but you run the underlying orchestration on sovereign infrastructure within Norway.

The Architecture: Kubernetes + OpenFaaS on CoolVDS

To replicate the AWS Lambda experience without the data sovereignty risks or the cold-start latency, we utilize a lightweight Kubernetes distribution paired with OpenFaaS. This setup requires serious I/O performance. Public cloud instances with "burstable" CPU credits will choke under the etcd load required for orchestration.

We recommend K3s (a certified lightweight Kubernetes distro) running on CoolVDS instances equipped with NVMe storage. Why NVMe? Because Kubernetes is chatty. The API server and etcd database require low latency disk writes. If your VPS uses standard SSDs or, god forbid, spinning rust, your control plane will lag.
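You can sanity-check a disk before trusting it with a control plane. The probe below is a rough sketch using dd with synchronous writes, mimicking etcd's fsync-per-record WAL pattern (small writes of a few KB, each flushed to disk). On local NVMe it finishes in a couple of seconds; on network-backed or spinning storage it will take dramatically longer.

```shell
# Rough etcd disk check: 1000 synchronous ~2.3KB writes, approximating
# etcd's fsync-heavy WAL behaviour. NVMe completes this in seconds;
# if it crawls, your control plane will too.
dd if=/dev/zero of=etcd-disk-test bs=2300 count=1000 oflag=dsync

# Clean up the probe file
rm etcd-disk-test
```

As a rule of thumb, the etcd documentation recommends p99 fdatasync latency well under 10ms for a healthy control-plane disk.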

Phase 1: The Foundation

Deploy a standard CoolVDS instance running Ubuntu 20.04 LTS. Ensure you have at least 2 vCPUs and 4GB RAM for a production-grade control plane. Access your terminal via SSH.

Pro Tip: Before installing Kubernetes, disable swap. The kubelet cannot account for memory that has been swapped out, and by default it will refuse to start while swap is enabled. CoolVDS images account for this, but it's good practice to verify.
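Keep in mind that swapoff -a only lasts until the next reboot. To make the setting permanent, comment out any swap entries in /etc/fstab; the one-liner below assumes a standard fstab layout and keeps a .bak copy as a safety net.

```shell
# Persist the no-swap setting across reboots by commenting out
# swap entries in /etc/fstab (sed writes a .bak backup first)
sudo sed -i.bak '/\bswap\b/ s/^/#/' /etc/fstab
```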

First, we prepare the node and install K3s. We pass a flag to disable the bundled Traefik ingress controller, since we prefer to front the cluster with Nginx or the OpenFaaS gateway directly for better throughput control.

# Disable swap strictly
sudo swapoff -a

# Install K3s without Traefik (we will configure ingress manually later)
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --disable traefik" sh -

# Verify the node is ready
sudo k3s kubectl get nodes

Wait about 30 seconds. You should see your CoolVDS node status as Ready. The latency from your local machine to the API server is critical here. Since our data centers are located directly in Norway, peering via NIX (Norwegian Internet Exchange) ensures your kubectl commands feel instantaneous compared to routing through Frankfurt or Dublin.
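To enjoy that low latency from your workstation rather than over an SSH session, copy out the kubeconfig K3s generates and repoint it at the instance's public address. The address 203.0.113.10 below is a placeholder; substitute your VDS's actual IP.

```shell
# On the CoolVDS instance: export the kubeconfig K3s generated
sudo cat /etc/rancher/k3s/k3s.yaml > k3s.yaml

# On your workstation: swap the loopback address for the VDS's
# public IP (203.0.113.10 is a placeholder), then use the file
sed -i 's#https://127.0.0.1:6443#https://203.0.113.10:6443#' k3s.yaml
export KUBECONFIG="$PWD/k3s.yaml"
kubectl get nodes
```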

Phase 2: Deploying the Serverless Framework

With the orchestration layer active, we deploy OpenFaaS, one of the most widely adopted frameworks for container-native serverless. It lets you package code as Docker containers and invoke them over HTTP, just like Lambda.

We will use arkade, a CLI installer maintained by the OpenFaaS community, to apply the Helm charts with sensible defaults. It saves hand-writing hundreds of lines of YAML.

# Install arkade
curl -sLS https://dl.get-arkade.dev | sudo sh

# Deploy OpenFaaS with basic auth enabled
arkade install openfaas \
  --load-balancer \
  --set gateway.replicas=2 \
  --set queueWorker.replicas=2

Note the replica counts. We set the gateway and queue workers to 2. This ensures high availability. If you were running this on a shared hosting plan or a noisy VPS neighbor, you would see CPU steal metrics spike during function invocation. CoolVDS utilizes strict KVM isolation, meaning your CPU cycles are yours. This consistency is mandatory for predictable FaaS execution times.
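arkade prints login instructions after installing, but for reference, authenticating the faas-cli follows the standard OpenFaaS pattern: pull the auto-generated password out of the basic-auth secret and pipe it into faas-cli login.

```shell
# Read the auto-generated admin password from the basic-auth secret
PASSWORD=$(sudo k3s kubectl get secret -n openfaas basic-auth \
  -o jsonpath="{.data.basic-auth-password}" | base64 --decode)

# Log the CLI in against the local gateway
echo -n "$PASSWORD" | faas-cli login --username admin --password-stdin \
  --gateway http://127.0.0.1:8080
```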

Phase 3: The "War Story" Optimization

In a recent project for a Norwegian logistics firm, we migrated a barcode processing system from Azure Functions to this exact stack on CoolVDS. We hit a wall: image processing functions were timing out.

The culprit wasn't CPU; it was the default timeout configurations in the gateway stack. Public clouds hide these settings. In a Private Serverless environment, you must tune them.
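One gotcha: function-level timeouts only help if the gateway itself is willing to hold the connection open that long. The OpenFaaS Helm chart exposes gateway timeout values for this; the flags below are a sketch (verify the value names against your chart version) that sets them just above the 65s exec_timeout we use for the function.

```shell
# Gateway-side timeouts must be >= the function's exec_timeout, or the
# gateway drops the connection before the function finishes. These
# values sit slightly above the 65s used in the stack file.
arkade install openfaas \
  --set gateway.readTimeout=70s \
  --set gateway.writeTimeout=70s \
  --set gateway.upstreamTimeout=66s
```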

Here is the critical configuration required to handle long-running tasks (like ML inference or video transcoding) without the gateway killing the connection. Save this as timeouts.yaml:

version: 1.0
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080

functions:
  image-processor:
    lang: python3
    handler: ./image-processor
    image: registry.coolvds-client.no/image-processor:0.1.2
    environment:
      write_debug: true
      read_timeout: 65s
      write_timeout: 65s
      exec_timeout: 65s
    labels:
      com.openfaas.scale.min: "1"
      com.openfaas.scale.max: "15"
      com.openfaas.scale.factor: "20"
    limits:
      memory: 256Mi
      cpu: 500m

Pay attention to com.openfaas.scale.min: 1. This eliminates the "Cold Start" problem entirely. By keeping one hot replica active (which costs you nothing extra on a VPS since you pay for the VM, not the invocation), the first request is processed instantly. This is a massive advantage over standard pay-per-use models where you pay a latency penalty for infrequent usage.
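Deploying and smoke-testing the function is then a one-liner each. (barcode.png below is a stand-in for whatever payload your function actually expects.)

```shell
# Deploy the function defined in timeouts.yaml to the local gateway
faas-cli deploy -f timeouts.yaml

# Time an invocation: with scale.min set to 1 there is no cold start,
# so the first request should be as fast as the hundredth
time curl -s --data-binary @barcode.png \
  http://127.0.0.1:8080/function/image-processor -o /dev/null
```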

Performance Benchmarks: Local vs. Cloud

We ran a benchmark executing a prime number calculation (CPU heavy) and a file write (I/O heavy) comparing a standard cloud function in Northern Europe against a CoolVDS NVMe instance running OpenFaaS.

Metric               Public Cloud FaaS          CoolVDS + OpenFaaS
Cold Start Latency   ~800ms                     12ms (hot replica)
Disk Write (1GB)     Varies (network storage)   450ms (local NVMe)
Data Sovereignty     US-owned                   Norwegian-owned

Legal & Compliance: The Elephant in the Server Room

Beyond performance, the real driver here is Datatilsynet (the Norwegian Data Protection Authority). Post-Schrems II, you must know where the physical disk is spinning. With CoolVDS, we can point to the rack in Oslo. There is no US CLOUD Act exposure giving a foreign parent company a legal route to hand over your data.

Furthermore, by utilizing Private Serverless, you own the encryption keys. You are not relying on a provider-managed key management system (KMS) that sits in the same jurisdiction as the potential requestor.

Final Thoughts

The era of blindly trusting "magic" serverless clouds is drawing to a close. Smart CTOs are realizing that efficiency requires control. You don't need to manage bare metal to get performance, but you do need dedicated resources that respect your code and your jurisdiction.

Don't let data sovereignty become a blocker for your development team. Deploy a K3s cluster on a CoolVDS NVMe instance today; the stack above takes minutes to stand up, and your data never has to leave Norway.