Private Serverless Architecture: Avoiding Vendor Lock-in with Self-Hosted FaaS in Norway

Let's dismantle the biggest lie in the cloud industry: "Serverless." It is a marketing term that implies infrastructure management has vanished. In reality, you are simply trading the headache of patching kernels for the migraine of opaque billing, cold starts, and, most dangerously for European businesses in 2022, vendor lock-in. For a Norwegian CTO or a Lead Architect, relying entirely on US-based hyperscalers for event-driven functions isn't just a technical decision; it's a legal minefield post-Schrems II. If your data touches a runtime managed by a US entity, even if that server is physically in Frankfurt, you are operating in a legal gray area regarding GDPR and Datatilsynet compliance.

I have seen too many engineering teams in Oslo paralyzed by "Lambda shock"—that moment when a simple image resizing function scales up, and the invoice scales significantly faster than the revenue. Furthermore, the round-trip latency from Norway to central European availability zones adds perceptible lag. If your users are in Bergen or Trondheim, why is your logic executing in Ireland? The "Private Serverless" pattern is the pragmatic answer. By leveraging container orchestration on high-performance infrastructure like CoolVDS, you gain the developer velocity of FaaS (Function as a Service) with the predictability and compliance of owned infrastructure.

The Architecture: K3s, OpenFaaS, and NVMe

To replicate the serverless experience without the overhead, we don't need a massive OpenShift cluster. In 2022, the gold standard for edge and lightweight orchestration is K3s combined with OpenFaaS. This stack allows you to push code, not containers, while maintaining complete control over the networking stack. However, this architecture is demanding on I/O. When a function scales from zero to one hundred replicas, your storage system gets hammered by container image pulls and overlay filesystem operations. This is where the underlying hardware matters. Using standard HDD or even SATA SSD VPS providers results in "noisy neighbor" latency spikes. You need the NVMe storage throughput and KVM isolation found in CoolVDS instances to make this viable in production.
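Before going further, it is worth sanity-checking the storage layer. A quick fio run (assuming fio is available via apt) benchmarks 4k random writes, the access pattern that image pulls and overlay filesystem churn generate:

# Benchmark 4k random writes; NVMe should sustain tens of thousands of IOPS
sudo apt install -y fio
fio --name=overlay-churn --ioengine=libaio --rw=randwrite --bs=4k \
    --size=1G --numjobs=4 --iodepth=32 --direct=1 \
    --runtime=60 --time_based --group_reporting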

Step 1: Kernel Tuning for High Concurrency

Before we touch Kubernetes, we must prepare the Linux kernel. A default installation is tuned for general-purpose workloads, not for the rapid creation and destruction of network namespaces that FaaS requires. On your CoolVDS node (running Ubuntu 20.04 LTS or Debian 11), adjust the sysctl parameters to raise the ARP (neighbor) cache thresholds and open-file limits.

# /etc/sysctl.d/99-serverless-tuning.conf

# Increase the limit of open files
fs.file-max = 2097152

# Optimize the neighbor table for high churn of pods
net.ipv4.neigh.default.gc_thresh1 = 4096
net.ipv4.neigh.default.gc_thresh2 = 8192
net.ipv4.neigh.default.gc_thresh3 = 16384

# Allow more connections to be handled
net.core.somaxconn = 32768
net.ipv4.tcp_max_syn_backlog = 16384

# Enable packet forwarding for container networking
net.ipv4.ip_forward = 1

Apply these changes with sysctl -p /etc/sysctl.d/99-serverless-tuning.conf. Skipping this step is the number one reason I see self-hosted clusters choke the first time they are load-tested.
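As a sanity check, read one of the values back after applying:

# Confirm the tuning took effect (expected value: 16384)
sysctl net.ipv4.neigh.default.gc_thresh3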

Step 2: The Orchestration Layer

We use K3s because it strips away the bloat of upstream Kubernetes; by default it replaces etcd with an embedded SQLite datastore, and for a production CoolVDS node I recommend the embedded etcd option or an external SQL datastore if you need HA. The installation is trivial, but disable the default Traefik ingress if you plan to run a custom Nginx ingress or lean heavily on OpenFaaS's own gateway.

curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --disable traefik" sh -
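Give the installer a minute, then confirm the node reports Ready. K3s bundles its own kubectl and writes its kubeconfig to /etc/rancher/k3s/k3s.yaml:

# Verify the cluster is up before installing anything on top of it
sudo k3s kubectl get nodes
sudo k3s kubectl get pods -n kube-system

# Optional: let plain kubectl (and arkade) find the cluster
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml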

Step 3: Deploying the Function Runtime

With the cluster running, we deploy OpenFaaS. It provides the API gateway, the queue worker (NATS Streaming), and the metrics collector (Prometheus). We will use arkade, a tool that simplifies Kubernetes app installation, which has become a staple in the DevOps toolkit over the last two years.

# Install arkade
curl -sLS https://get.arkade.dev | sudo sh

# Install OpenFaaS on the K3s cluster
arkade install openfaas \
  --load-balancer \
  --set gateway.directFunctions=true \
  --set queueWorker.ackWait=60s

Pro Tip: The directFunctions=true flag is crucial for performance. It lets the gateway resolve and call functions directly via their service endpoints instead of proxying every request through the provider, reducing latency significantly. This is only safe if your CoolVDS instance has the CPU headroom to handle bursts, which is another reason guaranteed resources trump "burstable" clouds.
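With the gateway running, fetch the generated admin credentials and log in with faas-cli. This is the standard OpenFaaS flow; the port-forward below assumes you have not yet exposed the gateway publicly:

# Install the CLI and authenticate against the gateway
curl -sLS https://cli.openfaas.com | sudo sh
kubectl port-forward -n openfaas svc/gateway 8080:8080 &
PASSWORD=$(kubectl get secret -n openfaas basic-auth \
  -o jsonpath="{.data.basic-auth-password}" | base64 --decode)
echo -n $PASSWORD | faas-cli login --username admin --password-stdin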

The Developer Experience: Writing the Handler

The beauty of this pattern is that your developers do not change their workflow. They write a handler, and the stack handles the Dockerization. Here is a typical Python handler for processing webhooks—a common use case for Norwegian e-commerce sites integrating with payment providers like Vipps.

# handler.py
import json
import os

def handle(req):
    """Handle a request to the function.
    Args:
        req (str): request body
    """

    try:
        payload = json.loads(req)
        # Simulate business logic, e.g., verifying a Vipps transaction
        transaction_id = payload.get("transaction_id")

        if not transaction_id:
            return json.dumps({"status": "error", "message": "Missing transaction_id"})

        # In a real scenario, you'd log this to a persistent DB
        # hosted on a separate CoolVDS instance via private networking.
        # The classic python3 template returns this value as the raw HTTP
        # body, so serialize to JSON explicitly instead of returning a dict.
        return json.dumps({
            "status": "success",
            "processed": transaction_id,
            "node": os.getenv("HOSTNAME")
        })

    except Exception as e:
        return json.dumps({"status": "error", "message": str(e)})

The definition file (stack.yml) ties it together. Notice the memory limits. One of the primary advantages of owning the infrastructure is that you can set these limits based on your actual hardware capacity, not arbitrary tiered pricing models.

version: 1.0
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080
functions:
  vipps-processor:
    lang: python3
    handler: ./vipps-processor
    image: registry.yourcompany.no/vipps-processor:latest
    limits:
      memory: 128Mi
    requests:
      memory: 64Mi
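Assuming the function was scaffolded with faas-cli new vipps-processor --lang python3, building, pushing, and deploying is a single command, and invocation goes through the gateway:

# Build, push, and deploy in one step
faas-cli up -f stack.yml

# Synchronous invocation
echo '{"transaction_id": "tx-123"}' | faas-cli invoke vipps-processor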

Securing the Gateway

You cannot expose the OpenFaaS gateway directly to the internet without a robust reverse proxy. While the default ingress works, an Nginx layer gives you easier management of SSL termination (Let's Encrypt) and rate limiting to protect against DDoS attacks. High-performance VPS hosting still requires a defense-in-depth strategy.

# /etc/nginx/sites-available/gateway
server {
    listen 80;
    server_name functions.yourcompany.no;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        
        # Buffer settings for large payloads
        client_max_body_size 50M;
        client_body_buffer_size 128k;
        
        # Timeouts for long-running synchronous functions
        proxy_connect_timeout 60s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;
    }
}
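The rate limiting mentioned above takes only two more directives. This is a minimal sketch; the 10 requests per second threshold is an assumption you should tune to your own traffic profile:

# In the http block of /etc/nginx/nginx.conf: one zone per client IP
limit_req_zone $binary_remote_addr zone=faas_limit:10m rate=10r/s;

# Inside the location / block above: absorb short bursts, reject the rest
limit_req zone=faas_limit burst=20 nodelay;

From there, TLS termination is a single certbot --nginx run with Let's Encrypt.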

Why Local Infrastructure Wins on Latency

Let's talk about physics. The speed of light is finite. If your target demographic is Norway, hosting your FaaS infrastructure on a server in Virginia or even Ireland introduces a latency floor you cannot optimize away. A ping from Oslo to a CoolVDS datacenter in Europe will significantly outperform a roundtrip to a US-based cloud function. For real-time applications—inventory checks, dynamic pricing, or API aggregation—milliseconds equal conversion rates.
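You can measure this yourself with curl's timing variables; the URL below is the example gateway from the Nginx config:

# Crude end-to-end latency probe for a synchronous invocation
curl -o /dev/null -s -w "connect: %{time_connect}s  total: %{time_total}s\n" \
  -d '{"transaction_id": "tx-123"}' \
  http://functions.yourcompany.no/function/vipps-processor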

Furthermore, running on dedicated KVM-based virtualization means you avoid the "noisy neighbor" effect common in shared hosting environments. When a neighboring tenant on a public cloud spikes their CPU usage, your functions shouldn't suffer. CoolVDS ensures resource isolation, meaning your compute power is yours alone. That predictability is essential when you are orchestrating hundreds of event-driven functions.

Conclusion

Building a private serverless platform in 2022 is not just about saving money; it is about reclaiming architectural sovereignty. It allows you to comply with strict data residency laws, reduce latency for your Nordic user base, and avoid the vendor lock-in traps that stifle innovation. You get the developer experience of the cloud with the control of bare metal.

Don't let slow I/O or legal ambiguity kill your project. Deploy a high-performance NVMe instance on CoolVDS today and build a serverless architecture that you actually own.