Serverless Patterns in 2023: Escaping Vendor Lock-in with Hybrid Architectures

There is a dangerous misconception circulating in CTO circles from Oslo to Trondheim: that "Serverless" is synonymous with "AWS Lambda" or "Azure Functions." It is not. Serverless is an operational model, not a product SKU. And in January 2023, relying exclusively on US hyperscalers for your event-driven architecture is becoming a liability—both financially and legally.

We have all seen the bills. A startup scales its image processing microservice, and suddenly the monthly invoice jumps from $50 to $2,000 because of execution time limits and API gateway overage charges. Add to this the lingering headache of Schrems II and the scrutiny from Datatilsynet regarding data transfer to US-owned entities, and the "fully managed" cloud starts looking less like a dream and more like a compliance trap.

The pragmatic architecture for 2023 isn't about abandoning serverless; it's about owning it. By decoupling the FaaS (Function as a Service) layer from the underlying infrastructure, we gain cost predictability, raw NVMe performance, and total data sovereignty.

The Architecture: Self-Hosted FaaS on K8s

The most robust pattern we are seeing deployed this year involves running a lightweight Kubernetes distribution (such as K3s) on high-performance VPS instances, orchestrated by an open-source FaaS framework like OpenFaaS or Knative. This lets you define functions as standard Docker containers.
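
To make the pattern concrete, here is a minimal OpenFaaS stack.yml sketch. The function name, image, and gateway URL are placeholders for illustration, not values from a real deployment:

provider:
  name: openfaas
  gateway: http://127.0.0.1:8080  # assumes a locally reachable gateway; point at yours

functions:
  resize-image:
    lang: node18
    handler: ./resize-image
    image: registry.example.com/resize-image:0.1.0  # hypothetical registry and tag
    limits:
      memory: 256Mi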

Why do this? Cold starts. On public cloud, you are at the mercy of their scheduler. On a dedicated KVM slice with CoolVDS, you control the oversubscription. You can keep your containers warm without paying "provisioned concurrency" premiums.
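
OpenFaaS exposes keep-warm behavior through labels on the function itself. A sketch, reusing the hypothetical resize-image function from above:

functions:
  resize-image:
    labels:
      com.openfaas.scale.min: "1"   # never scale below one warm replica
      com.openfaas.scale.max: "10"  # cap horizontal scaling under load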

Implementation: The Core Stack

Let's look at a real-world deployment script. We assume you are running a CoolVDS instance with Debian 11 or Ubuntu 22.04. First, we establish a lightweight cluster backbone using K3s. It removes the bloat of full K8s, which is critical when we want our CPU cycles focused on function execution, not cluster management.

# 1. Install K3s (Lightweight Kubernetes)
curl -sfL https://get.k3s.io | sh -

# 2. Check node status
sudo k3s kubectl get node

# 3. Point kubectl and arkade at the K3s kubeconfig
#    (run as root, or copy the file somewhere your user can read)
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

# 4. Install arkade (OpenFaaS marketplace installer)
curl -sLS https://get.arkade.dev | sudo sh

# 5. Deploy OpenFaaS to the cluster
arkade install openfaas
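
After the install completes, arkade prints instructions for retrieving the generated admin credentials. A typical login sequence looks like this, assuming the default openfaas namespace:

# Install the faas-cli client (arkade prints PATH instructions)
arkade get faas-cli

# Fetch the generated admin password from the basic-auth secret
PASSWORD=$(sudo k3s kubectl get secret -n openfaas basic-auth \
  -o jsonpath="{.data.basic-auth-password}" | base64 -d)

# Expose the gateway locally, give it a moment, then log in
sudo k3s kubectl port-forward -n openfaas svc/gateway 8080:8080 &
sleep 3
echo "$PASSWORD" | faas-cli login --username admin --password-stdin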

Once the framework is up, the focus shifts to the function gateway. The default timeouts in many ingress controllers are too aggressive for heavy data processing typical in Nordic fintech or energy sector workloads. You need to tune the gateway deployment.

Here is a critical configuration often missed in the values.yaml for the OpenFaaS gateway, specifically adjusting the read/write timeouts to handle long-running synchronous functions:

gateway:
  # Extend timeouts for heavy processing tasks
  readTimeout: "60s"
  writeTimeout: "60s"
  upstreamTimeout: "55s"  # keep below read/write so the gateway times out first
  replicas: 2

# Tune the queue worker for high async throughput.
# Note: queueWorker is a top-level key in the chart, not nested under gateway.
queueWorker:
  ackWait: "60s"
  maxInflight: 150

The "Hybrid Event Pump" Pattern

A purely self-hosted approach works for steady loads, but what about massive spikes? This is where the Hybrid Event Pump comes in. In this pattern, your core business logic resides on your CoolVDS NVMe instances in Oslo (low latency, compliant), while you use a public cloud hook only as an ingestion funnel during black swan events, piping data back to your secure core via queues like NATS or RabbitMQ.
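
Here is what the receiving side of the pump can look like: a minimal Node 18 sketch using the nats.js client, where the NATS URL, subject name, and function name are all illustrative assumptions:

'use strict'

const { connect, StringCodec } = require('nats');

async function main() {
  // Connect to the NATS server on the secure core (URL is illustrative)
  const nc = await connect({ servers: 'nats://10.0.0.5:4222' });
  const sc = StringCodec();

  // Drain the ingestion subject that the public-cloud funnel publishes to
  const sub = nc.subscribe('ingest.events');
  for await (const msg of sub) {
    // Hand each event to OpenFaaS via the gateway's async invocation endpoint;
    // payloads are assumed to be JSON strings
    await fetch('http://127.0.0.1:8080/async-function/process-event', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: sc.decode(msg.data),
    });
  }
}

main().catch((err) => {
  console.error('Event pump failed', err);
  process.exit(1);
});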

Pro Tip: When running queues on virtualized hardware, disk I/O is usually the bottleneck. Standard HDD VPS setups will choke under high message ingestion rates. We strongly recommend NVMe storage for the persistence layer of NATS/Kafka. If `iowait` climbs above 5%, your functions will time out regardless of CPU power.
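
Verifying this takes seconds. iostat, from the sysstat package, reports %iowait alongside per-device latency and utilization:

# Install sysstat if it is missing (Debian/Ubuntu)
sudo apt-get install -y sysstat

# %iowait in the CPU summary plus per-device stats, refreshed every 5 seconds
iostat -x 5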

Optimizing the Runtime: Node.js 18

With Node.js 18 recently promoted to Active LTS (October 2022), we have access to the global fetch API and improved V8 performance. However, memory limits in containerized functions are tricky: if the V8 heap grows past the container's cgroup limit, the kernel OOM-kills the process outright, with no chance for graceful degradation.
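
One pragmatic guard, sketched against the hypothetical stack.yml from earlier, is to cap V8's old-space heap comfortably below the container limit, so the process dies with a visible V8 heap error in the logs rather than a silent kernel OOM kill:

functions:
  process-event:
    lang: node18
    handler: ./process-event
    image: registry.example.com/process-event:0.1.0  # hypothetical image
    limits:
      memory: 256Mi
    environment:
      # Keep the V8 heap roughly 20-25% below the cgroup limit to leave
      # headroom for buffers, sockets, and native allocations
      NODE_OPTIONS: "--max-old-space-size=192"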

Here is how a battle-tested handler.js should look to respect memory boundaries while maintaining keep-alive connections for database pooling (essential for SQL interaction):

'use strict'

const pg = require('pg');
// Initialize the pool outside the handler so connections stay warm
// across invocations served by the same container
const pool = new pg.Pool({
  connectionString: process.env.DATABASE_URL,
  max: 10, // keep this below your PostgreSQL max_connections budget
  idleTimeoutMillis: 30000
});

module.exports = async (event, context) => {
  const client = await pool.connect();
  try {
    const start = performance.now();
    // The OpenFaaS Node template parses event.body as JSON when the
    // request's Content-Type is application/json
    const res = await client.query('SELECT * FROM audit_logs WHERE id = $1', [event.body.id]);
    const duration = performance.now() - start;

    return context
      .status(200)
      .headers({ 'X-Duration-Debug': duration.toFixed(2) }) // send header values as strings
      .succeed(res.rows[0]);
  } catch (err) {
    console.error('Transaction failed', err);
    return context.fail(err);
  } finally {
    client.release();
  }
}

Cost and Compliance Comparison

For a Norwegian company processing personal data (GDPR), the location of the physical server is not a trivial detail. It is a legal requirement. Here is how the models stack up:

Feature | Public Cloud FaaS | Self-Hosted (CoolVDS)
Data Residency | Opaque (region selection helps, but the US CLOUD Act applies) | Transparent (Oslo datacenter)
Cost Model | Per request + GB-seconds (unpredictable) | Flat monthly rate (predictable)
Cold Starts | Variable (100 ms to 2 s) | Near zero (full control over keep-warm)
Hardware Access | Abstracted | Direct KVM/NVMe access

The Infrastructure Reality Check

Running your own FaaS platform requires infrastructure that doesn't steal CPU cycles. In cheap shared hosting environments, "neighbor noise" causes function execution variance. If your neighbor mines crypto, your API latency spikes.

This is where the distinction between container-based VPS (OpenVZ/LXC) and hardware virtualization (KVM) becomes critical. For Docker-inside-VPS architectures, you need KVM. CoolVDS provides KVM instances with NVMe storage as standard. The high IOPS of NVMe are non-negotiable when pulling multiple container layers simultaneously during a function update.
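
You can check for noisy neighbours from inside the guest itself; the steal-time counters show how often the hypervisor withholds CPU from your instance:

# The 'st' (steal) column should sit at or near 0 on a healthy KVM slice
vmstat 1 5

# The same figure appears as %st in top's Cpu(s) line
top -bn1 | grep "Cpu(s)"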

Monitoring the Stack

You cannot fix what you cannot measure. In a distributed FaaS environment, you need centralized logging. While SaaS tools exist, a self-hosted Loki + Grafana stack keeps the data local. Here is a snippet for the Prometheus config to scrape OpenFaaS metrics specifically:

scrape_configs:
  - job_name: 'openfaas-endpoints'
    kubernetes_sd_configs:
    - role: endpoints
      namespaces:
        names:
          - openfaas
    relabel_configs:
    - action: labelmap
      regex: __meta_kubernetes_service_label_(.+)
    - source_labels: [__meta_kubernetes_namespace]
      action: replace
      target_label: kubernetes_namespace
    - source_labels: [__meta_kubernetes_service_name]
      action: replace
      target_label: kubernetes_name

This configuration ensures that as you scale functions up and down, Prometheus automatically discovers the new pods and scrapes their execution metrics.
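
Once the metrics are flowing, the gateway's gateway_function_invocation_total counter makes error-rate alerting straightforward. A sketch of a Prometheus alerting rule, placed in a separate file referenced by rule_files; the threshold and rule names are assumptions to tune for your workload:

groups:
  - name: openfaas-functions
    rules:
      - alert: FunctionErrorRateHigh
        # Fire when a function returns 5xx responses at more than
        # 0.1 req/s sustained over five minutes
        expr: rate(gateway_function_invocation_total{code=~"5.."}[5m]) > 0.1
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Function {{ $labels.function_name }} is returning 5xx errors"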

Conclusion

Serverless architecture is maturing. In 2023, we are moving past the "hype" phase, when everything got shoved into Lambda, and into the "pragmatic" phase, where developer velocity is balanced against cost control and data sovereignty.

If you are building for the Nordic market, latency to Oslo and GDPR compliance are your primary constraints. A self-hosted FaaS cluster on dedicated-performance KVM slices offers the best of both worlds: the developer experience of serverless with the economic stability of bare-metal.

Don't let cloud billing algorithms dictate your architecture. Spin up a KVM instance on CoolVDS today, deploy K3s, and take back control of your stack.