Serverless Architecture Patterns: Avoiding Vendor Lock-in with Self-Hosted FaaS

There is a dangerous misconception circulating in boardrooms across Oslo right now: "Serverless means we don't need operations anymore."

As a CTO, I look at Total Cost of Ownership (TCO). AWS Lambda or Azure Functions offer an attractive entry point for prototyping, but their cost curve crosses that of traditional infrastructure surprisingly quickly once you hit significant scale. Furthermore, for Norwegian businesses operating under strict GDPR mandates and Datatilsynet scrutiny, shipping data to a black-box execution environment in Frankfurt (or worse, the US) is a compliance nightmare waiting to happen.

Serverless is not a place; it is a workflow. It is an architectural pattern, not a billing model. In 2019, the most robust way to implement this pattern for high-traffic applications is not purely public cloud functions, but a Hybrid FaaS (Function-as-a-Service) model running on your own high-performance infrastructure.

The Latency & Lock-in Problem

When you commit to a provider-specific serverless ecosystem, you aren't just buying compute; you are buying into their proprietary event triggers, their IAM roles, and their latency limitations. If your users are in Norway, routing traffic through a hyperscaler's load balancer adds unnecessary milliseconds.

More critically, the "Cold Start" problem in public clouds is often outside your control. You cannot tune the kernel. You cannot swap the virtualization driver. You take what you are given.

By shifting to a self-hosted FaaS model using tools like OpenFaaS on top of KVM-based VPS instances, you regain control over the entire stack—from the OS kernel to the function execution time—while keeping your data on Norwegian soil.

Pattern 1: The "Iron-FaaS" (OpenFaaS on Docker Swarm/K8s)

This pattern involves deploying an open-source FaaS framework on a cluster of sturdy VPS nodes. We prefer OpenFaaS because of its container-centric design. It allows you to package any binary as a function, avoiding the runtime limitations of Lambda (e.g., waiting for Node 12 support).
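
As a quick illustration (the function name is ours, and this assumes faas-cli is installed and pointed at a running gateway, which we set up below), the generic dockerfile template lets you wrap any binary:

# Scaffold a function from the generic Dockerfile template - any binary or runtime fits
$ faas-cli new video-transcode --lang dockerfile

# Build the image and deploy it to your own gateway (address from the Swarm example below)
$ faas-cli build -f video-transcode.yml
$ faas-cli deploy -f video-transcode.yml --gateway http://10.10.0.1:8080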

For a production-ready cluster in Norway, we typically provision three CoolVDS instances connected via a private network. Why CoolVDS? Because OpenFaaS is I/O-intensive during image pulls and function scaling, and an NVMe storage backend is effectively mandatory if you want to keep function startup times (cold starts) under 200 ms.

Implementation Strategy

First, initialize a Docker Swarm (simpler than Kubernetes for small to medium setups, and very stable in Docker 19.03):

# On the Manager Node (CoolVDS Instance 1)
$ docker swarm init --advertise-addr 10.10.0.1

# On Worker Nodes (CoolVDS Instances 2 & 3)
$ docker swarm join --token SWMTKN-1-xx... 10.10.0.1:2377

Next, deploy the OpenFaaS stack. For stability, make sure the gateway service is constrained to run on the manager node; the snippet after the verification step shows how to add that placement constraint if your compose file lacks it.

$ git clone https://github.com/openfaas/faas
$ cd faas && ./deploy_stack.sh

# Verify the gateway is responding
$ docker service ls | grep gateway
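
If the compose file shipped with the repo does not already pin the gateway to the manager role, you can add the constraint after deployment. This is a sketch assuming the default stack name func:

# Pin the gateway service to the Swarm manager node (stack name "func" assumed)
$ docker service update --constraint-add 'node.role == manager' func_gateway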

This setup gives you a full serverless environment where you control the timeout limits. Need a function to run for 15 minutes processing video? You can configure that. Try doing that on the free tier of a public cloud.
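
For example, to allow a 15-minute video job, raise the gateway timeouts and deploy the function with a matching execution timeout. The values below are illustrative; the environment variable names are the ones documented for the OpenFaaS gateway and watchdog:

# Raise the gateway timeouts (Go duration strings; service name assumes stack "func")
$ docker service update func_gateway \
    --env-add read_timeout=15m \
    --env-add write_timeout=15m \
    --env-add upstream_timeout=14m30s

# Deploy the function with a matching watchdog execution timeout
$ faas-cli deploy -f video-transcode.yml \
    --env read_timeout=15m --env write_timeout=15m --env exec_timeout=15m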

Pattern 2: The Database Connection Proxy

One of the most painful lessons in serverless architecture is the "Connection Storm." If you scale to 1,000 concurrent functions and each one opens its own connection to PostgreSQL, you blow straight past the default max_connections limit of 100, and every connection is a separate backend process eating memory. Your database will fall over; it is not a matter of if, but when.

Public clouds are slowly introducing proxies (like the recently announced RDS Proxy preview), but they are expensive and opaque. The pragmatic solution is running a dedicated connection pooler on a VPS that sits between your FaaS cluster and your database.

Pro Tip: Do not expose your database directly to the internet. Use a private network (VLAN) between your FaaS nodes and your database node to reduce latency and eliminate attack vectors. On CoolVDS, this internal traffic is unmetered.
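
A minimal firewall sketch for the database node, assuming eth1 is the private interface and 10.10.0.0/24 is your internal subnet (adjust both to your VLAN layout):

# PostgreSQL reachable only over the private VLAN
$ ufw allow in on eth1 from 10.10.0.0/24 to any port 5432 proto tcp
$ ufw deny in on eth0 to any port 5432 proto tcp
$ ufw enable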

We use PgBouncer for this. It is lightweight and can handle thousands of incoming connections while maintaining a small pool of actual connections to the database.

; /etc/pgbouncer/pgbouncer.ini
[databases]
; Forward every database name to the local PostgreSQL instance.
; If PgBouncer runs on a separate VPS, point this at the DB node's private IP instead.
* = host=127.0.0.1 port=5432

[pgbouncer]
listen_port = 6432
listen_addr = *
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; Release server connections back to the pool as soon as a transaction ends
pool_mode = transaction
; Accept up to 10,000 client connections from your functions...
max_client_conn = 10000
; ...while holding only 20 real connections open to PostgreSQL
default_pool_size = 20

By setting pool_mode = transaction, you allow functions to reuse connections immediately after a transaction commits, significantly increasing throughput for high-concurrency workloads typical in serverless environments.
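
To wire this up, add the function's database user to the auth file and point your functions at port 6432 instead of 5432. The user, password, and the pooler's private IP below are placeholders:

# userlist.txt entries are: "username" "md5" + md5(password concatenated with username)
$ echo -n 's3cretfaas_user' | md5sum
$ vi /etc/pgbouncer/userlist.txt    # add: "faas_user" "md5<hash from the command above>"

# Functions connect to PgBouncer (6432), never directly to PostgreSQL (5432)
$ psql "host=10.10.0.4 port=6432 dbname=app user=faas_user" -c 'SELECT 1;'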

Pattern 3: The Hybrid Egress Gateway

A major limitation of public cloud functions is the lack of static outgoing IP addresses. If you need to connect to a partner API (e.g., a banking interface or legacy ERP in Oslo) that requires IP whitelisting, you are often forced into expensive NAT Gateway setups.

The cost-effective alternative is the Hybrid Egress pattern. You run your functions anywhere (even public cloud), but route traffic destined for secured APIs through a strictly secured Proxy VPS.

Here is a hardened Squid configuration to act as a whitelist-only gateway:

# /etc/squid/squid.conf
acl localnet src 10.0.0.0/8     # Accept traffic from your internal VPN/VPC
acl allowed_domains dstdomain .bank-api.no .partner-service.com

http_access allow localnet allowed_domains
http_access deny all

# Hide the fact that we are proxying
forwarded_for off
request_header_access Via deny all

This setup provides a single, static IP (your CoolVDS IP) for whitelisting, without the markup of "Enterprise" cloud add-ons.
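
On the client side, nothing exotic is required: functions simply set the standard proxy environment variable (the proxy's private IP below is an example; 3128 is Squid's default port):

# Send outbound calls through the egress VPS; only whitelisted domains get out
$ export https_proxy=http://10.0.0.10:3128
$ curl -I https://bank-api.no/          # allowed by the allowed_domains ACL
$ curl -I https://example.org/          # rejected by "http_access deny all"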

Why Infrastructure Matters

The abstraction of "serverless" leaks the moment you hit a hardware limit. Disk I/O is the most common bottleneck. When a function container starts, it must extract layers from the disk. If your underlying host is running on spinning rust (HDD) or shared SATA SSDs with noisy neighbors, your wake-up times will be inconsistent.

This is why we architect on CoolVDS. The guarantee of NVMe storage means that the I/O wait times are negligible. In benchmarks comparing standard cloud instances to NVMe-backed KVM VPS, we see container start times improve by up to 40% purely due to disk throughput.
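
If you want to verify this on your own nodes rather than take our word for it, a quick fio run against the Docker storage path gives a representative picture (paths and sizes are examples):

# Random 4k reads against the image-layer path, roughly what a container start generates
$ fio --name=coldstart-io --directory=/var/lib/docker \
      --rw=randread --bs=4k --size=1G --numjobs=4 \
      --ioengine=libaio --direct=1 --runtime=30 --time_based --group_reporting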

Feature        | Public Cloud FaaS        | Self-Hosted (CoolVDS)
Timeout Limit  | ~15 minutes (hard limit) | Unlimited
Data Location  | Usually Germany/Ireland  | Norway (Local Compliance)
Cost at Scale  | Linear / Expensive       | Flat (Fixed VPS cost)
Cold Start     | Variable/Unpredictable   | Tunable (KVM/NVMe)

Conclusion

Serverless is a powerful paradigm for decoupling application logic, but it should not decouple you from common sense regarding costs and data sovereignty. By adopting tools like OpenFaaS or Knative on top of robust, local infrastructure, you gain the developer velocity of serverless without handing the keys to your kingdom to a hyperscaler.

If you are building the next generation of fintech or e-commerce in the Nordics, control your stack. Ensure your "serverless" architecture is actually running on servers that can handle the load.

Ready to build a compliant, high-performance FaaS cluster? Deploy your first NVMe-backed instance on CoolVDS today and experience the difference raw I/O power makes.