Serverless Architecture Patterns: The Hype, The Reality, and The Hybrid Fix
It is May 2017. The tech conferences in Oslo are buzzing with one word: Serverless. The promise is seductive—upload your code, forget the infrastructure, and pay only for the milliseconds you use. If you believe the marketing brochures from the big US cloud vendors, managing your own servers is becoming a relic of the past, like SVN or Flash.
But I have been in the trenches long enough to know that there is no such thing as "No Ops." There is only "Different Ops." And when you are building mission-critical systems for the Norwegian market, blindly adopting AWS Lambda or Azure Functions introduces two massive headaches: Latency and Lock-in.
This isn't a rant against progress. It is a guide on how to implement event-driven, "serverless" patterns without losing control of your data or your performance metrics. Let's look at how we can achieve the agility of functions while retaining the raw power of a KVM-based VPS.
The Latency Trap: Oslo vs. Frankfurt
Physics is stubborn. If your users are in Norway and your "serverless" functions are firing up in a data center in Frankfurt (eu-central-1) or Ireland (eu-west-1), you are adding network hops. For a background image-processing job, nobody cares. For a real-time banking API or a high-traffic e-commerce checkout, every chained call pays the Frankfurt round trip, and those milliseconds pile up fast.
Pro Tip: Run mtr (My Traceroute) from a standard Telenor or Altibox connection to AWS Frankfurt. Then run it to a CoolVDS instance in Oslo. The difference isn't just speed; it's consistency. Jitter kills user experience faster than raw latency.
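A quick way to run that comparison (the Oslo hostname below is a placeholder; any endpoint in the target region works):

# 100-cycle report mode; compare the Avg and StDev columns per hop
mtr --report --report-cycles 100 ec2.eu-central-1.amazonaws.com
mtr --report --report-cycles 100 your-instance.oslo.example.com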
Pattern 1: The "Heavy Lifter" Hybrid
The most robust architecture I am seeing deployed in 2017 involves a hybrid approach. You keep your stateful, high-I/O components (Databases, Core API monoliths) on dedicated, high-performance VPS instances, and use FaaS (Function as a Service) only for the "glue" logic.
However, running a database on a transient container or a serverless platform is professional suicide. You need guaranteed IOPS.
The Configuration
On a CoolVDS NVMe instance, we configure the core application (let's say, a Django or Laravel app) to handle the synchronous HTTP requests. We offload heavy tasks to a local queue (Redis) processed by worker nodes. This mimics serverless scalability without the cold-start penalty.
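As a minimal sketch of the queue mechanics (the key name and payload here are invented for illustration), the web app pushes a job onto a Redis list and a worker blocks on the other end:

# Producer (web app): enqueue an image-resize job
redis-cli LPUSH queue:resize '{"src":"/uploads/photo.jpg","width":800}'
# Consumer (worker node): block until a job arrives, then process it
redis-cli BRPOP queue:resize 0

In production you would drive this through your framework's queue library (Celery for Django, Queues for Laravel) rather than raw redis-cli, but the data flow is the same.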
Here is how you tune Redis on a Linux VPS to handle high-throughput event queues without choking on disk persistence:
# /etc/sysctl.conf — optimizations for a high-load Redis host
vm.overcommit_memory = 1
net.core.somaxconn = 1024
# Apply without rebooting: sysctl -p

# Disable Transparent Huge Pages (THP); it causes latency spikes in Redis.
# Add to /etc/rc.local so it survives reboots:
echo never > /sys/kernel/mm/transparent_hugepage/enabled
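After a reboot, verify the THP setting stuck; the active value is the one in brackets:

cat /sys/kernel/mm/transparent_hugepage/enabled
# Expected: always madvise [never]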
Pattern 2: The "Self-Hosted" Serverless (Docker Swarm)
If you love the developer experience of "git push deploy" but hate the idea of your data leaving Norwegian soil (thanks, Datatilsynet), the best pattern in 2017 is running your own FaaS layer using Docker Swarm. It is simpler than Kubernetes 1.6 for small teams and incredibly stable.
We can use open-source tools like the FaaS stack (which has been gaining traction recently) to deploy functions on our own CoolVDS infrastructure.
Deploying the Stack
First, ensure you are running Docker 17.03 or later. Initialize Swarm on your primary VPS:
$ docker swarm init --advertise-addr $(hostname -i)
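That command prints a join token for workers. If you want to add capacity later from a second instance, regenerate the exact join command on the manager:

$ docker swarm join-token worker
# Prints the `docker swarm join --token <token> <manager-ip>:2377` line
# to run on each new worker node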
Now, define a stack file func_stack.yml. We will use a simple overlay network to route traffic between the gateway and the functions.
version: "3"
services:
  gateway:
    image: functions/gateway:0.7.0
    ports:
      - "8080:8080"
    networks:
      - functions_net
    # The gateway drives the Docker API to discover and scale functions,
    # which is why it needs the manager's socket mounted
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    deploy:
      placement:
        constraints:
          - 'node.role == manager'
  # A sample function for image resizing
  resizer:
    image: functions/resizer:latest
    networks:
      - functions_net
    deploy:
      replicas: 3
      restart_policy:
        condition: on-failure
networks:
  functions_net:
    driver: overlay
Deploy this to your swarm:
$ docker stack deploy -c func_stack.yml func_lab
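Give Swarm a moment to pull the images, then confirm both services converged (Swarm prefixes service names with the stack name):

$ docker service ls
# Expect func_lab_gateway at 1/1 and func_lab_resizer at 3/3 under REPLICAS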
This setup gives you the auto-healing properties of serverless. If a resizer container crashes, Swarm respawns it instantly. If you need more capacity, you scale the replicas. Crucially, you control the hardware.
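Scaling that capacity is a one-liner, using the same <stack>_<service> naming:

$ docker service scale func_lab_resizer=10
# Swarm spreads the new replicas across whatever nodes have headroom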
The I/O Bottleneck: Why Hardware Matters
Whether you run authentic serverless or a containerized simulation, your bottleneck in 2017 is almost always disk I/O. When 50 containers write logs simultaneously or all pull images at once, a standard SATA SSD saturates: the queue depth climbs and your processes stall in iowait while the CPU sits idle.
This is where the underlying infrastructure becomes the "silent killer" of architecture. Most budget VPS providers throttle IOPS. At CoolVDS, we use NVMe storage arrays passed through via KVM. When you run docker pull, you aren't waiting on a noisy neighbor.
Benchmarking Disk Latency for Containers
Don't take my word for it. Run fio on your current host. If you are seeing random write IOPS below 10k, your "serverless" architecture will feel sluggish.
fio --name=randwrite --ioengine=libaio --iodepth=1 --rw=randwrite --bs=4k --direct=1 --size=512M --numjobs=2 --runtime=240 --group_reporting
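While fio hammers the disk, watch the device from a second terminal (iostat ships in the sysstat package; the relevant column is await, or r_await/w_await on newer versions):

iostat -x 1
# High await with modest IOPS is the classic signature of a throttled or shared disk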
Security and Compliance (GDPR is Coming)
We all know the General Data Protection Regulation (GDPR) enforcement date: 25 May 2018, less than a year away. Architects need to be paranoid today. When you use a public cloud FaaS, you are often ceding control over exactly where your data is processed. "Region" is a broad term.
By hosting your event-driven architecture on a Norwegian VPS, you simplify your compliance map drastically. You know the rack, you know the datacenter, and you know the jurisdiction. For clients dealing with sensitive health or financial data in Oslo, this isn't optional—it is a requirement.
Summary: Pragmatism Wins
Serverless patterns—decoupling logic, event-driven flows, ephemeral compute—are brilliant. But renting them from a mega-vendor isn't the only way to use them.
By leveraging Docker Swarm and high-speed NVMe VPS instances, you can build a system that is just as agile but significantly faster and more predictable. You avoid the cold starts, you avoid the US-cloud latency penalty, and you keep your budget flat.
Ready to build? Don't let slow I/O kill your Docker swarm. Spin up a high-performance NVMe instance on CoolVDS today and deploy your stack in under 60 seconds.