Serverless Patterns for the Control Freak
It is December 2016, and the industry is losing its collective mind over "Serverless." AWS Lambda has matured, Azure Functions is catching up, and the prevailing narrative is that managing servers is a relic of the past. The promise is seductive: write code, upload it, and never patch a kernel again.
I have spent the last six months migrating a high-traffic media processing pipeline from a legacy monolith to a pure FaaS (Function-as-a-Service) architecture. I am here to tell you that the "No Ops" dream is a lie. While event-driven architectures are brilliant, abdicating control of your infrastructure comes with a steep price tag—both in monthly bills and milliseconds of latency.
If you are serving customers in Oslo or Stavanger, routing traffic through a public cloud function in Frankfurt or Ireland introduces unnecessary round-trip time. Here is how we build "Serverless" behavior without losing the raw power of dedicated resources.
The Cold Start Problem and The "Cloud Bill" Shock
The biggest issue we face with public FaaS providers is the "cold start." If your function hasn't been invoked in the last few minutes, the provider has to spin up a container, load your runtime, and only then execute your code. For a Python or Node.js worker, that might add 200ms. For Java? It can be seconds.
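You can watch this happen yourself. The following is a hypothetical Lambda handler in Python, not part of our pipeline: statements at module level only run when the provider creates a fresh container, so the flag below tells you whether an invocation was cold or warm.

# handler.py - hypothetical function for observing cold starts.
# Module-level code runs once per container, so COLD_START is only
# True on the first invocation after a new container spins up.
import time

COLD_START = True
CONTAINER_BORN = time.time()

def handler(event, context):
    global COLD_START
    was_cold, COLD_START = COLD_START, False
    return {
        "cold_start": was_cold,
        "container_age_seconds": round(time.time() - CONTAINER_BORN, 1),
    }

Invoke it twice in quick succession and the second call is warm; leave it idle long enough and you are back to paying the startup tax.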
Furthermore, FaaS is billed per invocation by execution time and allocated memory. I recently audited a client who moved their image resizing queue to Lambda. They were processing 500,000 images a month, and their bill was 3x higher than what two robust VPS instances running a simple Celery worker queue had cost them.
Pattern 1: The "Self-Hosted Serverless" (Worker Pattern)
You can achieve the scalability and decoupling of serverless without the vendor lock-in. The pattern is simple: Message Broker + Containerized Workers.
By using Docker (which has stabilized significantly with version 1.12+ and Swarm Mode this year), we can deploy worker environments that consume tasks from a queue. This gives you predictable pricing (the fixed cost of the VPS) and instant execution (warm containers).
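The application code inside those containers stays small. Here is a minimal sketch of such a worker, assuming Celery on top of the RabbitMQ broker configured below; the task name, arguments, and retry policy are illustrative, not lifted from our production pipeline.

# tasks.py - minimal worker sketch, assuming Celery + RabbitMQ.
# The resize_image task is a placeholder for your real workload.
import os
from celery import Celery

app = Celery("media", broker=os.environ.get(
    "BROKER_URL", "amqp://guest:guest@localhost:5672//"))

@app.task(bind=True, max_retries=3, acks_late=True)
def resize_image(self, source_path, width, height):
    """Do the real image work here (Pillow, ImageMagick, ...)."""
    try:
        # ... resize and write the output file ...
        return {"path": source_path, "size": [width, height]}
    except Exception as exc:
        # Put the message back with a short delay instead of losing it.
        raise self.retry(exc=exc, countdown=10)

Each container simply runs celery -A tasks worker. Your web tier calls resize_image.delay(path, 800, 600) and returns immediately: the same fire-and-forget semantics a FaaS trigger gives you, minus the cold start.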
The Infrastructure Stack
- Broker: RabbitMQ or Redis (for lower latency).
- Compute: CoolVDS NVMe Instances (Docker Hosts).
- Orchestration: Docker Swarm or Ansible.
Configuration: The Worker Setup
Here is a battle-tested docker-compose.yml (version 2 syntax) for setting up a robust worker environment. Note the restart policy on the worker: crashed containers come straight back.
version: '2'

services:
  rabbitmq:
    image: rabbitmq:3.6-management
    ports:
      - "5672:5672"     # AMQP
      - "15672:15672"   # management UI
    environment:
      RABBITMQ_DEFAULT_USER: admin
      RABBITMQ_DEFAULT_PASS: ${RABBITMQ_PASS}
    networks:
      - backend

  worker:
    build: ./worker-app
    image: my-image-processor:latest
    restart: always
    environment:
      - BROKER_URL=amqp://admin:${RABBITMQ_PASS}@rabbitmq:5672//
    networks:
      - backend
    depends_on:
      - rabbitmq

networks:
  backend:
    driver: bridge
Bring the stack up with docker-compose up -d, then scale out with docker-compose scale worker=4 to run 4 worker replicas. If a worker crashes, Docker restarts it instantly. It behaves exactly like a serverless function, but it runs on your hardware, under your control.
The Hardware Reality: Why I/O Matters
In 2016, we are seeing a shift from HDD to SSD, and now to NVMe. If you are building your own worker cluster, standard SSDs might bottleneck if you are doing heavy I/O operations (like video transcoding or log aggregation).
When we deploy these worker clusters on CoolVDS, we specifically utilize the NVMe storage tiers. A message queue like RabbitMQ is disk-sensitive. If the disk queue backs up because of slow write speeds, your entire architecture chokes.
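To catch a backlog before it chokes anything, poll the management API the compose file above already exposes on port 15672. A minimal sketch, assuming the default vhost and a hypothetical queue named resize:

# check_queue.py - watch queue depth via the RabbitMQ management API.
# Host, credentials, and the queue name are placeholders.
import requests

RABBIT_HOST = "10.0.0.5"   # your Docker host
QUEUE = "resize"           # hypothetical queue name
AUTH = ("admin", "changeme")

def queue_depth():
    # %2F is the URL-encoded default vhost "/"
    url = "http://%s:15672/api/queues/%%2F/%s" % (RABBIT_HOST, QUEUE)
    data = requests.get(url, auth=AUTH, timeout=5).json()
    return data["messages_ready"], data["messages_unacknowledged"]

if __name__ == "__main__":
    ready, unacked = queue_depth()
    print("ready=%d unacked=%d" % (ready, unacked))

If the ready count keeps climbing while your workers are busy, you need faster disks or more replicas.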
Pro Tip: Always tune your Linux kernel for high-throughput networking if you are handling thousands of events per second. Add this to your /etc/sysctl.conf:
# Increase system file descriptor limit
fs.file-max = 100000
# TCP optimization for high-load
net.core.somaxconn = 65535
net.core.netdev_max_backlog = 5000
net.ipv4.tcp_max_syn_backlog = 5000
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_tw_reuse = 1
Apply these with sysctl -p. Most default VPS installations (CentOS 7 or Ubuntu 16.04) come with conservative defaults that are not optimized for microservices communication.
Data Sovereignty and the "Schrems" Effect
We need to talk about where your data lives. Between the EU Data Protection Directive in force today and the GDPR (General Data Protection Regulation), adopted this spring and enforceable from May 2018, legal compliance is becoming a technical requirement.
If you use a US-based "Serverless" provider, you are often shipping data to processing centers outside of Norway. Datatilsynet (The Norwegian Data Protection Authority) is becoming increasingly strict about data transfers.
Hosting your worker nodes on a Norwegian VPS ensures that:
- Latency is minimal: Traffic stays within the NIX (Norwegian Internet Exchange), ensuring sub-5ms ping times to Norwegian users.
- Compliance: Data processing happens on soil you trust, under laws you understand.
The Verdict: Hybrid is the Future
Do not buy the hype that you must delete all your servers. "Serverless" is an architectural pattern, not just a product you buy from Amazon.
Use public FaaS for glue code—sending an email, triggering a backup. But for your core business logic, the heavy lifting, and the high-performance tasks, build a worker cluster on robust Virtual Dedicated Servers. You get the raw performance of bare metal with the flexibility of virtualization.
Ready to build a low-latency worker cluster? Don't let noisy neighbors on shared hosting slow down your queues. Deploy a high-performance NVMe instance on CoolVDS today and keep your data fast, secure, and in Norway.