Serverless Architecture Without the Lock-in: Building Event-Driven Microservices on KVM
Let’s cut through the marketing noise. "Serverless" is the buzzword of 2016. The promise of uploading a function to AWS Lambda and forgetting about the underlying OS is seductive. But as any battle-hardened sysadmin knows, there is no such thing as no server. There is just someone else's server, and usually, that server is a black box you can't debug, optimize, or legally control.
I recently consulted for a fintech startup in Oslo trying to move their payment processing to a public cloud FaaS (Function-as-a-Service) provider. The result? Unpredictable latency. We saw "cold starts" (the time it takes for the provider to spin up a container) hitting 2-3 seconds. In the world of high-frequency trading or instant payments, two to three seconds is an eternity. It's the difference between a transaction and a timeout.
Furthermore, with the European Court of Justice invalidating the Safe Harbor agreement last October, relying on US-controlled public clouds is a compliance minefield for Norwegian companies. The Datatilsynet (Data Protection Authority) is watching. Data sovereignty is no longer optional; it is a requirement.
The pragmatic solution isn't to reject the architecture, but to reject the platform lock-in. We can build a "Serverless" style event-driven architecture using Docker containers and message queues, running on our own controlled infrastructure. This gives you the agility of microservices with the raw performance of bare-metal-like KVM instances.
The Architecture: Containers + Queues > FaaS
Instead of relying on an opaque cloud trigger, we build our own worker pools. The pattern is simple: API Gateway (Nginx) → Message Broker (Redis/RabbitMQ) → Worker Containers (Docker).
This setup allows you to scale workers horizontally across multiple CoolVDS instances within seconds, not minutes. Because the workers are always running (or paused in memory), latency is measured in milliseconds, not seconds.
1. The Gateway: Nginx as a Traffic Cop
We use Nginx not just as a web server, but as a reverse proxy that offloads SSL and buffers requests. On a standard VPS, you need to tune the kernel to handle the concurrent connections typical of microservices.
Here is the sysctl.conf configuration we deploy on CoolVDS nodes to handle high-throughput event ingestion:
# /etc/sysctl.conf optimizations for high concurrency
# Increase system file descriptor limit
fs.file-max = 2097152
# Increase the read/write buffer sizes for TCP
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
# Increase the number of incoming connections
net.core.somaxconn = 65535
# Enable TCP Fast Open (requires kernel 3.7+)
net.ipv4.tcp_fastopen = 3
After applying this with sysctl -p, your Linux node is ready to accept thousands of simultaneous API calls without dropping connections. Try doing that kernel tuning on a managed cloud function. You can't.
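For completeness, here is a minimal sketch of what the Nginx side can look like: terminate SSL, then fan requests out to the worker nodes. The upstream addresses, port, hostname, and certificate paths are placeholders; swap in your own private-network values.

# /etc/nginx/conf.d/api.conf - minimal reverse proxy sketch (placeholder addresses)
upstream event_api {
    least_conn;
    server 10.0.0.11:8080;   # worker node 1 (private network)
    server 10.0.0.12:8080;   # worker node 2
    keepalive 64;
}

server {
    listen 443 ssl;
    server_name api.example.no;
    ssl_certificate     /etc/nginx/ssl/api.crt;
    ssl_certificate_key /etc/nginx/ssl/api.key;

    location / {
        proxy_pass http://event_api;
        proxy_http_version 1.1;
        proxy_set_header Connection "";          # re-use upstream connections
        proxy_set_header X-Real-IP $remote_addr;
    }
}

The keepalive directive is the quiet hero here: re-using upstream connections keeps per-event overhead low once the kernel tuning above lets you hold thousands of them open.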
2. The Engine: Docker 1.9 Networking
With the release of Docker 1.9 a few months ago, we finally got native overlay networking. This is a game-changer. It allows containers on different VPS nodes to talk to each other as if they were on the same LAN.
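One detail the release notes gloss over: multi-host overlay networking in 1.9 needs an external key-value store (Consul, etcd or ZooKeeper) behind it. A rough sketch of the setup, with placeholder addresses and image names:

# On every node: start the Docker daemon pointed at a shared key-value store
# (Consul at 10.0.0.5 is a placeholder; etcd or ZooKeeper also work)
docker daemon \
    --cluster-store=consul://10.0.0.5:8500 \
    --cluster-advertise=eth1:2376

# On any one node: create the overlay network (it becomes visible cluster-wide)
docker network create -d overlay --subnet=10.10.0.0/24 events

# Start workers on any node, attached to the same overlay
docker run -d --net=events my-registry/worker:latest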
However, running Docker places massive stress on storage I/O. Every time a container spins up, it hits the disk. This is where "noisy neighbors" on cheap shared hosting kill you. If you are on a standard HDD or a shared SSD with limits, your build pipeline will crawl.
Pro Tip: Always check the disk scheduler on your host. On CoolVDS NVMe instances, we set the scheduler to `noop` or `deadline` because the NVMe controller handles the sorting better than the OS. Run cat /sys/block/vda/queue/scheduler to check yours.
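If that command shows [cfq] in brackets, you can switch it on the fly. A quick sketch, assuming the virtio disk really is vda on your instance:

echo noop > /sys/block/vda/queue/scheduler    # takes effect immediately
# To make it stick across reboots, add elevator=noop to the kernel line in GRUB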
3. The Glue: Asynchronous Task Queues
Real "Serverless" is about events. We use Python with RQ (Redis Queue) or Celery, or Node.js with Bull, backed by Redis. Here is a stripped-down example of a Python RQ worker that mimics a Lambda function but runs on your terms:
# worker.py
import os

from redis import Redis
from rq import Queue, Connection, Worker

# Connect to the Redis instance running on the private network
redis_conn = Redis(host='10.0.0.5', port=6379)

def process_image(file_path):
    # Simulate a CPU-intensive task
    print("Processing {} on Host {}".format(file_path, os.uname()[1]))
    return True

if __name__ == '__main__':
    with Connection(redis_conn):
        q = Queue()
        w = Worker(q)
        w.work()
You deploy this worker inside a Docker container. Need more processing power? Spin up 50 more containers. With CoolVDS's KVM virtualization, you are getting dedicated CPU cycles, meaning your workers actually process at 100% speed, unlike the "burstable" nonsense you get elsewhere.
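The other half is the producer. Here is a minimal sketch of how an API endpoint might push work onto that queue; the Redis address matches the worker above, and the file path is just an example:

# producer.py
from redis import Redis
from rq import Queue

from worker import process_image  # the task function defined in worker.py

redis_conn = Redis(host='10.0.0.5', port=6379)
q = Queue(connection=redis_conn)

# enqueue() returns immediately; the first idle worker picks the job up
job = q.enqueue(process_image, '/srv/uploads/invoice-2041.png')
print("Queued job {}".format(job.id))

That enqueue call is your "invoke": the HTTP handler returns in a millisecond or two while the worker fleet chews through the backlog at its own pace.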
The Latency Truth: Oslo vs. Frankfurt
Physics is the one constraint we cannot architect around. If your users are in Norway, and your FaaS provider is in Frankfurt or Dublin (common for AWS/Azure), you are adding 30-50ms of round-trip latency (RTT) to every request. For a complex app with chained microservices, that latency compounds.
By hosting your container cluster in Oslo on CoolVDS, you drop that RTT to under 5ms via NIX (Norwegian Internet Exchange). For local businesses, this creates a snappiness that international giants simply cannot match.
Security: The "Bad Neighbor" Effect
Public clouds often use container-based isolation for their FaaS offerings. In 2016, container breakouts are rare but theoretically possible. If you are processing sensitive data—medical records, financial transactions—you want hardware virtualization.
This is why we strictly use KVM (Kernel-based Virtual Machine). Unlike OpenVZ, KVM provides a hardware-level abstraction. Your kernel is your kernel. Even if a neighbor on the rack gets DDoS'd or compromised, your memory space remains isolated at the hypervisor level. In the post-Snowden, post-Safe Harbor era, this level of isolation is your best defense against data leakage.
Implementation Strategy
Don't jump blindly into the newest hype. If you want the developer experience of Serverless but the performance of bare metal, follow this stack:
- Infrastructure: Deploy 3x CoolVDS NVMe instances (1 Load Balancer, 2 Worker Nodes).
- Orchestration: Use Ansible 2.0 (released this week!) to configure the nodes.
- Containerization: Docker Compose for service definition (a minimal example follows this list).
- Data: PostgreSQL 9.4 for relational data, Redis for the event bus.
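As a starting point, here is a bare-bones Compose file in the v1 format that ships alongside Docker 1.9. Image tags, build paths, and the config mount are placeholders for your own setup:

# docker-compose.yml (v1 format, single node; run one per worker node)
redis:
  image: redis:3.0
  ports:
    - "6379:6379"

worker:
  build: ./worker            # directory containing worker.py and its Dockerfile
  command: python worker.py
  links:
    - redis

gateway:
  image: nginx:1.9
  ports:
    - "443:443"
  volumes:
    - ./nginx.conf:/etc/nginx/conf.d/api.conf

In this layout the worker would point at the redis hostname the link provides instead of a hard-coded private IP; on a multi-node cluster you would attach the services to the overlay network from section 2 instead.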
This gives you a platform that costs a fraction of the "pay-per-execution" model once you hit scale, and it keeps your data safely within Norwegian borders.
Ready to build? Don't let slow I/O bottleneck your workers. Deploy a high-performance KVM instance on CoolVDS today and see what single-digit-millisecond local latency does for your application.