Serverless Architecture on Bare Metal: Surviving the Hype and Keeping Data in Norway
It is March 2016, and you cannot walk into a tech meetup in Oslo without hearing someone preach about the "Serverless revolution." AWS Lambda is the flavor of the month. The promise is seductive: upload your code, forget about the OS, and pay only for the milliseconds your function runs.
As a systems architect who has spent the last decade debugging kernel panics and optimizing I/O schedulers, I am skeptical. Not because the concept is bad (event-driven architecture is brilliant), but because the implementations often ignore physics and geography. If your users are in Trondheim or Bergen, and your "serverless" function is warming up in a data center in Dublin (or worse, Virginia), you are introducing latency that no amount of code optimization can fix.
Furthermore, with the recent invalidation of the Safe Harbor agreement and the looming strictness from Datatilsynet regarding personal data storage, shipping your customer database to a US-owned public cloud is becoming a legal minefield. We need the agility of serverless patterns without the latency or the compliance headaches.
This article explores how to implement serverless architecture patterns, specifically Microservices and Event Sourcing, on your own controlled infrastructure using Docker and Nginx. This is how we build high-performance systems on CoolVDS that beat the public cloud on both speed and cost.
The "Cold Start" Reality Check
The dirty secret of Function-as-a-Service (FaaS) right now is the cold start. When a function hasn't been called in a while, the provider spins down the container. The next request has to wait for provisioning, runtime initialization, and code loading. In our benchmarks, a Java-based Lambda function can take 3 to 5 seconds to respond to a cold request. For an e-commerce checkout flow, that is an eternity.
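You can measure this yourself with a few lines of Node. The sketch below is minimal and the endpoint URL is a placeholder for whatever function you have deployed: it times the same request twice, so the first call usually pays the cold-start penalty and the second hits a warm container.

// Cold vs. warm latency probe. A minimal sketch: the URL below is a
// placeholder, point it at your own deployed function.
const https = require('https');

function timeRequest(url, label) {
  const start = process.hrtime();
  https.get(url, (res) => {
    res.resume(); // drain the body; we only care about timing
    res.on('end', () => {
      var diff = process.hrtime(start);
      console.log(label + ': ' + (diff[0] * 1000 + diff[1] / 1e6).toFixed(1) + ' ms');
    });
  });
}

timeRequest('https://your-function.example.com/ping', 'first (likely cold)');
setTimeout(() => {
  timeRequest('https://your-function.example.com/ping', 'second (warm)');
}, 2000);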
The solution? Own the container lifecycle. By running persistent microservices on a high-performance VPS, you eliminate cold starts entirely while retaining the modular benefits of serverless architecture.
Architecture Pattern: The Private Microservice Swarm
Instead of relying on a proprietary cloud vendor to route your events, we use a battle-tested combination: Nginx as the API Gateway and Docker (specifically Docker Swarm or Compose) to manage the service containers.
1. The Gateway (Nginx)
In this pattern, Nginx acts as the router. It terminates SSL (critical for HTTP/2 performance) and routes requests to specific local ports where your microservices are listening. Unlike a black-box cloud gateway, you have full control over timeouts, buffering, and caching.
Here is a production-ready snippet for nginx.conf tuned for high concurrency. Note the keepalive settings to the upstream, which significantly reduces TCP handshake overhead between your gateway and your services.
upstream auth_service {
    server 127.0.0.1:3001;
    keepalive 64;    # idle connections kept open to the backend
}

upstream inventory_service {
    server 127.0.0.1:3002;
    keepalive 64;
}

server {
    listen 80;    # add "listen 443 ssl http2;" for TLS termination in production
    server_name api.coolvds-client.no;

    # Buffer size tuning for JSON payloads
    client_body_buffer_size 10K;
    client_max_body_size 8m;

    location /auth {
        proxy_pass http://auth_service;
        proxy_http_version 1.1;
        proxy_set_header Connection "";    # required for upstream keepalive
        proxy_set_header X-Real-IP $remote_addr;
    }

    location /inventory {
        proxy_pass http://inventory_service;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
2. The Event Loop (Node.js)
For the services themselves, Node.js (currently v5.x) is the pragmatic choice in 2016 for I/O-bound tasks. It is lightweight and starts instantly compared to the JVM. Here is a stripped-down "function" running as a microservice. It does one thing: checks inventory.
const http = require('http');
const redis = require('redis');

// Under docker-compose the Redis host is the service name, not localhost
const client = redis.createClient(6379, process.env.REDIS_HOST || '127.0.0.1');

const server = http.createServer((req, res) => {
  if (req.url === '/check' && req.method === 'GET') {
    // Look up current stock in Redis
    client.get('item_42', (err, reply) => {
      if (err) {
        res.writeHead(500);
        return res.end();
      }
      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({
        id: 42,
        stock: reply,
        node: process.env.HOSTNAME
      }));
    });
  } else {
    res.writeHead(404);
    res.end();
  }
});

server.listen(3002);
console.log('Inventory Service running on port 3002');
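Before wiring this into Nginx, it is worth a quick local smoke test. A throwaway script like this (assuming the service above is already listening on port 3002) confirms the route and the JSON shape:

// Local smoke test for the inventory service
const http = require('http');

http.get('http://127.0.0.1:3002/check', (res) => {
  var body = '';
  res.on('data', (chunk) => { body += chunk; });
  res.on('end', () => {
    console.log('status:', res.statusCode);
    console.log('body:', body);
  });
}).on('error', (err) => {
  console.error('service unreachable: ' + err.message);
});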
The Hardware Bottleneck: Why NVMe Matters
When you break a monolith into twenty microservices, you multiply your I/O operations. Every service logs data, reads configs, and hits the database. On standard SATA SSDs (or heaven forbid, spinning rust), this "I/O blender" effect kills performance. Latency spikes because the disk queue depth explodes.
Pro Tip: Check your disk latency with ioping (for example, ioping -c 10 /var/lib/docker). If you are seeing average request latencies above 1ms on your VPS, your microservices architecture will feel sluggish regardless of how optimized your code is.
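No ioping on the box? You can get a rough approximation from Node itself. This is a crude sketch, not a real benchmark: it times a small write followed by an fsync, which is approximately the price every service pays to flush a log line to disk.

// Crude disk write latency probe (rough stand-in for ioping)
const fs = require('fs');

const fd = fs.openSync('/tmp/io-probe.tmp', 'w');
const buf = new Buffer(4096); // one 4 KiB block
buf.fill(0);

var totalMs = 0;
var runs = 100;
for (var i = 0; i < runs; i++) {
  var start = process.hrtime();
  fs.writeSync(fd, buf, 0, buf.length, 0); // write at offset 0
  fs.fsyncSync(fd);                        // force it down to the device
  var diff = process.hrtime(start);
  totalMs += diff[0] * 1000 + diff[1] / 1e6;
}
fs.closeSync(fd);
fs.unlinkSync('/tmp/io-probe.tmp');
console.log('avg write+fsync: ' + (totalMs / runs).toFixed(2) + ' ms');

On NVMe this should typically land well under a millisecond; on a congested SATA array it is often several.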
This is where the infrastructure choice becomes decisive. We built CoolVDS on NVMe storage because it talks to the CPU directly over the PCIe bus. In high-load scenarios (like a flash sale on a Magento store split into microservices), NVMe sustains 40,000+ IOPS where standard SSDs choke at around 5,000. For a "serverless" style architecture where many small processes are constantly reading and writing, NVMe is not a luxury; it is a requirement.
Docker: The Engine of Independence
Docker has matured rapidly. With Engine 1.10 and Compose 1.6 released last month, we now have proper container networking and the version 2 Compose file format. We can use docker-compose to orchestrate this environment on a single powerful VDS, keeping inter-service latency effectively at zero (localhost).
Here is how we wire it up in docker-compose.yml:
version: '2'

services:
  redis:
    image: redis:3.0
    restart: always

  inventory:
    build: ./inventory-service
    ports:
      - "3002:3002"
    links:
      - redis
    environment:
      - NODE_ENV=production
      - REDIS_HOST=redis

  auth:
    build: ./auth-service
    ports:
      - "3001:3001"
    environment:
      - NODE_ENV=production

With a simple docker-compose up -d, you have deployed a microservices architecture. No vendor lock-in. No "per request" billing surprises. You own the stack.
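And since the promise at the top of this article included Event Sourcing, the Redis container above is all you need to sketch the pattern: never overwrite state, append immutable events to a log, and derive current state by replaying them. The key name and event types below are illustrative, not a production schema.

// Minimal event-sourcing sketch on the Redis container above
const redis = require('redis');
const client = redis.createClient(6379, process.env.REDIS_HOST || '127.0.0.1');

// Write path: append an immutable event to the log (never mutate in place)
function recordEvent(stream, event, cb) {
  client.rpush(stream, JSON.stringify(event), cb);
}

// Read path: rebuild current stock by replaying the whole stream
function replayStock(stream, cb) {
  client.lrange(stream, 0, -1, function (err, entries) {
    if (err) return cb(err);
    var stock = 0;
    entries.forEach(function (raw) {
      var e = JSON.parse(raw);
      if (e.type === 'stock_added') stock += e.qty;
      if (e.type === 'item_sold') stock -= e.qty;
    });
    cb(null, stock);
  });
}

recordEvent('events:item_42', { type: 'stock_added', qty: 10 }, function () {
  recordEvent('events:item_42', { type: 'item_sold', qty: 3 }, function () {
    replayStock('events:item_42', function (err, stock) {
      console.log('replayed stock:', stock); // 7
      client.quit();
    });
  });
});

As a bonus, the append-only log doubles as an audit trail, which is exactly the kind of paper trail Datatilsynet likes to see.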
Data Sovereignty and The Norwegian Advantage
Let's talk about the elephant in the room: Data Location. If you use AWS Lambda, your data is processed in Ireland or Frankfurt at best. With the current legal uncertainty in Europe following the Schrems ruling, Norwegian businesses are under pressure to keep sensitive user data within national borders.
By hosting your own Dockerized "serverless" setup on a provider like CoolVDS, you ensure:
- Legal Compliance: Data remains physically in Norway/Europe, simplifying GDPR readiness.
- Latency: Round-trip time (RTT) from Oslo to a local data center is ~2ms. To Frankfurt, it is ~25-30ms. That difference compounds with every API call: a checkout flow that chains five sequential service calls pays well over 100ms extra from Frankfurt before a single line of your code runs.
- Cost Predictability: You pay a flat rate for the VDS resources (CPU/RAM). If your function gets hit 10 million times by a DDoS or a viral post, your bill doesn't skyrocket; your server just works harder.
Conclusion
Serverless is a design pattern, not just a product sold by Amazon. It represents the decoupling of logic from the monolith. You can, and often should, achieve this decoupling on your own terms.
If you are building the next generation of Norwegian tech, don't sacrifice performance for buzzwords. Build on infrastructure that respects your need for speed and sovereignty.
Ready to build? Deploy a Docker-optimized, NVMe-powered instance on CoolVDS today. We spin up in under 55 seconds, giving you the raw power to run your own cloud, your way.