Serverless Without the Lock-in: Building High-Performance Microservices on NVMe VPS
It is March 2017, and if I hear the word "Serverless" one more time at a tech meetup in Oslo, I might just snap. The marketing departments of the "Big Three" cloud providers are working overtime to convince us that managing servers is a relic of the past. They want us to believe that decomposing our applications into proprietary functions (FaaS) is the only path forward. They promise infinite scalability and "pay-per-execution" billing.
But they conveniently leave out the fine print: Vendor Lock-in, Cold Start Latency, and Unpredictable Costs.
As a DevOps engineer who has had to explain to a CFO why our monthly cloud bill fluctuated by 300% because of a recursive loop in a Lambda function, I prefer deterministic infrastructure. Furthermore, with the GDPR (General Data Protection Regulation) taking effect in May 2018, sending Norwegian user data to opaque execution environments in us-east-1 or even Frankfurt is becoming a legal headache we don't need. Data sovereignty matters.
This article outlines a pragmatic architecture pattern: Self-Hosted Serverless. We will build a container-based microservices environment that mimics the benefits of FaaS (isolation, modularity) but retains the control, cost-predictability, and raw I/O performance of a dedicated VPS environment.
The Architecture: Docker, Nginx, and NVMe
To replicate the agility of serverless without the cloud tax, we rely on Docker (specifically Docker Engine 1.13, which builds on the Swarm mode introduced in 1.12, though for single-host resilience, standard Compose is often sufficient). The bottleneck in this architecture is almost always disk I/O. When you spin up 50 microservice containers, the simultaneous read/write operations can crush a standard SATA SSD, let alone a spinning HDD.
Pro Tip: Do not attempt this architecture on budget hosting with shared storage. The "Noisy Neighbor" effect will cause your API latency to spike unpredictably. We use CoolVDS KVM instances specifically because they provide local NVMe storage. In our benchmarks, NVMe reduces container startup time by roughly 40% compared to standard SSDs.
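Don't take storage claims on faith, either. Before committing to this design, run a quick fio test on the host; this is a minimal sketch (assumes fio is installed and you have ~1 GB free in /var/tmp), and the 4K random-read pattern roughly mirrors what dozens of containers starting at once do to a disk.

# Install fio (apt-get install fio / yum install fio), then run a 30-second
# 4K random-read test. Compare the reported IOPS and latency across providers.
fio --name=rand-read --filename=/var/tmp/fio-test --size=1G \
    --rw=randread --bs=4k --iodepth=32 --ioengine=libaio \
    --direct=1 --runtime=30 --time_based --group_reporting
rm /var/tmp/fio-test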
1. The Gateway Layer (Nginx)
In a public cloud FaaS, the API Gateway handles routing. On our CoolVDS instance, we use Nginx. It is battle-tested, supports HTTP/2, and handles thousands of concurrent connections with minimal RAM usage.
We need Nginx to dynamically route requests to our running containers. While tools like `nginx-proxy` exist, manually configuring your upstream blocks gives you granular control over timeouts and buffering—crucial for mimicking the "timeout" behavior of serverless functions.
# This file replaces /etc/nginx/nginx.conf entirely, so the events block is mandatory.
events {
    worker_connections 1024;
}

http {
    upstream microservice_auth {
        # In a Docker network, we resolve by container name
        server auth_service:3000;
        keepalive 64;
    }

    upstream microservice_image_process {
        server img_processor:5000;
    }

    server {
        listen 80;
        server_name api.yourdomain.no;

        location /auth/ {
            proxy_pass http://microservice_auth/;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            proxy_set_header X-Real-IP $remote_addr;
            # Critical: Fail fast. Don't let users hang if the container is dead.
            proxy_connect_timeout 2s;
            proxy_read_timeout 5s;
        }

        location /process/ {
            proxy_pass http://microservice_image_process/;
            proxy_http_version 1.1;
        }
    }
}
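Because this file is mounted read-only into the gateway container (see the Compose file further down), you can edit it on the host and apply changes with a zero-downtime reload. Validate first; a syntax error here takes every route offline at once. Something along these lines, once the stack is running:

# Check syntax inside the running gateway container, then reload the workers
docker-compose exec gateway nginx -t
docker-compose exec gateway nginx -s reload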
2. The "Function" Container
Instead of writing a proprietary handler, we write a micro-webserver. This ensures your code is portable. You can move this container from CoolVDS to a laptop or a bare-metal rack without changing a single line of code. Try doing that with an Azure Function.
Here is a lean Node.js 6.10 (LTS) example acting as a "function":
// microservice.js
const http = require('http');

const server = http.createServer((req, res) => {
  if (req.method === 'POST' && req.url === '/process') {
    let body = '';
    req.on('data', chunk => {
      body += chunk.toString();
    });
    req.on('end', () => {
      // Simulate intense logic. Guard the parse so one malformed payload
      // cannot crash the whole container.
      try {
        const result = JSON.parse(body);
        result.processed_at = new Date().toISOString();
        res.writeHead(200, { 'Content-Type': 'application/json' });
        res.end(JSON.stringify(result));
      } catch (err) {
        res.writeHead(400, { 'Content-Type': 'application/json' });
        res.end(JSON.stringify({ error: 'Invalid JSON payload' }));
      }
    });
  } else {
    res.writeHead(404);
    res.end();
  }
});

server.listen(3000, () => {
  console.log('Function container ready on port 3000');
});
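Packaging this as an image Compose can build takes a five-line Dockerfile next to microservice.js. A minimal sketch; I'm assuming the official node:6.10-alpine base to keep the image small, but any Node 6 image works:

# Dockerfile (placed alongside microservice.js)
FROM node:6.10-alpine
WORKDIR /app
COPY microservice.js .
EXPOSE 3000
CMD ["node", "microservice.js"]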
3. Orchestration with Docker Compose
To tie it all together, we use a version 2 `docker-compose.yml` file. This defines our infrastructure as code.
version: '2'

services:
  gateway:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    links:
      - auth_service
      - img_processor
    restart: always

  auth_service:
    build: ./auth
    environment:
      - DB_HOST=10.0.0.5 # Private networking IP
    restart: always
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"

  img_processor:
    image: my-registry/img-proc:v1
    mem_limit: 512m
    # Roughly half a core; the shorthand 'cpus' key only exists in the v2.2+ file format
    cpu_quota: 50000
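Bringing the whole stack up is then a single command; the service names below match the Compose file above.

# Build the local images and start everything detached
docker-compose up -d --build
# Verify container state and watch the gateway logs
docker-compose ps
docker-compose logs -f gateway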
The Latency Advantage in Norway
Let's talk about physics. If your users are in Oslo or Bergen, and your "serverless" function is executing in a datacenter in Ireland or Frankfurt, you are adding 30-50ms of round-trip time (RTT) just on network latency. That is before the function even warms up.
By hosting on a CoolVDS instance physically located in Norway, you are often 2-5ms away from your local users via the NIX (Norwegian Internet Exchange). For real-time applications or high-frequency trading bots, that difference is massive.
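Don't take my word for the numbers; measure them from a machine on your own network. A plain curl timing loop makes the difference visible in seconds. The hostnames and the /auth/health path here are placeholders; point them at whatever endpoints you actually run.

# Compare total request time against a local and a continental endpoint
for host in api.yourdomain.no your-app.eu-west-1.example.com; do
  curl -o /dev/null -s -w "$host: connect=%{time_connect}s total=%{time_total}s\n" \
    "http://$host/auth/health"
done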
Handling Persistence and State
One valid criticism of this approach is state management. In a pure serverless world, you push state to a database service. We do the same here, but we keep it local for speed.
Running a database inside a container is controversial in 2017, but for development or small-scale apps, it works if you mount volumes correctly. However, for production, install MariaDB or PostgreSQL directly on the host OS of your VPS. This removes the container overlay overhead for your most I/O-heavy process.
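If the database lives on the host, the containers need a routable address for it. On a default bridge network the host is reachable at the docker0 gateway (typically 172.17.0.1), or you can simply use the VPS's private IP, as the DB_HOST variable above does. A minimal PostgreSQL sketch, assuming a Debian/Ubuntu host install and the default bridge subnet:

# /etc/postgresql/9.6/main/postgresql.conf -- listen on the Docker bridge too
listen_addresses = 'localhost,172.17.0.1'

# /etc/postgresql/9.6/main/pg_hba.conf -- allow connections from the bridge subnet only
host    all    all    172.17.0.0/16    md5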
Recommended `sysctl.conf` tweaks for high-load container hosts:
# /etc/sysctl.conf
# Increase max open files for heavy concurrency
fs.file-max = 2097152
# Optimize kernel for docker networking
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
# Avoid swap churn (critical for latency)
vm.swappiness = 10
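Two small gotchas when applying these: the bridge-nf keys only exist once the br_netfilter module is loaded (it is usually loaded once Docker starts, but not always on a fresh host), and nothing takes effect until you reload the file.

# Load the bridge netfilter module if needed, then apply the settings
modprobe br_netfilter
sysctl -p /etc/sysctl.conf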
Security: The Hidden Benefit
With CoolVDS, you get a dedicated IP and full root access. You can configure `iptables` or `ufw` to whitelist only your office IP for SSH access. You can implement DDoS protection at the network edge. In a shared serverless environment, you are trusting the cloud provider's hypervisor isolation completely. That usually holds, but cache side-channel attacks between co-tenants are an active topic of academic research and may yet become a practical headache. Isolation at the VM level (KVM) is a stronger security boundary than isolation at the container level.
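A reasonable starting point for the host firewall, assuming your office has a static IP (replace 203.0.113.10). One caveat: Docker publishes container ports by writing iptables rules directly, so published ports bypass ufw; keep the Nginx gateway as the only published service and let everything else stay on the internal Docker network.

# Lock SSH down to the office IP, expose only the web port
ufw default deny incoming
ufw allow from 203.0.113.10 to any port 22 proto tcp
ufw allow 80/tcp   # add 443/tcp once you terminate TLS on the gateway
ufw enable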
Conclusion: Own Your Stack
Serverless has its place for event-driven glue code. But for core business logic, the cost/performance ratio of a well-tuned VPS running Docker remains unbeatable in 2017. You get predictable billing, full data sovereignty compliance for Norwegian clients, and the raw speed of NVMe storage.
Don't let the hype dictate your architecture. If you are ready to build a system that responds in milliseconds rather than waiting for a cold container to boot in a foreign datacenter, it is time to get your hands dirty.
Ready to optimize? Deploy a high-performance NVMe instance on CoolVDS today and see what your code can actually do.