
Serverless Patterns in 2016: Why Microservices on NVMe VPS Beat Public Cloud FaaS

The "Serverless" Illusion: Architecting for Control and Speed

Let’s address the elephant in the server room. Everyone in 2016 is talking about AWS Lambda and the "Serverless" revolution. The promise? You upload code, it runs, and you pay only for the 100-millisecond slices it actually executes. It sounds perfect for the lean startup or the agile enterprise. But if you have actually tried to run a high-traffic production workload on pure FaaS (Function as a Service), you know the reality is different.

Cold starts taking 3 seconds. API Gateway timeouts. Impossible debugging sessions. And the moment you need persistent connections—like WebSockets or heavy database transactions—the model falls apart.

As a CTO, I care about two things: predictable performance and data sovereignty. Here in Norway, relying on a black-box function running in us-east-1 is not a strategy; it’s a liability. The pragmatic "Serverless" pattern isn't about abandoning servers; it's about automating them to the point of invisibility while retaining the raw power of bare metal. Here is how we architect high-performance microservices that feel serverless to your devs but perform like iron.

The Architecture: Docker on Bare-Metal KVM

Instead of fragmenting your logic across thousands of functions, the robust 2016 pattern is the Containerized Microservice Swarm. We use Docker (currently at version 1.10) to package applications, but we host them on high-performance Virtual Dedicated Servers (VDS) rather than shared cloud instances.

Why? Noisy Neighbors. In a public cloud FaaS environment, your code fights for CPU cycles. On a CoolVDS instance, we use KVM virtualization. This means the RAM and CPU you pay for are reserved strictly for your kernel. No stealing.
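
You can check this claim on any box you already run. The "steal" column in vmstat reports CPU time the hypervisor handed to other guests on the same host; on a properly dedicated KVM slice it should sit at zero, while on oversold nodes it climbs during peak hours. A quick way to look:

# Sample CPU stats once per second, five times;
# the "st" (steal) column shows cycles taken by other guests on the host.
vmstat 1 5

# The same figure appears as %st in top's CPU summary line.
top -bn1 | grep "Cpu(s)"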

1. The Gateway: Nginx as the Traffic Cop

You don't need an expensive API Gateway service. A well-tuned Nginx instance running on a low-latency node in Oslo can handle tens of thousands of concurrent connections. This gives you full control over caching, headers, and routing.

Here is a production-ready nginx.conf snippet optimized for microservice routing. This configuration assumes you are running Node.js or Go services on local ports, managed by Docker.

worker_processes auto;
worker_rlimit_nofile 100000;

events {
    worker_connections 4096;
    multi_accept on;
    use epoll;
}

http {
    # Optimization for high-throughput JSON APIs
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 30;
    types_hash_max_size 2048;

    # Upstream definition for a user-service microservice
    upstream user_backend {
        least_conn; # Route to the least busy container
        server 127.0.0.1:8081;
        server 127.0.0.1:8082;
        keepalive 64;
    }

    server {
        listen 80;
        server_name api.your-domain.no;

        location /users/ {
            proxy_pass http://user_backend;
            proxy_http_version 1.1;
            proxy_set_header Connection "";   # required for upstream keepalive
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            
            # Buffer tuning for performance
            proxy_buffers 8 16k;
            proxy_buffer_size 32k;
        }
    }
}

Notice the least_conn directive. This ensures that if one of your Docker containers gets bogged down, Nginx intelligently routes traffic to the freer instance. You cannot do this easily with standard Lambda setups.
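
Scaling a service horizontally is just another container plus a graceful reload. A rough sketch, assuming the second upstream port above is backed by a hypothetical user-service:v1 image listening on 8080 inside the container:

# Launch a second instance of the service on the host port Nginx expects
docker run -d --name user-service-2 --restart always \
  -p 8082:8080 user-service:v1

# Validate the config, then reload Nginx without dropping live connections
nginx -t && nginx -s reload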

2. The Storage Bottleneck: NVMe is Non-Negotiable

The biggest lie in hosting is "SSD Storage." Most providers give you SSDs, but they are network-attached (SAN) and throttled. In a microservices architecture, you have many small services hitting the disk for logs, database reads, and temporary files simultaneously.

If you have high I/O wait times, your "serverless" architecture grinds to a halt. At CoolVDS, we have standardized on local NVMe storage. The difference is staggering.

Metric                    | Standard Cloud VPS (SATA SSD) | CoolVDS (NVMe)
Random Read IOPS          | ~5,000                        | ~350,000+
Latency                   | 2-5 ms                        | < 0.1 ms
Database Restore (10 GB)  | ~15 minutes                   | ~2 minutes
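
Don't take any vendor's IOPS table at face value, including ours; measure it. A typical 4k random-read test with fio (assuming fio is installed from your distribution's repositories) looks like this:

fio --name=randread --ioengine=libaio --direct=1 \
    --rw=randread --bs=4k --size=2G \
    --numjobs=4 --iodepth=32 \
    --runtime=60 --time_based --group_reporting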

3. Kernel Tuning for Microservices

When running many containers on a single host, you will hit Linux kernel limits quickly. The defaults shipped with CentOS 7 or Ubuntu 14.04 are conservative, general-purpose settings; they are not tuned for a box routing tens of thousands of concurrent connections.

We see this constantly in support tickets: "My server is rejecting connections but CPU is low!" This is usually the nf_conntrack table filling up. Before you deploy your Docker containers, you must tune the host sysctl.conf.
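
A quick way to confirm that diagnosis before you change anything, assuming the nf_conntrack module is loaded (it will be on any Docker host, since Docker installs NAT rules):

# Compare live tracked connections against the ceiling
cat /proc/sys/net/netfilter/nf_conntrack_count
cat /proc/sys/net/netfilter/nf_conntrack_max

# The kernel logs this when the table overflows and starts dropping packets
dmesg | grep "nf_conntrack: table full"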

Run this on your host node:

# Edit /etc/sysctl.conf

# Increase the system-wide file descriptor limit
fs.file-max = 2097152

# Deepen the accept() and packet backlogs for bursty traffic
net.core.somaxconn = 65535
net.core.netdev_max_backlog = 65536

# Allow TIME_WAIT sockets to be reused for new outbound connections
# (critical for API gateways)
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 30

# Widen the ephemeral port range for outgoing connections
net.ipv4.ip_local_port_range = 1024 65000

# Raise the conntrack ceiling so Docker's NAT'd connections don't fill
# the table (a starting point; size it to your RAM)
net.netfilter.nf_conntrack_max = 262144

# Apply changes without a reboot
sysctl -p

Pro Tip: If you are using Docker 1.9+, ensure you are using the overlay storage driver instead of devicemapper. The performance overhead on devicemapper can eat up to 20% of your I/O on heavy write operations.
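
Checking which driver your daemon is actually using takes one command; switching it is a daemon flag. A sketch for an Ubuntu 14.04 host (paths differ under systemd, overlay needs a 3.18+ kernel, and previously pulled images must be re-pulled after a driver switch):

# See the current storage driver
docker info | grep -i "storage driver"

# On Ubuntu 14.04, set the flag in /etc/default/docker and restart:
#   DOCKER_OPTS="--storage-driver=overlay"
#   service docker restart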

The Norwegian Context: Data Sovereignty

We are in a turbulent time for data privacy. The European Court of Justice invalidated the Safe Harbor agreement last October (2015). If you are a Norwegian business storing customer data on US-controlled servers (like AWS or Azure), you are currently operating in a legal grey zone while we wait for the "Privacy Shield" framework to be finalized.

Furthermore, Datatilsynet (the Norwegian Data Protection Authority) is becoming increasingly strict about where the personal data of Norwegian citizens resides. Latency is another factor: why round-trip your traffic to Frankfurt or London (30-40 ms) when you can reach the NIX (Norwegian Internet Exchange) in Oslo in under 5 ms?

By hosting your microservices on a CoolVDS instance in Oslo, you solve two problems:

  1. Compliance: Your data physically remains in Norway.
  2. Speed: You are network-adjacent to your customers.

Deploying a Simple Node.js Service with Docker

Let's look at how simple this "self-hosted serverless" workflow is. You don't need complex orchestration tools yet. A simple Docker wrapper is often enough for 2016 production loads.

Dockerfile:

# node:4.x ("Argon") is the current Long Term Support line.
# The -onbuild variant copies in package.json, runs npm install,
# and copies the rest of your source at build time.
FROM node:4.3.1-onbuild
EXPOSE 8080
CMD [ "npm", "start" ]
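
The onbuild image expects a package.json and your source in the build context. A minimal, hypothetical service it could wrap might look like this (plain ES5, which runs fine on Node 4):

package.json:

{
  "name": "auth-service",
  "version": "1.0.0",
  "scripts": {
    "start": "node server.js"
  }
}

server.js:

// Minimal HTTP endpoint; listens on the port the Dockerfile EXPOSEs.
var http = require('http');

var server = http.createServer(function (req, res) {
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({ status: 'ok', pid: process.pid }));
});

server.listen(8080, function () {
  console.log('auth-service listening on 8080');
});

Build it from the same directory with docker build -t my-auth-service:v1 . and it is ready for the deployment step below.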

Deployment Command:

docker run -d \
  --name microservice-auth \
  --restart always \
  -p 8081:8080 \
  -e NODE_ENV=production \
  --log-driver=syslog \
  my-auth-service:v1

By using --restart always, we get basic self-healing. If the process crashes, Docker brings it back. It’s simple, effective, and doesn't require a 50-page manual.
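
Rolling out a new version is equally unceremonious. A sketch, assuming you tag the rebuilt image v2; a single container takes a few seconds of downtime, and with two containers behind the least_conn upstream you simply roll them one at a time:

# Build the new image from the project directory
docker build -t my-auth-service:v2 .

# Replace the running container
docker stop microservice-auth && docker rm microservice-auth
docker run -d \
  --name microservice-auth \
  --restart always \
  -p 8081:8080 \
  -e NODE_ENV=production \
  --log-driver=syslog \
  my-auth-service:v2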

Conclusion: Own Your Architecture

FaaS has its place—perhaps for background image resizing or cron jobs. But for your core business logic, the latency and lack of control are too high a price to pay. The most robust "Serverless" pattern today is actually Infrastructure as Code running on high-performance, dedicated VPS.

You get the automation developers love, with the raw NVMe performance and legal compliance that the business demands.

Don't let slow I/O kill your application's responsiveness. Spin up a CoolVDS KVM instance in Oslo today and see what 0.1ms disk latency does for your API.