Microservices in Production: Surviving the Move from Monolith to Distributed Hell

Everyone in the developer community is talking about breaking the monolith. We read the Netflix engineering blog, we look at Martin Fowler’s diagrams, and we think: "Yes, this is the way." But let me tell you something from the trenches of a recent migration for a major Nordic e-commerce platform: Microservices trade code complexity for operational complexity.

If you treat a distributed system like a monolith running on localhost, you will fail. The network is not reliable. Latency is not zero. Bandwidth is not infinite. In 2016, we finally have the tools to manage this—Docker 1.12 has just dropped with Swarm mode baked in, and Kubernetes is maturing—but the underlying infrastructure remains the single biggest point of failure.

The Architecture: API Gateway & Service Discovery

You cannot expose twenty different services directly to the public internet. You need a gatekeeper. For our clients targeting the Norwegian market, we stick to the battle-tested combination of Nginx as an API Gateway and Consul for service discovery.

The Gatekeeper: Nginx Configuration

Using Nginx allows us to terminate SSL, handle HTTP/2 (finally stable in the 1.10 branch), and route traffic to backend containers. Here is a snippet from a production nginx.conf that routes traffic to different upstreams based on URI path.

upstream user_service {
    server 10.0.0.5:8080;
    server 10.0.0.6:8080;
    keepalive 64;
}

upstream order_service {
    server 10.0.0.7:9090;
    keepalive 64;
}

server {
    listen 443 ssl http2;
    server_name api.example.no;

    ssl_certificate /etc/letsencrypt/live/api.example.no/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/api.example.no/privkey.pem;

    location /v1/users {
        proxy_pass http://user_service;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Request-ID $request_id;
    }

    location /v1/orders {
        proxy_pass http://order_service;
        proxy_http_version 1.1;            # required for upstream keepalive
        proxy_set_header Connection "";
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Request-ID $request_id; # Critical for tracing!
    }
}

Pro Tip: Notice the proxy_set_header X-Request-ID $request_id; line? In a microservices environment, debugging is a nightmare. Passing a unique ID through every service call is the only way to trace a request from the gateway all the way down to the database. Do not skip this. (The built-in $request_id variable landed in nginx 1.11.0; on the 1.10 stable branch, generate the ID at the gateway or in the application instead.)
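
The gateway only stamps the ID; every service must read it and forward it on each outgoing call. Here is a minimal sketch in Go of that pattern inside a hypothetical user service (the order-service address, which leans on Consul's DNS interface, and the endpoints are illustrative):

package main

import (
	"fmt"
	"log"
	"math/rand"
	"net/http"
)

// requestID returns the incoming X-Request-ID, minting a fallback
// for requests that somehow bypassed the gateway.
func requestID(r *http.Request) string {
	if id := r.Header.Get("X-Request-ID"); id != "" {
		return id
	}
	return fmt.Sprintf("%016x", rand.Int63())
}

func handleUsers(w http.ResponseWriter, r *http.Request) {
	id := requestID(r)

	req, err := http.NewRequest("GET", "http://order-service.service.consul:9090/v1/orders", nil)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	req.Header.Set("X-Request-ID", id) // propagate the trace ID downstream

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadGateway)
		return
	}
	defer resp.Body.Close()

	// Log with the ID so Kibana can stitch the hops back together.
	log.Printf("request_id=%s upstream_status=%d", id, resp.StatusCode)
	fmt.Fprintln(w, "ok")
}

func main() {
	http.HandleFunc("/v1/users", handleUsers)
	log.Fatal(http.ListenAndServe(":8080", nil))
}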

Service Discovery with Consul

Hardcoding IP addresses in 2016 is a sin. We use HashiCorp's Consul. When a new Docker container spins up, it registers itself. Here is a typical agent configuration for a node in our cluster.

{
  "datacenter": "oslo-dc1",
  "data_dir": "/var/lib/consul",
  "log_level": "INFO",
  "node_name": "worker-01",
  "server": false,
  "bind_addr": "10.0.0.5",
  "join": ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
}
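
The agent config gets the node into the cluster; each service still needs a service definition so Consul knows what to advertise and how to health-check it. A minimal example, dropped into /etc/consul.d/ (the service name and /health endpoint are our conventions, not Consul defaults):

{
  "service": {
    "name": "user-service",
    "tags": ["v1"],
    "port": 8080,
    "check": {
      "http": "http://localhost:8080/health",
      "interval": "10s",
      "timeout": "1s"
    }
  }
}

With the catalog populated, consul-template can regenerate the Nginx upstream blocks automatically whenever instances come and go, instead of the hardcoded IPs shown above.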

The Hidden Killer: Infrastructure Latency

This is where most "cloud" deployments fall apart. In a monolith, a function call is in-memory; it takes nanoseconds. In a microservices architecture, that same call becomes an HTTP request over the network. If your database is in Frankfurt and your app server is in Oslo, you are adding 20-30ms of latency to every single query. Do the math: a page that issues ten sequential queries now spends 200-300ms waiting on the network before any real work happens.

For Norwegian businesses, data sovereignty and latency are paramount. The EU-US Privacy Shield has just replaced Safe Harbor, and the Datatilsynet (Norwegian Data Protection Authority) is watching closely. Hosting your data within Norway isn't just about compliance; it's about physics.

Why "Cheap" VPS Providers Fail Microservices

Many budget providers use OpenVZ virtualization. OpenVZ shares the host kernel with the guests. This is a disaster for Docker. Docker relies on specific kernel features (cgroups, namespaces). If the host kernel is outdated or restricted, your containers will crash randomly.
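
Not sure what your current provider runs? OpenVZ guests expose /proc/vz, so a crude check from inside the VM is a few lines of Go (or a single ls in the shell):

package main

import (
	"fmt"
	"os"
)

func main() {
	// OpenVZ exposes /proc/vz inside guests; on KVM or bare metal it is absent.
	if _, err := os.Stat("/proc/vz"); err == nil {
		fmt.Println("OpenVZ detected: expect Docker breakage")
	} else {
		fmt.Println("no /proc/vz: likely KVM or bare metal")
	}
}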

This is why at CoolVDS, we exclusively use KVM (Kernel-based Virtual Machine). You get your own dedicated kernel. You can install Ubuntu 16.04, load your own kernel modules, and run Docker exactly as it was intended. No "noisy neighbor" effect stealing your CPU cycles when you are trying to serialize JSON.

The Storage Bottleneck: Logging Aggregation

When you split one app into ten services, you split one log file into ten. You need to ship these logs to a central ELK stack (Elasticsearch, Logstash, Kibana). This generates a massive amount of random Write I/O.
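
Getting the logs off each host is the easy part: Docker ships a gelf logging driver that can point straight at a Logstash gelf input. A fragment in Compose syntax (the Logstash address is illustrative):

services:
  app:
    logging:
      driver: gelf
      options:
        gelf-address: "udp://logs.internal:12201"
        tag: "user-service"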

Standard spinning HDDs (SAS/SATA) cannot handle the IOPS required for an ELK stack receiving logs from 50 containers. The queue depth increases, iowait spikes, and your API starts timing out.

Storage Type      Random Write IOPS    Suitability for Microservices
7.2k SATA HDD     ~80                  Unusable
Standard SSD      ~5,000               Acceptable for dev
CoolVDS NVMe      ~20,000+             Production ready

Deployment: The Docker Compose Way

While everyone is excited about Kubernetes 1.3, for many teams it is overkill. Docker Compose (version 2.1 syntax, which added support for sysctls) allows you to define your stack cleanly and is often enough for a single-host, high-performance deployment.

version: '2.1'
services:
  redis:
    image: redis:3.2-alpine
    networks:
      - backend
    sysctls:
      net.core.somaxconn: 1024

  app:
    image: my-registry.com/app:v1.4
    depends_on:
      - redis
    environment:
      - DB_HOST=db.production.local
    networks:
      - backend
      - frontend
    mem_limit: 512m       # hard memory cap (service-level key in the v2 file format)
    cpu_quota: 50000      # 50% of one CPU; cpu_period defaults to 100000

networks:
  frontend:
  backend:

Running this on a CoolVDS instance ensures that the sysctls tuning actually works, because KVM gives you that level of control.
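
Trust, but verify: the quickest check is cat /proc/sys/net/core/somaxconn inside the running container. The same thing as a throwaway Go sketch:

package main

import (
	"fmt"
	"io/ioutil"
	"strings"
)

func main() {
	// The kernel exposes the live value under /proc; if the sysctls
	// block in the Compose file applied, this prints 1024.
	b, err := ioutil.ReadFile("/proc/sys/net/core/somaxconn")
	if err != nil {
		panic(err)
	}
	fmt.Println("net.core.somaxconn =", strings.TrimSpace(string(b)))
}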

Conclusion

Microservices are not a magic bullet. They require discipline, observability, and robust infrastructure. If your provider suffers from network jitter or slow disk I/O, your distributed system will feel slower than the monolith you replaced.

Ensure your data stays close to your users (NIX connectivity), your virtualization offers true isolation (KVM), and your storage can handle the write punishment (NVMe). Don't let iowait kill your project.

Ready to architect for performance? Deploy a KVM-based CoolVDS instance in Oslo today and experience the difference low latency makes.