Microservices in Production: 3 Architecture Patterns That Actually Work (And Why Your Infrastructure Matters)

I spent last weekend debugging a race condition that only appeared when network latency spiked above 50ms. That is the reality of microservices. Everyone wants to be Netflix, but nobody wants to manage the chaos that comes with splitting a perfectly good monolith into fifty fragmentation grenades.

If you are deploying microservices in 2016, you are likely using Docker. You might be experimenting with Kubernetes 1.2 or relying on Docker Swarm. But tools are not architecture. Without the right patterns, you are just building a distributed monolith—harder to debug, slower to deploy, and impossible to scale.

Let's look at the patterns that keep systems stable, and the specific infrastructure configurations required to support them.

1. The API Gateway Pattern (Nginx as the Shield)

Direct client-to-service communication is a disaster. If your mobile app talks directly to your Inventory Service, Pricing Service, and Auth Service, you are leaking implementation details and creating a chatty network nightmare. You need a gatekeeper.

We use Nginx. It is boring, stable, and faster than the JVM-based alternatives often pushed by enterprise vendors. In a microservices setup, Nginx acts as the reverse proxy, handling SSL termination and routing.

Here is a production-ready snippet for nginx.conf that handles upstream load balancing with keepalives. Note the least_conn directive—essential when services have varying response times.

http {
    upstream inventory_service {
        least_conn;
        # Take a node out of rotation after 3 failed attempts; retry after 30s
        server 10.0.0.10:8080 max_fails=3 fail_timeout=30s;
        server 10.0.0.11:8080 max_fails=3 fail_timeout=30s;
        # Keep up to 64 idle connections open to the upstream pool
        keepalive 64;
    }

    server {
        listen 443 ssl http2;
        server_name api.coolvds-client.no;

        ssl_certificate /etc/letsencrypt/live/api.coolvds-client.no/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/api.coolvds-client.no/privkey.pem;

        location /inventory/ {
            proxy_pass http://inventory_service/;
            # Required for upstream keepalive: HTTP/1.1 with an empty
            # Connection header so Nginx reuses pooled connections
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}

Pro Tip: Enable HTTP/2. It was finalized last year (2015) and Nginx 1.9.5+ supports it. It dramatically reduces latency for mobile clients over erratic 3G/4G connections by multiplexing requests.
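
Quick sanity check that your build actually has the module compiled in:

# Should print "with-http_v2_module" if HTTP/2 support is compiled in
nginx -V 2>&1 | grep -o with-http_v2_module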

2. Service Discovery (Consul is King)

Hardcoding IP addresses in 2016 is negligence. When a Docker container dies and respawns on a different host, its IP changes. Your Nginx configuration cannot be static. You need Service Discovery.

We prefer HashiCorp's Consul over etcd for its built-in DNS interface and health checking. It allows your services to register themselves automatically.

Here is how you start a Consul agent on a node. Do not use the -dev flag in production; it runs a non-persistent, single-node server that loses all state on restart.

docker run -d --net=host \
    --name=consul-agent \
    consul:0.6.4 agent -bind=$(hostname -i) -retry-join=10.0.0.5
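
To confirm the agent actually joined, query its member list via the container name used above:

# Lists all known cluster members and their gossip status
docker exec consul-agent consul members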

Once running, you can define a service definition JSON to register your web app:

{
  "service": {
    "name": "web-frontend",
    "tags": ["production", "norway-region"],
    "port": 80,
    "check": {
      "script": "curl localhost:80 >/dev/null 2>&1",
      "interval": "10s"
    }
  }
}
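
Drop that JSON into the agent's configuration directory and reload it. The path below is an assumption; it applies if the agent was started with -config-dir=/etc/consul.d:

# Loads new service definitions without restarting the agent
consul reload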

This setup allows other services to find your frontend simply by querying web-frontend.service.consul.
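
For example, any node running a Consul agent can resolve the service through Consul's DNS interface on port 8600:

# SRV records carry host and port; only instances passing their
# health checks are returned
dig @127.0.0.1 -p 8600 web-frontend.service.consul SRV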

The Hidden Infrastructure Cost: I/O and Latency

This is where most projects fail. Microservices increase network traffic by an order of magnitude. A single user request might trigger ten internal RPC calls. If your underlying VPS has "noisy neighbors" stealing CPU cycles, or if the virtualization layer adds 5ms of latency to every packet, your application becomes sluggish: ten sequential 5ms hops add 50ms to every response before your own code has done any work.

Virtualization: KVM vs. OpenVZ

Many budget hosting providers in Europe still use OpenVZ. It is container-based virtualization: cheap, but it shares the host kernel with every other tenant. This is terrible for Docker. You cannot load your own kernel modules, and resource isolation is weak.

At CoolVDS, we only use KVM (Kernel-based Virtual Machine). It provides true hardware virtualization. Your RAM is yours. Your CPU cycles are yours. This is non-negotiable for running a cluster of microservices.

Feature           OpenVZ (Budget VPS)       KVM (CoolVDS)
----------------  ------------------------  -------------
Docker Support    Limited / unstable        Native / Full
Kernel Access     Shared                    Dedicated
Isolation         Poor (Noisy Neighbors)    High
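
Not sure what your current provider runs? On any systemd-based distro, one command tells you:

# Prints the detected virtualization technology (e.g., kvm, openvz)
systemd-detect-virt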

3. The Circuit Breaker Pattern

What happens when the database slows down? In a monolith, the whole app hangs. In microservices, the failure cascades. If Service A calls Service B, and Service B is slow, Service A's threads get blocked waiting. Eventually, Service A dies too.

You need a Circuit Breaker. If a service fails repeatedly, the breaker trips and fails fast without waiting for a timeout. If you are in the Java ecosystem (Spring Boot), Netflix Hystrix is the standard.
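
Here is a rough sketch of a Hystrix command wrapping a call to the inventory service. The class name, URL, timeouts, and fallback payload are illustrative, not from a real codebase:

// A minimal sketch, not production code: wraps a remote call in a
// Hystrix command so repeated failures trip the breaker.
import com.netflix.hystrix.HystrixCommand;
import com.netflix.hystrix.HystrixCommandGroupKey;

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Scanner;

public class InventoryCommand extends HystrixCommand<String> {

    private final String productId;

    public InventoryCommand(String productId) {
        // Commands sharing a group key share a thread pool by default
        super(HystrixCommandGroupKey.Factory.asKey("InventoryService"));
        this.productId = productId;
    }

    @Override
    protected String run() throws Exception {
        // The real remote call. A thrown exception or a Hystrix timeout
        // counts as a failure toward opening the circuit.
        // URL is illustrative; in practice you would resolve it via Consul.
        URL url = new URL("http://inventory-service.service.consul/items/" + productId);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setConnectTimeout(1000);
        conn.setReadTimeout(2000);
        try (InputStream in = conn.getInputStream();
             Scanner s = new Scanner(in).useDelimiter("\\A")) {
            return s.hasNext() ? s.next() : "";
        }
    }

    @Override
    protected String getFallback() {
        // Served while the circuit is open or when run() fails:
        // a degraded answer returned instantly, no blocked threads.
        return "{\"available\": false}";
    }
}

Calling new InventoryCommand("42").execute() routes the request through the breaker. Once failures cross the threshold, Hystrix opens the circuit and serves the fallback immediately instead of letting threads pile up behind a dead backend.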

However, you can implement a basic version even in Nginx using max_fails and fail_timeout (as seen in the config above). If the backend fails 3 times, Nginx stops sending traffic for 30 seconds. This gives the service time to recover.

Data Sovereignty and Latency in Norway

With the invalidation of the Safe Harbor agreement last year, relying on US-hosted cloud providers has become a legal gray area for Norwegian companies. The upcoming GDPR regulations (adopted this April) will only make this stricter. Datatilsynet isn't playing around.

Hosting locally isn't just about compliance; it's about physics. If your users are in Oslo or Bergen, routing traffic to Frankfurt or Amsterdam adds unnecessary milliseconds. CoolVDS infrastructure is peered directly at NIX (Norwegian Internet Exchange). We consistently see sub-5ms latency to major Norwegian ISPs.
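
You can verify this from any box with ping; substitute a host you know is physically in Norway (www.uio.no is used here purely as an example):

# 20-packet round-trip summary to a Norwegian-hosted endpoint
ping -c 20 -q www.uio.no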

Optimizing for NVMe I/O

Microservices often rely on log shipping (ELK stack) and heavy database reads. Standard spinning HDDs or even cheap SATA SSDs become a bottleneck. We have started rolling out NVMe storage tiers because the queue depth handling is vastly superior. When you have 20 containers writing logs simultaneously, SATA controllers choke.
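
To see whether your current disk chokes under parallel writes, a quick fio run at queue depth 32 is telling. This assumes the fio package is installed; the file name and sizes are arbitrary:

# Random 4k writes at queue depth 32; watch the IOPS and latency numbers
fio --name=qd32 --ioengine=libaio --direct=1 --rw=randwrite \
    --bs=4k --iodepth=32 --size=1G --runtime=60 --time_based \
    --filename=/tmp/fio-test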

Check your disk I/O scheduler inside your VM. For virtualized SSD/NVMe, we recommend noop or deadline over cfq.

# Check current scheduler
cat /sys/block/vda/queue/scheduler
# [noop] deadline cfq

# Change to noop for lower latency on SSDs
echo noop > /sys/block/vda/queue/scheduler
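
The echo above does not survive a reboot. One way to make it stick, assuming a Debian/Ubuntu guest booting via GRUB: add elevator=noop to GRUB_CMDLINE_LINUX in /etc/default/grub, then regenerate the boot config:

# Regenerates /boot/grub/grub.cfg with the new kernel parameter
update-grub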

Conclusion

Microservices solve organizational scaling problems, but they introduce technical complexity. You cannot solve that complexity with weak infrastructure.

You need:

  • Smart Routing: Nginx with keepalives.
  • Discovery: Consul to track dynamic IPs.
  • Raw Power: KVM virtualization to ensure Docker stability and NVMe storage to handle the I/O storm.

Don't let slow I/O or noisy neighbors kill your architecture. Deploy a test instance on CoolVDS today and see the difference a dedicated kernel makes.