Microservices in Production: Surviving the Move from Monolith to Docker on Bare Metal

The monolith is a sinking ship. I recall a deployment last Tuesday for a major e-commerce client in Oslo. Their Java application took 14 minutes to compile and another 8 to start up. A single typo in the cart module brought down the entire user management system. This is dependency hell, and in 2016, we simply cannot afford it anymore.

We are all reading Martin Fowler's articles and watching Netflix's engineering talks. The promise of microservices is seductive: decouple your logic, scale components independently, and deploy ten times a day. But nobody talks about the infrastructure tax. When you turn one application into twenty, you don't just multiply your complexity; you exponentiate your I/O wait times and network chatter.

If you are building distributed systems targeting the Norwegian market, you need to look beyond the code. This guide details the architectural patterns we are using right now—March 2016—to run high-performance microservices, utilizing Docker 1.10, Nginx, and solid infrastructure.

The Gateway Pattern: Stop Exposing Your Services

The biggest mistake I see dev teams make is exposing every microservice directly to the public internet. This is a security nightmare. Your front-end should not know that your inventory-service runs on port 8081 and your billing-service on 8082.

You need an API Gateway. In our stack, Nginx is the undisputed king here. It handles SSL termination, request routing, and basic load balancing, shielding your internal network. Here is a production-ready configuration block we use to route traffic based on URI paths.

upstream inventory_backend {
    server 10.0.0.15:8081;
    server 10.0.0.16:8081;
    keepalive 64;
}

upstream billing_backend {
    server 10.0.0.20:8082;
}

server {
    listen 80;
    server_name api.coolvds-client.no;

    # Security: Don't broadcast nginx version
    server_tokens off;

    location /api/v1/inventory {
        proxy_pass http://inventory_backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        
        # Tuning for latency
        proxy_read_timeout 5s;
    }

    location /api/v1/billing {
        proxy_pass http://billing_backend;
    }
}

By holding keepalive connections open between the gateway and the microservices (this is why the inventory block sets proxy_http_version 1.1 and clears the Connection header, which upstream keepalive requires), we skip a fresh TCP handshake on every proxied request. In a high-traffic environment, this saves milliseconds, and milliseconds equal revenue.
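Before trusting the gateway, validate and smoke-test it. A minimal sketch; the hostname matches the server_name above, and exact timings will vary:

# Check syntax, then reload without dropping live connections
nginx -t && nginx -s reload

# Time a request through the gateway; repeat a few times to see
# upstream connection reuse shave the totals
curl -s -o /dev/null -w "%{http_code} %{time_total}s\n" \
  http://api.coolvds-client.no/api/v1/inventory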

Service Discovery: The End of Hardcoded IPs

In a monolithic setup, your database is at localhost:3306. Simple. In a microservices architecture, services move. Containers die and respawn with new IPs. If you hardcode IP addresses in 2016, you have built a fragile system.

We use Consul by HashiCorp for service discovery. It is lightweight and DNS-friendly. When a service comes online, it registers itself. When it dies, Consul removes it. Other services query Consul to find out where to send requests.
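Consul answers over both DNS (port 8600 by default) and HTTP, so services need no special client library to find each other. A quick sketch of both lookup styles against a local agent:

# SRV lookup returns address and port for every healthy "inventory" instance
dig @127.0.0.1 -p 8600 inventory.service.consul SRV

# Same data via the HTTP API, filtered to passing health checks
curl http://localhost:8500/v1/health/service/inventory?passing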

Here is how you register a service via the HTTP API. Most of our automated scripts run this via curl inside the container entrypoint:

curl -X PUT -d '{ 
  "ID": "inventory-1",
  "Name": "inventory",
  "Address": "10.0.0.15",
  "Port": 8081,
  "Check": {
    "HTTP": "http://10.0.0.15:8081/health",
    "Interval": "10s"
  }
}' http://localhost:8500/v1/agent/service/register

This adds a health check. If the inventory service hangs, Consul stops routing traffic to it within 10 seconds. This is self-healing infrastructure.
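To wire this into the container entrypoint pattern mentioned above, something like the following sketch works. The SERVICE_* variables are hypothetical names you would inject via the environment, and it assumes a Consul agent is reachable at localhost:8500 from inside the container:

#!/bin/sh
# entrypoint.sh: register with Consul, run the app, deregister on shutdown

curl -s -X PUT -d "{
  \"ID\": \"$SERVICE_ID\",
  \"Name\": \"$SERVICE_NAME\",
  \"Address\": \"$SERVICE_ADDR\",
  \"Port\": $SERVICE_PORT,
  \"Check\": {
    \"HTTP\": \"http://$SERVICE_ADDR:$SERVICE_PORT/health\",
    \"Interval\": \"10s\"
  }
}" http://localhost:8500/v1/agent/service/register

# Run the real service in the background so the shell can catch signals
"$@" &
APP_PID=$!

# On docker stop (SIGTERM), deregister before the process dies
trap 'curl -s -X PUT http://localhost:8500/v1/agent/service/deregister/$SERVICE_ID; kill $APP_PID' TERM INT

wait $APP_PID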

The Storage Bottleneck: Why HDD Kills Microservices

This is where the theory meets the metal. Splitting a monolith often means splitting the database. Instead of one large MySQL instance, you might have MongoDB for the catalog, PostgreSQL for billing, and Redis for sessions.

Consider the I/O impact. You have moved from sequential reads on one file to random reads across twenty different data stores. If your VPS provider hosts you on standard spinning hard drives (HDD) or oversold SSDs, your system will crawl. The iowait metric will skyrocket, and your CPU will sit idle waiting for disk operations to finish.
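You can watch this happen in real time. iostat (from the sysstat package on Ubuntu) separates CPU wait from device saturation:

# Install sysstat, then sample extended disk stats every second, five times
apt-get install -y sysstat
iostat -x 1 5

A high %iowait with a low %user means your cores are starving on storage, not computation.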

Pro Tip: Check your disk latency. Run ioping -c 10 . on your current server. If your average latency is above 1ms, your storage is too slow for a microservices database cluster.
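If ioping raises a flag, fio can reproduce the 4K random-read pattern a cluster of containerized databases generates. A sketch with assumed parameters; tune the size and runtime to your disk:

# 4K random reads with direct I/O, bypassing the page cache
fio --name=randread --ioengine=libaio --rw=randread --bs=4k \
    --direct=1 --size=1G --numjobs=4 --runtime=30 \
    --time_based --group_reporting

Compare the reported IOPS and completion latency between your current host and any candidate provider.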

This is why we architect exclusively on NVMe storage at CoolVDS. NVMe (Non-Volatile Memory Express) talks to the CPU directly over the PCIe bus, bypassing the legacy SATA bottleneck. In our benchmarks, an NVMe drive absorbs the chaotic random I/O of multiple containerized databases at consistently sub-millisecond latencies.

Container Orchestration with Docker Compose

While tools like Kubernetes are emerging (v1.2 is looking promising), for many teams, they are overkill right now. Docker Compose is the pragmatic choice for defining multi-container environments today. With the release of Docker Compose 1.6 and the new version 2 file format, we can finally define networks clearly.

Here is a robust docker-compose.yml setup defining a private backend network:

version: '2'

services:
  redis:
    image: redis:3.0
    networks:
      - backend
    command: redis-server --appendonly yes

  web:
    build: .
    ports:
      - "80:5000"
    networks:
      - backend
    depends_on:
      - redis
    environment:
      - REDIS_HOST=redis

networks:
  backend:
    driver: bridge

Notice the networks key. This ensures isolation: Redis is never published on a host port, so it is reachable only by containers attached to the backend network (here, just the web service). This mirrors the security zones we used to build physically in data centers.
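It is worth verifying that isolation rather than assuming it. A quick sketch; Compose 1.6 names containers <project>_<service>_1, "myapp" is a placeholder project name, and this assumes ping exists in the web image:

docker-compose up -d

# Redis resolves by service name from inside the web container...
docker exec myapp_web_1 ping -c 1 redis

# ...but no Redis port appears on the host
docker ps --format "{{.Names}}: {{.Ports}}"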

The "Safe Harbor" Reality Check

We cannot ignore the legal landscape. Last October, the European Court of Justice invalidated the Safe Harbor agreement. If you are a Norwegian business storing customer data, relying on US-based cloud giants is now a massive legal gray area. The Data Protection Authority (Datatilsynet) is clear: you are responsible for where your data lives.

Hosting locally isn't just about latency, though a 2ms ping to NIX (the Norwegian Internet Exchange) is fantastic for user experience. It is about data sovereignty. When you deploy on CoolVDS, your data sits in Oslo, governed by Norwegian law, not in a bucket that might quietly replicate to Virginia.

Performance Tuning for Linux (Ubuntu 14.04/16.04)

Before you launch, you must tune the host kernel. The default Linux settings are not designed for the thousands of concurrent connections a microservice mesh generates. Edit your /etc/sysctl.conf:

# Increase system file descriptor limit
fs.file-max = 100000

# Increase the port range for outgoing connections
net.ipv4.ip_local_port_range = 1024 65000

# Reuse closed sockets faster (TIME_WAIT state)
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 15

Apply these with sysctl -p. Without these, your API gateway will run out of ephemeral ports during load spikes, resulting in 502 Bad Gateway errors even if your CPU load is low.
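Two quick checks confirm the tuning took effect and warn you before exhaustion hits:

# Verify the live kernel values
sysctl net.ipv4.ip_local_port_range net.ipv4.tcp_tw_reuse

# Count sockets parked in TIME_WAIT; tens of thousands means port pressure
ss -tan | grep -c TIME-WAIT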

The Verdict

Microservices are not a magic bullet; they are a trade-off. You trade code complexity for operational complexity. To win this trade, your foundation must be rock solid. You need strict isolation (KVM), blazing fast I/O (NVMe), and low network latency.

Do not let your infrastructure be the reason your architecture fails. If you are ready to build a system that respects both physics and data privacy, we are ready to host it.

Stop fighting I/O wait. Deploy a high-performance NVMe instance on CoolVDS today and see the difference.