Microservices in Production: A Survival Guide for Norwegian DevOps
Let’s be honest for a minute. Breaking a monolith into microservices doesn't inherently make your application better; it just swaps compile-time dependencies for runtime failures. If you are reading this in July 2016, you are probably feeling the pressure to containerize everything because Netflix did it. But you are not Netflix. You likely don't have an army of chaos engineers.
I have spent the last six months migrating a high-traffic e-commerce platform in Oslo from a single LAMP stack to a distributed architecture. I've seen latency spikes that defy logic and race conditions that only happen at 3:00 AM. The lesson? Infrastructure is not an abstraction you can ignore.
Here is the reality of deploying microservices right now, and how to survive it without burning out your team.
1. The Network is the Computer (and the Network is Slow)
In a monolith, a function call takes nanoseconds. In a microservice architecture, that same interaction becomes an HTTP request over the wire. If you have five services in a call chain, and each has a 20ms overhead, you have just added 100ms of latency before you even process data. This is why hosting location matters.
If your users are in Norway, but your VPS is in Frankfurt or Amsterdam, you are adding 30-40ms of round-trip time (RTT) unnecessarily. For a microservices mesh, that latency compounds.
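The compounding effect is easy to sanity-check with arithmetic. Here is a back-of-envelope sketch; the per-hop overhead and the extra cross-border RTT are illustrative assumptions, not measurements:

```python
# Back-of-envelope: added latency for a serial microservice call chain.
# Per-hop overhead (serialization + local network) is an assumption.

def chain_latency_ms(hops, per_hop_overhead_ms, extra_rtt_ms=0.0):
    """Total added latency for `hops` serial service calls.

    extra_rtt_ms models distance to a remote data center, paid on
    every hop when the chain crosses it serially.
    """
    return hops * (per_hop_overhead_ms + extra_rtt_ms)

# Five services, 20 ms overhead each, everything in one local DC:
print(chain_latency_ms(5, 20))        # 100 ms, as in the text

# Same chain if each hop also crossed to a DC ~35 ms away:
print(chain_latency_ms(5, 20, 35))    # 275 ms
```

The model is deliberately crude (no parallel fan-out, no retries), but it shows why a few tens of milliseconds of distance turns into hundreds once a call chain gets involved.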
Pro Tip: Keep your compute close to your users. CoolVDS operates out of Norwegian data centers with direct peering to NIX (Norwegian Internet Exchange). We consistently see sub-5ms latency to major ISPs in Oslo. When your services are chatty, physics wins.
2. Service Discovery: Hardcoding IPs is a Death Sentence
Gone are the days of editing /etc/hosts. Containers die and respawn with new IPs. If you are manually configuring IP addresses in 2016, you are doing it wrong. We are currently using Consul for service discovery: unlike etcd, which gives you a key-value store and leaves discovery up to you, Consul ships with health checking and a built-in DNS interface out of the box.
Here is a basic Consul agent configuration we use to register a service on boot:
{
  "service": {
    "name": "order-processing",
    "tags": ["v1", "backend"],
    "port": 8080,
    "check": {
      "script": "curl -f localhost:8080/health",
      "interval": "10s"
    }
  }
}
By running a local Consul agent on every CoolVDS instance, your applications can simply query order-processing.service.consul to find the upstream IP. No load balancer reconfiguration required.
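One caveat on the script check above: Consul judges it by exit code, which is why the `-f` flag on curl matters (without it, curl exits 0 even on an HTTP 500). Recent Consul releases also support a native `http` check type that treats any 2xx response as passing, which removes the shell dependency entirely. A sketch of the same registration with an HTTP check (the `timeout` value is an assumption to tune):

```json
{
  "service": {
    "name": "order-processing",
    "tags": ["v1", "backend"],
    "port": 8080,
    "check": {
      "http": "http://localhost:8080/health",
      "interval": "10s",
      "timeout": "2s"
    }
  }
}
```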
3. The Gateway Pattern: Nginx as the Guard Dog
Do not expose your microservices directly to the public internet. It is a security nightmare. We use Nginx as an API Gateway. It terminates SSL, handles basic auth, and routes requests.
With the release of Nginx 1.9.x, we finally got TCP load balancing, but for most web APIs, the standard HTTP proxy is sufficient. Here is a production-hardened config snippet that handles timeouts—crucial when a backend service hangs:
upstream order_backend {
    server 10.0.0.5:8080;
    server 10.0.0.6:8080;
    keepalive 64;
}

server {
    listen 80;
    server_name api.yoursite.no;

    location /orders {
        proxy_pass http://order_backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header X-Real-IP $remote_addr;

        # Fail fast. If the service is slow, don't hang the gateway.
        proxy_connect_timeout 5s;
        proxy_read_timeout 10s;
        proxy_send_timeout 10s;
    }
}
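Worth adding to the upstream block: Nginx already skips a backend it cannot connect to, but you can tune how aggressively a flapping backend gets ejected with `max_fails` and `fail_timeout`. The numbers below are starting points, not gospel:

```nginx
upstream order_backend {
    # Eject a backend for 30s after 3 failed attempts.
    server 10.0.0.5:8080 max_fails=3 fail_timeout=30s;
    server 10.0.0.6:8080 max_fails=3 fail_timeout=30s;
    keepalive 64;
}
```

Inside the location block, `proxy_next_upstream error timeout;` additionally retries the next backend on timeouts, not just connection errors. Be careful with that on non-idempotent endpoints like order creation.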
4. The I/O Bottleneck Nobody Talks About
Microservices are noisy. They generate massive amounts of logs (Docker stdout/stderr), they constantly read/write to discovery agents, and if you are using a message broker like RabbitMQ or Kafka, your disk I/O is getting hammered.
On standard spinning rust (HDD) or even cheap SATA SSDs, your "iowait" will skyrocket. I've seen Docker daemons crash simply because the disk couldn't keep up with log rotation. This is where hardware selection becomes an architectural decision.
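Part of the fix is simply capping log growth at the source. The json-file logging driver accepts size limits via `--log-opt`, and on recent Docker versions you can set this daemon-wide in /etc/docker/daemon.json instead of per container. The sizes below are assumptions to adjust for your workload:

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```

If your Docker version predates the daemon config file, the same options work per container: `docker run --log-opt max-size=10m --log-opt max-file=3 ...`.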
At CoolVDS, we standardized on NVMe storage for this exact reason. NVMe offers queue depths that SATA cannot touch. When you have 20 containers on a host all fighting for disk access, NVMe is the difference between a smooth operation and a cascading failure.
5. Container Orchestration: Docker Compose vs. The World
While everyone is talking about the new "Swarm Mode" in the upcoming Docker 1.12 (currently in RC), for many small-to-medium teams, Docker Compose (v2 syntax) is still the most pragmatic way to define a stack. It is readable and version controllable.
Don't overcomplicate it. If you run a single powerful VPS, Compose is enough. If you need multi-host, that's when you look at Swarm or Mesos.
version: '2'
services:
  web:
    image: nginx:alpine
    ports:
      - "80:80"
    depends_on:
      - app
  app:
    build: .
    environment:
      - DB_HOST=db
    links:
      - db
  db:
    image: mariadb:10.1
    environment:
      - MYSQL_ROOT_PASSWORD=secret
    volumes:
      - /mnt/nvme/data:/var/lib/mysql
Note the volume mapping: the database data directory lands on the host at /mnt/nvme/data. Never let your database write inside the container's copy-on-write union filesystem (AUFS/OverlayFS); the performance penalty is massive, and the data vanishes with the container.
6. Data Sovereignty and the "Safe Harbor" Void
Since the ECJ invalidated the Safe Harbor agreement last October, the legal landscape for Norwegian data is murky. The Privacy Shield framework is being discussed right now, but nothing is concrete yet. The Datatilsynet (Norwegian Data Protection Authority) is watching closely.
Why risk storing your customer data in US-controlled clouds? By hosting on CoolVDS, your data stays physically in Norway, governed by Norwegian law and the EEA agreement. For CTOs concerned about compliance, this is the safest bet in 2016.
7. Optimizing the Database for Microservices
The "Database per Service" pattern is great for decoupling, but it means you are running more database instances. Default MySQL configurations are designed for dedicated servers, not for being crammed into containers.
You must tune your my.cnf to respect the memory limits of your VPS. If you allocate 4GB RAM to a CoolVDS instance, don't give 3GB to the InnoDB buffer pool if you are also running Java services.
[mysqld]
# Optimize for containerized environments
skip-host-cache
skip-name-resolve
innodb_buffer_pool_size = 512M
innodb_log_file_size = 128M
max_connections = 100
# crucial for data integrity
innodb_flush_log_at_trx_commit = 1
sync_binlog = 1
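Before committing numbers to my.cnf, it helps to budget the box's RAM explicitly. This is a rough sketch; the OS reserve and MySQL overhead figures are illustrative assumptions, not measured values:

```python
# Back-of-envelope RAM budget for a VPS running MySQL alongside
# other services. All figures in MB and purely illustrative.

def innodb_budget_mb(total_ram_mb, other_services_mb,
                     os_reserve_mb=512, mysql_overhead_mb=400):
    """Suggest a ceiling for innodb_buffer_pool_size (MB).

    mysql_overhead_mb covers per-connection buffers and other
    non-buffer-pool memory. Returns 0 if the box is oversubscribed.
    """
    free = total_ram_mb - os_reserve_mb - other_services_mb - mysql_overhead_mb
    return max(free, 0)

# 4 GB instance with a JVM service holding roughly 2 GB:
print(innodb_budget_mb(4096, 2048))   # 1136 -> round down, e.g. 512M-1G
```

The point is not the exact helper but the discipline: the 512M buffer pool in the config above comes from subtracting everything else first, not from a tuning guide written for dedicated hardware.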
Final Thoughts
Microservices solve organizational scaling problems, but they create technical infrastructure problems. You need automation, you need visibility, and arguably most importantly, you need raw performance headroom.
Shared hosting with "noisy neighbors" will kill a microservice architecture via CPU steal and I/O latency. You need dedicated resources.
Ready to build? Don't let slow hardware confirm your worst fears about distributed systems. Spin up a KVM-based, NVMe-powered instance on CoolVDS today and see what your stack can actually do.