Microservices in Production: A Survival Guide for Norwegian DevOps
Let’s be honest: the monolith is comfortable. It’s a single git repo, a single deployment pipeline, and if it breaks, you know exactly where to look. But in 2015, "comfortable" doesn't scale. We are seeing a massive shift across the Nordic tech scene—from startup hubs in Oslo to enterprise teams in Trondheim—moving toward decoupled architectures.
Everyone is talking about Docker right now (version 1.7 just dropped last month), but few are talking about the operational headache that follows docker run. When you explode one application into twelve different services, you aren't just writing code; you are architecting a distributed network. And networks are fragile.
I've spent the last six months migrating a high-traffic e-commerce platform from a Magento monolith to a Go-based microservices architecture. Here are the battle-tested patterns we used to keep the system stable, the latency low, and the data compliant with Norwegian law.
1. The API Gateway Pattern (Nginx is King)
The rookie mistake is letting your frontend client talk directly to your microservices. Don't do it. It exposes your internal topology and creates a CORS nightmare. You need a guard at the gate.
We use Nginx as a reverse proxy/API Gateway. It handles SSL termination, request logging, and routing. This offloads heavy lifting from your application containers. In your nginx.conf, standardizing your upstream blocks is critical for performance:
upstream inventory_service {
    server 10.0.0.5:8080;
    server 10.0.0.6:8080;
    # Keep up to 64 idle connections open to this backend pool
    keepalive 64;
}

server {
    location /api/inventory {
        proxy_pass http://inventory_service;
        # HTTP/1.1 with a cleared Connection header is required
        # for upstream keepalive to actually take effect
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
Pro Tip: Notice the keepalive 64; directive? Without it, Nginx opens a new TCP connection to your backend for every single request. In a microservices environment, that handshake overhead will kill your latency. Note that the directive only works in combination with proxy_http_version 1.1 and the cleared Connection header shown above. Keep those connections open.
2. Service Discovery (Consul vs. Etcd)
In the old days, we hardcoded IP addresses in /etc/hosts. In a containerized world where services spin up and die in seconds, that’s impossible. You need Service Discovery.
We evaluated both Etcd and Consul. While Etcd is great (especially with the CoreOS hype), Consul won us over because of its built-in DNS interface. It allows our services to communicate simply by querying inventory.service.consul rather than managing complex config files.
However, running a Consul cluster requires stable, low-latency networking. This is where your infrastructure choice matters. If you are running this on noisy public clouds with "burstable" CPU credits, your consensus protocol (Raft) will time out, and your cluster will lose leadership. We deploy our Consul nodes on CoolVDS KVM instances because the dedicated resources ensure the CPU consistency required for maintaining cluster state.
3. The "Data Sovereignty" Pattern
Here in Norway, we don't just care about uptime; we care about Datatilsynet (The Data Inspectorate). With the scrutiny on the US Safe Harbor agreement intensifying post-Snowden, storing customer data on US-controlled servers is a legal minefield.
The pattern here is Compute-Storage Separation. You might run your stateless application containers on elastic nodes, but your state (databases, user logs) must sit on secure, compliant storage within national borders.
For our setup, we keep the MySQL databases on high-performance NVMe storage located physically in Oslo. This guarantees two things:
- Compliance: Data never leaves Norway, satisfying strict interpretation of the Personal Data Act (Personopplysningsloven).
- Speed: Local peering via NIX (Norwegian Internet Exchange) means our latency to Norwegian users is under 5ms.
The Database Bottleneck
A common anti-pattern in 2015 is trying to run the database inside a Docker container. Don't do this for production. Docker volumes are still maturing, and the I/O overhead can be unpredictable.
Stick to a dedicated VPS for your database. Tune your my.cnf to leverage the NVMe speeds available on modern hosting platforms:
[mysqld]
# Size the buffer pool to roughly 75% of RAM on a dedicated DB host
innodb_buffer_pool_size = 4G
# Bypass the OS page cache; the data is already cached in the buffer pool
innodb_flush_method = O_DIRECT
# Raise the background I/O budget to match NVMe throughput
innodb_io_capacity = 2000
Conclusion: Infrastructure is the Foundation
Microservices solve the spaghetti code problem, but they introduce the "network spaghetti" problem. To succeed, you need to be obsessive about your underlying infrastructure. You need raw CPU performance for serialization/deserialization, and you need stable I/O for your service registry.
Whether you are using Ansible, Chef, or experimenting with the new Kubernetes project (Google just released v1.0), your code is only as good as the server it runs on. For projects that require low latency in the Nordics and strict data compliance, we rely on CoolVDS to provide the metal that makes microservices viable.
Ready to break the monolith? Spin up a CoolVDS instance with SSD caching today and test your Docker cluster with 1ms latency to Oslo.