Microservices in Production: 3 Patterns That Save You From Distributed Hell
Let's be honest: migrating from a monolith to microservices usually trades one set of problems for a far more complex one. In the last year, I’ve watched too many engineering teams in Oslo dismantle a perfectly functional PHP monolith only to build a "distributed monolith": 50 Java services that all fail simultaneously when the network sneezes.
If you are deploying microservices in 2019 without a strategy for failure, you aren't an architect; you're an arsonist. The network is unreliable. Latency is inevitable. Hardware fails. When you split your application into twenty pieces, you increase your failure surface area by twenty-fold.
I'm going to walk you through the three architectural patterns that actually matter for keeping the lights on, based on real deployment scars. And I'll show you why the underlying metal—specifically your I/O performance—matters more than your choice of container orchestrator.
1. The API Gateway: Stop Exposing Your Underbelly
I still see developers exposing microservice ports directly to the public internet. This is madness. Your frontend or mobile app should never know that service-inventory lives on port 8082 or that service-billing is on a different subnet. If you change your internal topology, you break every client.
You need an API Gateway. It acts as the single entry point, handling SSL termination, rate limiting, and routing. In 2019, NGINX is still the king here, though Kong is gaining ground. NGINX is boring, and boring is good for production.
Here is a battle-tested NGINX configuration snippet used to route traffic while preventing a slow backend from exhausting your worker processes. Note the use of proxy_read_timeout and upstream blocks.
http {
    upstream backend_inventory {
        # Load balancing with least-connections strategy
        least_conn;
        server 10.10.0.15:8080 max_fails=3 fail_timeout=30s;
        server 10.10.0.16:8080 max_fails=3 fail_timeout=30s;
        keepalive 32;
    }

    server {
        listen 443 ssl http2;
        server_name api.coolvds-client.no;

        # SSL optimizations for lower latency
        ssl_session_cache shared:SSL:10m;
        ssl_session_timeout 10m;

        location /inventory/ {
            proxy_pass http://backend_inventory;

            # Required for upstream keepalive to work
            proxy_http_version 1.1;
            proxy_set_header Connection "";

            # Fail fast if the backend is stalling
            proxy_connect_timeout 5s;
            proxy_send_timeout 5s;
            proxy_read_timeout 10s;
        }
    }
}
Pro Tip: Never let a client wait 60 seconds for a timeout. If your Inventory Service can't answer in 10 seconds, it's dead. Cut the connection and save your thread pool.
2. The Circuit Breaker: Failing Gracefully
In a monolithic architecture, a slow database query slows down the page load. In microservices, a slow database query in Service A causes Service B (which calls A) to hang, which causes Service C (which calls B) to hang. This is cascading failure. Before you know it, your entire cluster is unresponsive because of one bad SQL join.
You need a Circuit Breaker. This pattern detects when a downstream service is failing and "opens the circuit," returning an immediate error (or cached data) instead of waiting for a timeout. This gives the failing service time to recover.
While Netflix Hystrix has been the standard for years, it recently went into maintenance mode. For Go-based microservices (which we are seeing more of here in the Nordic tech scene), libraries like gobreaker are essential. Here is how you implement a basic breaker in Go:
package main

import (
    "fmt"
    "io/ioutil"
    "net/http"
    "time"

    "github.com/sony/gobreaker"
)

var cb *gobreaker.CircuitBreaker

func init() {
    var st gobreaker.Settings
    st.Name = "InventoryGET"
    st.MaxRequests = 5             // Requests allowed through in half-open state
    st.Interval = 60 * time.Second // How often closed-state counts are cleared
    st.Timeout = 30 * time.Second  // How long the breaker stays open before probing
    // Trip the breaker after more than 3 consecutive failures
    st.ReadyToTrip = func(counts gobreaker.Counts) bool {
        return counts.ConsecutiveFailures > 3
    }
    cb = gobreaker.NewCircuitBreaker(st)
}

func GetInventory(id string) (string, error) {
    body, err := cb.Execute(func() (interface{}, error) {
        resp, err := http.Get("http://inventory-service/items/" + id)
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        // Count 5xx responses as failures so they trip the breaker too
        if resp.StatusCode >= 500 {
            return nil, fmt.Errorf("server error: %d", resp.StatusCode)
        }
        return ioutil.ReadAll(resp.Body)
    })
    if err != nil {
        return "", err // Circuit is open or the request failed
    }
    return string(body.([]byte)), nil
}
3. Infrastructure Awareness: The "Noisy Neighbor" Problem
This is where most "cloud-native" tutorials fail you. They assume hardware is an abstract concept. It isn't. When you run 50 containers on a single host, the random I/O generated by logging, database writes, and caching layers is massive.
If you are hosting on budget shared platforms, you are fighting for IOPS with the teenager next door running a Minecraft server. When their disk usage spikes, your microservice latency spikes. In a microservice chain, a 50ms delay in one service can compound into a 500ms delay for the user.
This is why at CoolVDS, we strictly enforce KVM virtualization and use NVMe storage arrays. KVM ensures your CPU and RAM are physically reserved, not just "promised." Containers are great for deployment, but for isolation, you want a hypervisor.
The Norway Advantage (Data & Latency)
For those of us operating in Europe, the regulatory landscape changed permanently last year with GDPR. Hosting outside the EEA is becoming a legal minefield. Datatilsynet (The Norwegian Data Protection Authority) is not lenient.
Furthermore, if your user base is in Scandinavia, physics matters. Hosting in a US-East data center adds ~90ms of latency. Hosting in Oslo via CoolVDS cuts that to <5ms. When your frontend makes 10 sequential API calls to render a dashboard, that latency difference is the gap between "snappy" and "unusable."
Deployment Manifest: Kubernetes (v1.13 Style)
Finally, how do we ship this? In early 2019, Kubernetes is the undisputed winner of the orchestration wars. Here is a standard deployment manifest with resource requests and limits set. Never deploy a pod without them, or the Linux OOM killer will hunt you down.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payment-service
  labels:
    app: payment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: payment
  template:
    metadata:
      labels:
        app: payment
    spec:
      containers:
      - name: payment-app
        image: registry.coolvds.com/payment:v1.4.2
        ports:
        - containerPort: 8080
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 10
        resources:
          requests:
            memory: "128Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "500m"
        env:
        - name: DB_HOST
          value: "db-cluster-01.internal"
The Verdict
Microservices aren't a magic bullet; they are a trade-off. You trade code complexity for operational complexity. To win this trade, you need rigorous patterns (Gateways, Circuit Breakers) and robust hardware.
Don't let cheap, spinning-disk VPS hosting be the bottleneck that breaks your architecture. You need consistent I/O and low latency to make distributed systems work.
Ready to build a cluster that doesn't sleep when you do? Deploy a high-performance NVMe KVM instance on CoolVDS today. Your uptime monitoring will thank you.