Breaking the Monolith Without Breaking Production: Proven Microservices Patterns for 2019
I still wake up in a cold sweat thinking about a deployment I managed three years ago for a major Nordic retailer. We were running a 4GB WAR file on a Tomcat cluster. One bad SQL query in the inventory module locked the database tables, dragging the checkout, user login, and even the static homepage down with it. The site was down for four hours on Black Friday.
That is the reality of the monolith. It is a single point of failure masquerading as stability.
By now, everyone in Oslo tech circles is talking about microservices. You have read the Netflix papers. You have seen the Martin Fowler blog posts. But when you actually try to split that PHP or Java monolith into twenty discrete services, you realize the network is reliable until it isn't. Latency kills. Data consistency becomes a nightmare.
In this post, I am stripping away the marketing fluff. We are looking at three architectural patterns you need to implement today to run microservices safely, specifically focusing on the stack available to us in mid-2019: Kubernetes 1.14, Nginx, and Consul.
1. The API Gateway: Your First Line of Defense
Never expose your internal microservices directly to the public internet. Just don't. It is a security suicide mission and a nightmare for SSL termination.
Your clients (mobile apps, SPAs) should talk to one entry point. In 2019, while Envoy is gaining traction, Nginx remains the undisputed king of the edge. It is battle-hardened, and if you configure it correctly, it adds negligible overhead.
Here is a production-ready configuration pattern we use to route traffic to different backend services while stripping internal headers. This prevents external users from seeing your topology.
http {
    upstream user_service {
        server 10.0.0.5:8080;
        server 10.0.0.6:8080;
        keepalive 64;
    }

    upstream order_service {
        server 10.0.0.7:9090;
        server 10.0.0.8:9090;
        keepalive 64;
    }

    server {
        listen 443 ssl http2;
        server_name api.norway-shop.com;

        # Certificate paths are placeholders; point these at your own cert and key
        ssl_certificate     /etc/nginx/ssl/api.norway-shop.com.crt;
        ssl_certificate_key /etc/nginx/ssl/api.norway-shop.com.key;

        # SSL optimizations for lower latency
        ssl_session_cache shared:SSL:10m;
        ssl_session_timeout 10m;

        location /users/ {
            proxy_pass http://user_service;
            proxy_set_header X-Real-IP $remote_addr;

            # Upstream keepalive only works with HTTP/1.1 and a cleared Connection header
            proxy_http_version 1.1;
            proxy_set_header Connection "";

            # Hide internal headers
            proxy_hide_header X-Powered-By;
        }

        location /orders/ {
            proxy_pass http://order_service;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            proxy_hide_header X-Powered-By;
        }
    }
}
Pro Tip: Notice the keepalive 64; directive in the upstream block? Without it (together with proxy_http_version 1.1 and an empty Connection header in the location block), Nginx opens and closes a new TCP connection for every request to your backend. That TCP handshake overhead adds milliseconds that stack up fast.
2. Service Discovery: No Hardcoded IPs
In a CoolVDS environment or any dynamic cloud, servers die. They get rebooted. New instances spin up to handle load. If you hardcode 10.0.0.5 in your config, you are building a fragile system.
If you are using Kubernetes (we are seeing huge adoption of K8s 1.13/1.14 in Oslo recently), this is handled by Kube-DNS (or CoreDNS). But if you are running on raw VPS instances for performance, you need Consul.
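On the Kubernetes side, consuming discovery is almost invisible: you call the service by its cluster DNS name instead of an IP. A minimal sketch, assuming the requests library and an order-service running in the default namespace (the service name, namespace, and port are placeholders):
import requests

# Kube-DNS/CoreDNS resolves <service>.<namespace>.svc.cluster.local to a stable
# ClusterIP, so the caller never cares which pod or node actually answers.
ORDER_SERVICE_URL = "http://order-service.default.svc.cluster.local:9090"

def get_order(order_id):
    # The hostname stays valid even as pods are rescheduled or scaled out.
    response = requests.get(f"{ORDER_SERVICE_URL}/orders/{order_id}", timeout=2)
    response.raise_for_status()
    return response.json()
On plain VPS instances with Consul, you get the same effect, but the service has to register itself first.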
Here is how a service registers itself via a simple JSON payload in Consul:
{
  "ID": "order-service-1",
  "Name": "order-service",
  "Tags": [
    "primary",
    "v1"
  ],
  "Address": "10.0.0.7",
  "Port": 9090,
  "Check": {
    "DeregisterCriticalServiceAfter": "90m",
    "HTTP": "http://10.0.0.7:9090/health",
    "Interval": "10s"
  }
}
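To actually register, the service (or its deploy script) PUTs that payload to the local Consul agent's HTTP API. A minimal sketch, assuming the requests library is installed and the agent is listening on its default port 8500 on the same host:
import requests

SERVICE_DEFINITION = {
    "ID": "order-service-1",
    "Name": "order-service",
    "Tags": ["primary", "v1"],
    "Address": "10.0.0.7",
    "Port": 9090,
    "Check": {
        "DeregisterCriticalServiceAfter": "90m",
        "HTTP": "http://10.0.0.7:9090/health",
        "Interval": "10s",
    },
}

def register_with_consul():
    # /v1/agent/service/register registers against the local agent,
    # which then syncs the service into the cluster catalog.
    response = requests.put(
        "http://127.0.0.1:8500/v1/agent/service/register",
        json=SERVICE_DEFINITION,
        timeout=2,
    )
    response.raise_for_status()

if __name__ == "__main__":
    register_with_consul()
Consul's health check then hits /health every 10 seconds; if the service stops answering, it drops out of discovery automatically.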
The Latency Factor: Service discovery introduces a network hop. If your VPS is hosted in Germany but your users are in Bergen, that round-trip adds up. This is why local peering matters. At CoolVDS, our internal network latency is optimized to be sub-millisecond between instances in the same datacenter.
3. The Circuit Breaker Pattern
This is the one most developers skip, and it is the reason one failed service brings down the whole platform. If the Order Service calls the Inventory Service, and the Inventory database is locked, the Order Service waits. And waits. Eventually, all threads in the Order Service are blocked waiting for Inventory.
You need to fail fast. If the Inventory Service is slow, stop calling it and return a default error or cached data immediately.
If you are in the Java ecosystem, Hystrix is the standard (though maintenance mode was announced recently, it is still the bedrock of 2019 stacks). For others, implement a simple timeout strategy in your code.
Here is a Python example using a simple decorator to implement basic circuit breaker logic:
import time
from functools import wraps

class CircuitBreaker:
    def __init__(self, exceptions, threshold=5, delay=60):
        self.exceptions = exceptions
        self.threshold = threshold
        self.delay = delay
        self.failures = 0
        self.last_failure_time = None

    def __call__(self, func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            if self.failures >= self.threshold:
                if time.time() - self.last_failure_time > self.delay:
                    self.failures = 0  # Reset (Half-Open state simulation)
                else:
                    raise Exception("Circuit is OPEN. Call rejected.")
            try:
                return func(*args, **kwargs)
            except self.exceptions:
                self.failures += 1
                self.last_failure_time = time.time()
                raise
        return wrapper

# Usage
@CircuitBreaker(exceptions=(ConnectionError, TimeoutError))
def call_inventory_service(sku):
    # Network call logic here
    pass
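In practice you combine the breaker with a tight timeout and a fallback, so the caller never blocks on a sick dependency. A hedged sketch building on the CircuitBreaker class above, assuming the requests library; the inventory URL and cached response are placeholders:
import requests

FALLBACK_STOCK = {"available": None, "source": "cache"}

# requests.RequestException covers connection errors, timeouts, and the
# HTTPError raised by raise_for_status(), so all of them trip the breaker.
@CircuitBreaker(exceptions=(requests.RequestException,))
def call_inventory_service(sku):
    # The short timeout is what makes "fail fast" possible in the first place.
    response = requests.get(f"http://inventory.internal:8080/stock/{sku}", timeout=2)
    response.raise_for_status()
    return response.json()

def get_stock(sku):
    try:
        return call_inventory_service(sku)
    except Exception:
        # Circuit is open or the call failed: serve stale cached data
        # instead of letting request threads pile up. In production you
        # would catch a dedicated exception type rather than Exception.
        return FALLBACK_STOCK
The caller gets an answer in milliseconds either way, which is exactly what keeps one slow service from starving the rest of the platform.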
Infrastructure Matters: The Noise Problem
Microservices generate a massive amount of internal traffic (East-West traffic) and logging I/O. Every request hits the Gateway, then Service A, then Service B, then the Database, and logs are written at every step.
On budget VPS providers using OpenVZ or older shared kernels, you suffer from "noisy neighbor" syndrome. If another customer on the host runs a heavy backup script, your I/O wait times spike, and your microservices start timing out. This causes cascading failures.
This is where CoolVDS takes a different stance. We use KVM (Kernel-based Virtual Machine) virtualization exclusively. This provides true hardware isolation. Furthermore, our storage backend is purely NVMe.
IOPS Benchmark: HDD vs SSD vs NVMe (2019 Average)
| Storage Type | Random Read IOPS | Latency |
|---|---|---|
| SATA HDD (7200 RPM) | ~80 - 120 | 10-15 ms |
| Standard SSD (SATA) | ~5,000 - 10,000 | 0.5 ms |
| CoolVDS NVMe | ~20,000+ | 0.05 ms |
When you have twenty microservices all trying to write logs simultaneously, that difference between 0.5ms and 0.05ms is the difference between a smooth UI and a 504 Gateway Timeout.
The Compliance Angle: Datatilsynet & GDPR
Since GDPR came into full force last year, data residency is critical. If you are serving Norwegian customers, storing personal data (PII) on US-controlled servers adds a layer of legal complexity regarding the Privacy Shield framework.
Hosting your microservices cluster on CoolVDS ensures your data stays within the EEA, simplifying your compliance documentation for Datatilsynet audits. We control our hardware stack, meaning we know exactly where your bits physically reside.
Deploying Your First Cluster
To get started, you don't need a massive budget. A basic microservices setup requires at least three nodes to simulate a distributed environment effectively:
- Gateway Node: Runs Nginx/Traefik (2 vCPU, 4GB RAM).
- Worker Node A: Runs App Services (4 vCPU, 8GB RAM).
- Worker Node B: Runs Database/Consul (4 vCPU, 8GB RAM).
You can spin this infrastructure up using our API in under 55 seconds. Don't let I/O wait times kill your architecture before you even launch.
Ready to test your architecture? Deploy a high-performance NVMe instance on CoolVDS today and see how fast your microservices can really fly.