Microservices Architecture in 2018: Survival Guide for Nordic DevOps
The monolith is choking your deployment pipeline. I know the feeling. You push a minor CSS fix, and suddenly the checkout service hangs because every module shares the same JDBC connection pool. It is May 2018, and if you are still manually FTPing JAR files to a single Tomcat instance, your setup is a liability.
But let's be honest: migrating to microservices is trading one set of problems for another. You trade code complexity for operational complexity. Instead of one broken server, you now have fifty services failing in unison because of a network partition.
With the GDPR enforcement deadline of May 25th just weeks away, the stakes are higher. If your user data is scattered across opaque containers with no clear audit trail or geographical boundary, the Norwegian Datatilsynet will not be amused. Here is how to architect microservices properly using the tools we have right now, ensuring performance does not tank while keeping your data strictly on Norwegian soil.
1. The API Gateway: Your First Line of Defense
Direct client-to-microservice communication is a disaster. It exposes your internal topology and makes SSL termination a nightmare. You need an API Gateway. It acts as the single entry point, handling routing, rate limiting, and authentication.
In 2018, while Netflix Zuul is popular in the Java ecosystem, Nginx remains the king of performance for the rest of us. It is lightweight, battle-tested, and doesn't eat RAM like a JVM process.
Configuration Pattern: The Reverse Proxy
Here is a production-ready snippet for nginx.conf to route traffic to dynamic upstreams. Note the proxy_next_upstream directive—this is crucial for failover.
http {
    upstream user_service {
        server 10.0.1.5:8080 max_fails=3 fail_timeout=30s;
        server 10.0.1.6:8080 max_fails=3 fail_timeout=30s;
        keepalive 64;
    }

    upstream order_service {
        server 10.0.2.5:8080;
        server 10.0.2.6:8080;
    }

    server {
        listen 443 ssl http2;
        server_name api.yourdomain.no;

        # Point these at your own certificate and key
        ssl_certificate     /etc/nginx/ssl/api.yourdomain.no.crt;
        ssl_certificate_key /etc/nginx/ssl/api.yourdomain.no.key;

        # SSL optimizations for low latency
        ssl_session_cache shared:SSL:10m;
        ssl_session_timeout 10m;

        location /users/ {
            proxy_pass http://user_service;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            proxy_set_header X-Real-IP $remote_addr;
            # Don't let a timeout kill the UX
            proxy_next_upstream error timeout http_500 http_502 http_503 http_504;
        }

        location /orders/ {
            proxy_pass http://order_service;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            proxy_set_header X-Real-IP $remote_addr;
            proxy_next_upstream error timeout http_500 http_502 http_503 http_504;
        }
    }
}
Pro Tip: Enabling HTTP/2 (available since Nginx 1.9.5) significantly reduces latency for mobile clients on 4G networks across Oslo and Bergen. Do not skip this.
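The config above covers routing and failover, but the gateway is also where rate limiting belongs. A minimal sketch using nginx's limit_req module; the zone name api_limit and the 10 requests per second budget are placeholders you should tune for your own traffic:
    # In the http block: track request rates per client IP in a shared 10 MB zone
    limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

    # In the location /users/ block: allow short bursts of 20, reject the excess with 503
    limit_req zone=api_limit burst=20 nodelay;
The nodelay flag serves the burst immediately instead of queueing it, which keeps tail latency flat for well-behaved clients.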
2. Service Discovery: Hardcoding IPs is Suicide
In a containerized environment (Docker 18.03), containers die and respawn with new IPs. If you hardcode 192.168.1.50 in your config, you will be woken up at 3 AM.
We use HashiCorp Consul. Unlike Eureka, it’s language-agnostic and uses DNS. Your application just queries user-service.service.consul, and it resolves to the healthy container IPs.
Here is a standard Consul agent configuration for a worker node:
{
  "datacenter": "oslo-dc1",
  "data_dir": "/opt/consul",
  "log_level": "INFO",
  "node_name": "worker-01",
  "server": false,
  "retry_join": ["10.0.0.1", "10.0.0.2", "10.0.0.3"],
  "bind_addr": "10.0.0.5",
  "enable_script_checks": true
}
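Joining the cluster is only half the story: each service still has to register itself, with a health check, so the DNS name resolves only to healthy instances. A minimal sketch of a service definition you could drop into /etc/consul.d/ on the same node; the /health endpoint and the 10s interval are assumptions about your application:
{
  "service": {
    "name": "user-service",
    "port": 8080,
    "check": {
      "http": "http://localhost:8080/health",
      "interval": "10s",
      "timeout": "2s"
    }
  }
}
Reload the agent and verify with dig @127.0.0.1 -p 8600 user-service.service.consul; Consul answers DNS queries on port 8600 by default.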
3. The Infrastructure Bottleneck: I/O Wait
This is where most "cloud" providers lie to you. They sell you vCPUs, but they throttle your disk I/O. In a monolith, you have one database and one connection pool. In microservices, you might have 12 containers logging to disk simultaneously, plus a Kafka broker, plus three different database instances (Postgres, Mongo, Redis).
If you run this on standard HDD or even cheap SATA SSD VPS hosting, your iowait will spike, and your CPU will sit idle waiting for disk operations to finish. I've seen 'high performance' clusters grind to a halt because the underlying storage couldn't handle the random write patterns of distributed logging.
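Before blaming your code, measure it. A quick sketch of how to spot I/O starvation with the sysstat tools (package and device names vary by distro):
    # Extended per-device stats every second: watch %iowait and the device's %util column
    iostat -x 1
    # The 'wa' column here tells the same story at a glance
    vmstat 1
If %util sits near 100 while actual throughput stays low, the storage is the bottleneck, not your services.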
This is why at CoolVDS we standardized on NVMe storage and KVM virtualization. OpenVZ (container-based virtualization) often lets noisy neighbors steal your I/O cycles; KVM provides the hardware-level isolation you need for heavy Docker workloads.
Kernel Tuning for Microservices
Default Linux kernel settings are tuned for long-lived connections, not the thousands of ephemeral connections microservices generate. You need to tweak /etc/sysctl.conf to avoid running out of ephemeral ports.
# Allow reuse of sockets in TIME_WAIT state for new connections
net.ipv4.tcp_tw_reuse = 1
# Increase the range of ephemeral ports
net.ipv4.ip_local_port_range = 1024 65000
# Increase max open files (critical for DBs and Nginx)
fs.file-max = 2097152
# Increase backlog for high connection bursts
net.core.somaxconn = 65535
Apply these with sysctl -p. Without this, your API Gateway will start dropping connections under load.
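One related gotcha, assuming stock distro defaults: fs.file-max raises the system-wide ceiling, but each process is still capped by its own nofile limit. A minimal sketch for /etc/security/limits.conf (for services managed by systemd, set LimitNOFILE= in the unit file instead):
    # Raise per-process open file limits for all users
    *    soft    nofile    65535
    *    hard    nofile    65535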
4. The "Circuit Breaker": Failing Gracefully
If your Pricing Service goes down, your Product Page should not return a 500 error. It should show the product without the price (or a cached price). This is the Circuit Breaker pattern.
Netflix Hystrix is the standard here. If you are running Java/Spring Boot 2.0, it's as simple as an annotation:
@Service
public class PricingService {

    @HystrixCommand(fallbackMethod = "getCachedPrice")
    public BigDecimal getPrice(String productId) {
        // Call external microservice
        return restTemplate.getForObject("http://pricing-service/" + productId, BigDecimal.class);
    }

    public BigDecimal getCachedPrice(String productId) {
        // Return stale data or 0.00 to keep the page alive
        return redisCache.get(productId);
    }
}
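The annotation only fires if Hystrix is actually enabled for the application. A minimal sketch, assuming the Spring Cloud Hystrix starter is on the classpath (spring-cloud-starter-netflix-hystrix on the Finchley release train that targets Boot 2.0); CatalogApplication is just a placeholder name:
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.circuitbreaker.EnableCircuitBreaker;

@SpringBootApplication
@EnableCircuitBreaker  // enables @HystrixCommand processing
public class CatalogApplication {
    public static void main(String[] args) {
        SpringApplication.run(CatalogApplication.class, args);
    }
}
Keep fallbacks cheap and local: if getCachedPrice itself makes a network call, you have moved the failure, not contained it.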
5. The GDPR Angle: Data Sovereignty
With GDPR effective May 25th, latency isn't the only reason to host locally. If you store PII (Personally Identifiable Information), you must know exactly where that data lives. Using a US-based cloud provider's "European Zone" can be legally gray depending on how they replicate backups.
Hosting on CoolVDS in Norway ensures your data stays within the jurisdiction of Norwegian law and the EEA framework. Plus, the latency from our Oslo data center to local ISPs (Telenor, Telia) is typically under 3ms. Speed is a feature, but compliance is a requirement.
Summary
Microservices require more than just breaking up code. They require a robust orchestration layer and, crucially, infrastructure that doesn't buckle under high I/O concurrency. Don't build a Ferrari engine and put it in a tractor.
Stop fighting noisy neighbors and slow disks. Deploy your Docker swarm on a CoolVDS NVMe instance today. We provide the raw KVM power; you provide the code.