Microservices Architecture: Stop Building Distributed Monoliths on Slow Hardware

Everyone wants to be Netflix. But if you are reading this, you probably don't have Netflix's budget, and you certainly don't have their engineering headcount. I've seen too many Norwegian startups break a perfectly functional Laravel or Django monolith into twenty fragmented services, only to realize they've traded function calls (nanoseconds) for HTTP requests (milliseconds). The result? A distributed monolith that is harder to debug and slower for the end user.

It is June 2021. The container ecosystem has matured. Kubernetes v1.21 is stable. Yet, the fundamental laws of physics haven't changed: latency is the enemy. If your microservices architecture ignores infrastructure reality, you are building a house of cards.

The "API Gateway" Pattern: Your First Line of Defense

Exposing every microservice directly to the public internet is a security suicide mission. You need a gatekeeper. In 2021, while tools like Kong or Traefik are popular, good old NGINX remains the undisputed king of performance per watt.

In a recent project migrating a booking platform in Oslo, we faced a "thundering herd" problem. The frontend was hammering the `inventory-service` and `pricing-service` simultaneously. By implementing an API Gateway, we could aggregate these requests. But here is the catch: if your Gateway is on a VPS with noisy neighbors, the context switching overhead will kill your throughput.

Here is a production-ready NGINX configuration snippet for an API Gateway handling upstream keepalives. Most people forget the `keepalive` directive, forcing a new TCP handshake for every internal request. That is amateur hour.

upstream backend_inventory {
    server 10.10.0.5:8080;
    server 10.10.0.6:8080;
    keepalive 64;    # cache up to 64 idle upstream connections per worker
}

server {
    listen 80;
    server_name api.cool-client.no;

    location /api/inventory {
        proxy_pass http://backend_inventory;
        proxy_http_version 1.1;                      # keepalive requires HTTP/1.1
        proxy_set_header Connection "";              # strip "Connection: close" so upstream connections stay open
        proxy_set_header X-Real-IP $remote_addr;     # pass the client IP to the backend
        proxy_next_upstream error timeout http_500;  # retry the next backend on failure
    }
}
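
Once that is live, a quick smoke test from the outside (hostname as configured in server_name above):

curl -i http://api.cool-client.no/api/inventory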

The "Sidecar" Pattern: Don't Let Developers Write Network Code

Developers should write business logic, not retry logic. If every service needs to implement its own circuit breaking, rate limiting, and mTLS, you will end up with a mismatched mess of libraries.

This is where the Sidecar pattern (popularized by Kubernetes) comes in. You attach a proxy container to your main application container. The app talks to localhost, and the proxy handles the scary network stuff. In 2021, Istio is the heavyweight here, but on many CoolVDS deployments we see Linkerd used as a lighter, faster alternative.
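
To make the pattern concrete, here is a minimal, schematic Pod manifest. The names, image tags, and ports are illustrative only; in a real mesh, Istio or Linkerd injects the proxy container automatically (and Envoy would need its own config file), so treat this as a sketch of the shape, not a working deployment:

apiVersion: v1
kind: Pod
metadata:
  name: inventory
spec:
  containers:
  - name: app                        # business logic only; talks to localhost
    image: registry.example.no/inventory:1.4
    ports:
    - containerPort: 5000
  - name: proxy                      # sidecar: mTLS, retries, circuit breaking
    image: envoyproxy/envoy:v1.18.3
    ports:
    - containerPort: 15001           # traffic enters and leaves through the proxy

Both containers share the Pod's network namespace, which is exactly why the app can reach the proxy on 127.0.0.1.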

Pro Tip: If you are running Kubernetes on bare metal or KVM instances, ensure your MTU settings match the underlying network. Standard Ethernet is 1500 bytes, but some overlay networks (VXLAN/Flannel) add encapsulation overhead. A mismatched MTU causes packet fragmentation and random timeouts that are a nightmare to debug.
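
Two quick checks for MTU trouble (the interface name and peer IP are placeholders for your environment):

# Show the MTU of the node's primary interface
ip link show eth0

# Send a 1472-byte payload with "don't fragment" set:
# 1472 + 8 (ICMP) + 20 (IP) = 1500. If this fails while
# smaller sizes work, something in the path eats large packets.
ping -M do -s 1472 10.10.0.6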

Infrastructure Matters: The Hardware Reality

Microservices are chatty. Service A calls Service B, which calls Service C. If Service A is on a host in Oslo and Service B is on a host in Frankfurt, your latency floor is physically limited by the speed of light in fiber (~20ms round trip). Accumulate that over five nested calls, and your page load time is gone.

This is why location matters. For Norwegian businesses, hosting your cluster within Norway (like on CoolVDS infrastructure) isn't just about patriotism—it's about physics. Connecting to NIX (Norwegian Internet Exchange) directly reduces hops.
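
You can count those hops yourself. A quick sketch with mtr (the target hostname is illustrative; point it at your own endpoint):

# Per-hop latency and hop count, averaged over 20 cycles
mtr --report --report-cycles 20 api.cool-client.no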

Optimizing Linux for Microservices

Out-of-the-box Linux kernels are tuned for general-purpose computing, not high-frequency microservice communication. You need to tweak `sysctl.conf` to handle the massive number of ephemeral ports and connections.

Add these lines to /etc/sysctl.conf on your nodes:

# Allow more local ports for heavy outgoing connections
net.ipv4.ip_local_port_range = 1024 65535

# Reuse sockets in TIME_WAIT state for new outgoing connections
net.ipv4.tcp_tw_reuse = 1

# Increase max open files (critical for NGINX/proxies)
fs.file-max = 2097152

Apply the settings with:

sysctl -p
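
To read back a single value and confirm it took effect:

sysctl net.ipv4.tcp_tw_reuse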

Compliance: The Schrems II Headache

Since the Schrems II ruling last year (2020), transferring personal data to US-owned cloud providers has become a legal minefield for European companies. The Datatilsynet (Norwegian Data Protection Authority) is watching closely.

If your microservices architecture relies on managed proprietary services from the US giants, you are creating a compliance risk. Building on agnostic KVM-based VPS instances provided by a European host like CoolVDS gives you full control. You own the stack. You know where the bits live.

Service Discovery: Where is everything?

Hardcoding IP addresses in 2021 is a fireable offense. Services die and respawn with new IPs. You need dynamic service discovery.

If you aren't using K8s DNS, HashiCorp's Consul is the standard. Here is a simple check definition to ensure you aren't routing traffic to a zombie service:

{
  "check": {
    "name": "Inventory Health Check",
    "http": "http://localhost:5000/health",
    "interval": "10s",
    "timeout": "1s"
  }
}
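
To put the check to work, drop it into the agent's config directory and reload. A minimal sketch, assuming a default Linux install (the filename is illustrative):

sudo cp inventory-check.json /etc/consul.d/
consul reload

Tie the check to a service registration (via the service_id field) so Consul DNS stops handing out instances whose checks are failing.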

Database Patterns: The Shared Data Trap

The worst anti-pattern I see? Five microservices reading from the same `users` table in a single MySQL database. You have coupled them at the data layer. If you change the schema, everything breaks.

Each service needs its own datastore. This sounds expensive, but it doesn't mean a dedicated physical server for each. With CoolVDS NVMe instances, the I/O throughput is high enough that you can run multiple database instances (e.g., separate PostgreSQL databases or Redis processes) on the same node without I/O wait choking the CPU.
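
As a sketch of what that isolation looks like on a single PostgreSQL instance (names and passwords are placeholders):

-- One role and one database per service;
-- no service can reach into another's tables.
CREATE ROLE inventory_svc LOGIN PASSWORD 'change-me';
CREATE DATABASE inventory OWNER inventory_svc;

CREATE ROLE pricing_svc LOGIN PASSWORD 'change-me';
CREATE DATABASE pricing OWNER pricing_svc;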

Testing Latency Yourself

Don't take my word for it. Spin up two instances and test the internal network throughput. We use `iperf3` for this.

Server A (Receiver):

iperf3 -s

Server B (Sender):

iperf3 -c 10.10.0.5 -t 30
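
Throughput is only half the story. For the latency floor itself, a plain ping between the same two instances does the job:

# 20 round trips; watch the min/avg/max/mdev summary line
ping -c 20 10.10.0.5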

If you aren't seeing near-line speed, your virtualization overhead is too high. This is why we stick to KVM at CoolVDS—container-based virtualization (like OpenVZ) often introduces unpredictable jitter under load.

Conclusion

Microservices solve organizational scaling problems, but they create technical scaling problems. To succeed in 2021, you need strict patterns (Gateway, Sidecar), robust Linux tuning, and infrastructure that guarantees low latency and data sovereignty.

Don't let I/O wait times kill your distributed architecture. Deploy a high-performance KVM instance on CoolVDS today and keep your packets inside Norway.