Microservices Architecture: Patterns for Performance and Compliance in a Post-Schrems II World
Let's be honest: monolithic applications are comfortable. They are easy to deploy, easy to debug, and everything is in one place. But there comes a specific Tuesday afternoon, usually right before a major campaign launch, when that comfort turns into a nightmare. A single memory leak in the image processing module crashes the checkout system. The database locks up because the reporting module decided to run a full table scan.
I have spent the last decade debugging distributed systems across Europe, and if there is one thing I have learned, it is that microservices solve scalability problems by introducing network problems.
If you are engineering for the Norwegian market today, in late 2020, the landscape has shifted. It is no longer just about splitting up code; it is about latency and legality. With the recent Schrems II ruling invalidating the Privacy Shield, hosting your microservices on US-controlled clouds (even with servers in Frankfurt) is a compliance minefield. You need architecture that is fast, resilient, and legally sound.
1. The Strangler Pattern: Don't Rewrite, Re-route
The biggest mistake CTOs make is the "Big Bang" rewrite. They halt development for six months to rebuild everything in Go or Rust. This almost always fails. The Strangler Pattern is the only sane way to migrate. You gradually strip functionality from the monolith and route it to new microservices.
We use Nginx as the edge router here. It is battle-hardened and handles context switching faster than most application-layer gateways. Here is a production-ready configuration snippet I used recently to strangle a legacy PHP e-commerce site, routing only the /cart logic to a new Node.js service:
upstream legacy_monolith {
    server 10.0.0.5:80;
    keepalive 32;
}

upstream new_cart_service {
    server 10.0.0.20:3000;
    keepalive 32;
}

server {
    listen 80;
    server_name shop.example.no;

    # The new microservice handles the cart
    location /cart/ {
        proxy_pass http://new_cart_service;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header X-Real-IP $remote_addr;
        # Critical: fail back to monolith if the service is down
        error_page 502 503 504 = @fallback;
    }

    # Everything else goes to the monolith
    location / {
        proxy_pass http://legacy_monolith;
    }

    location @fallback {
        proxy_pass http://legacy_monolith;
    }
}
Pro Tip: Notice the error_page directive. If your new shiny microservice fails, Nginx transparently sends the traffic back to the monolith. This is how you sleep at night.
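Before trusting that failover in production, break it on purpose. Here is a quick smoke test, assuming the IPs from the config above and a hypothetical cart-service systemd unit (the unit name is a placeholder, not part of the config):

# Stop the new service and confirm Nginx falls back to the monolith
ssh 10.0.0.20 'sudo systemctl stop cart-service'   # "cart-service" is a placeholder unit name
curl -I http://shop.example.no/cart/
# Expect a 200 served by the monolith, not a 502
ssh 10.0.0.20 'sudo systemctl start cart-service'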
2. The Data Sovereignty Pattern (The Norwegian Context)
Since July 2020, the Datatilsynet (Norwegian Data Protection Authority) has been very clear: relying on US-based cloud providers for processing personal data of Norwegian citizens is risky. If you are building microservices that handle PII (Personally Identifiable Information), you need to control where that data lives physically.
Latency is the other side of this coin. If your users are in Oslo and your servers are in a generic "Europe-North" region that actually sits in Ireland, you are adding 30-40ms of round-trip time (RTT). In a microservices architecture where a single frontend request might trigger 10 internal service calls, that latency compounds.
The Math of Latency Cascades:
- Monolith: User -> Server (30ms each way) -> DB (0.5ms) -> Return. Total: ~60ms.
- Microservices (hosted far away): User -> Gateway (30ms each way) -> Auth Service (10ms) -> Inventory Service (10ms) -> Pricing Service (10ms) -> Return. Total: ~90ms of pure network time, and 120ms+ once TLS handshakes and serialization are added (see the curl sketch below).
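Don't take my word for the numbers; measure them. curl exposes per-phase timers that show exactly where your round trip goes (the URL is a placeholder for your own gateway):

# Per-phase timing against your edge
curl -o /dev/null -s -w 'DNS: %{time_namelookup}s  TCP: %{time_connect}s  TTFB: %{time_starttransfer}s  Total: %{time_total}s\n' \
  http://shop.example.no/

Run this from Oslo against a Dublin region and the TCP connect time alone will tell the story.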
Hosting on CoolVDS in Norway reduces that initial hop to <5ms for local users. When you control the infrastructure, you can also use private networking (VLANs) between your VPS instances so that unencrypted inter-service traffic never touches the public internet, which goes a long way toward GDPR's security-of-processing requirements (Article 32).
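A quick sanity check that your internal services are actually bound to the private VLAN and not to a public interface (the 10.0.0.x addresses follow the examples above):

# Listening sockets should show 10.0.0.x for internal services, never 0.0.0.0
ss -tlnp
# Desired state for the cart service:
# LISTEN  0  511  10.0.0.20:3000  ...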
3. The "Database-per-Service" I/O Trap
A core tenet of microservices is that services should not share databases. The Inventory Service has its own PostgreSQL, and the User Service has its own MariaDB. This ensures loose coupling.
However, this creates a massive I/O bottleneck. Instead of one database engine optimizing writes to disk, you now have ten database engines fighting for IOPS (Input/Output Operations Per Second). On standard HDD or cheap SSD VPS hosting, this is where the architecture falls apart. You will see iowait spikes in top, and your APIs will time out.
You need NVMe. Period. Here is how you check your disk latency. If you see wait times over 10ms, your storage is too slow for microservices.
# Install ioping to test disk latency
sudo apt-get install ioping
# Run a latency test
ioping -c 10 .
# Expected Output on CoolVDS NVMe:
# 4 KiB from . (ext4 /dev/vda1): request=1 time=225 us
# 4 KiB from . (ext4 /dev/vda1): request=2 time=180 us
# ...
# min/avg/max/mdev = 165 us / 205 us / 310 us / 45 us
If your current host is giving you milliseconds (ms) instead of microseconds (us), you are bottlenecking your architecture before you even write a line of code.
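ioping measures latency; fio measures sustained performance under concurrency, which is closer to the database-per-service workload described above. A sketch; adjust size and runtime to your disk:

# Install fio and simulate several small DB engines competing for IOPS:
# 4K random writes with direct I/O and a modest queue depth
sudo apt-get install fio
fio --name=dbsim --rw=randwrite --bs=4k --size=1G --runtime=60 --time_based \
    --direct=1 --ioengine=libaio --iodepth=16 --group_reporting

Look at the IOPS figure and the clat (completion latency) percentiles. If the 99th percentile is in milliseconds, your databases will feel it.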
4. Resilience: The Circuit Breaker
In 2020, we can't assume networks are reliable. If the Payment Service is slow, it shouldn't take down the Order Service. We implement the Circuit Breaker pattern. If a service fails repeatedly, the breaker "trips" and returns an immediate error (or cached data) instead of waiting for a timeout.
Implementing this in application code is fine (libraries like resilience4j do it well), but implementing it in your infrastructure is better, because it protects every service regardless of language. Orchestrators like Docker Swarm or Kubernetes (v1.18+) give you health checks and automatic restarts, but for a pure VPS approach, we often use HAProxy for this logic.
backend payment_service
    # Track each client IP's HTTP error rate over a sliding 10-second window
    stick-table type ip size 1m expire 10s store http_err_rate(10s)
    tcp-request content track-sc1 src
    # Trip the breaker above 50 errors per 10 seconds (a rate, not a percentage)
    acl high_error_rate sc1_http_err_rate gt 50
    # Reject the connection immediately instead of waiting for a timeout
    tcp-request content reject if high_error_rate
    server payment1 10.0.0.30:8080 check inter 2s rise 2 fall 3
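Assuming you have the HAProxy admin socket enabled (e.g. stats socket /var/run/haproxy.sock in the global section, which is an addition to the snippet above), you can watch the breaker's counters in real time:

# Inspect the stick-table tracking per-IP error rates
echo "show table payment_service" | sudo socat stdio /var/run/haproxy.sock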
5. Infrastructure as Code (Ansible)
You cannot manage 20 microservices manually via SSH. You need reproducibility. We rely heavily on Ansible. Below is a snippet from a playbook we use to provision a secured Docker host on a fresh CoolVDS instance. This setup ensures that the Docker daemon is not exposed to the outside world, a common security hole.
---
- name: Secure Docker Host Provisioning
  hosts: microservices_nodes
  become: yes
  tasks:
    - name: Install required system packages
      apt:
        name: ["apt-transport-https", "ca-certificates", "curl", "software-properties-common", "gnupg-agent"]
        state: present
        update_cache: yes
    - name: Add Docker GPG key
      apt_key:
        url: https://download.docker.com/linux/ubuntu/gpg
        state: present
    - name: Add Docker apt repository
      apt_repository:
        repo: deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable
        state: present
    - name: Install Docker Engine (v19.03)
      apt:
        name: "docker-ce=5:19.03.13~3-0~ubuntu-focal"
        state: present
    - name: Configure Docker to bind published ports to loopback only
      copy:
        dest: /etc/docker/daemon.json
        content: |
          {
            "ip": "127.0.0.1",
            "icc": false,
            "no-new-privileges": true
          }
      notify: Restart Docker
  handlers:
    - name: Restart Docker
      service:
        name: docker
        state: restarted
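Running it is the usual two-step; the inventory path and playbook filename here are placeholders for whatever holds your microservices_nodes group:

# Dry-run first, then apply
ansible-playbook -i inventory.ini provision-docker.yml --check
ansible-playbook -i inventory.ini provision-docker.yml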
Conclusion: Infrastructure Defines Architecture
Microservices are not a magic bullet. They are a trade-off. You are trading code complexity for infrastructure complexity. To win this trade, your foundation must be solid. You need raw compute power (KVM to avoid noisy neighbors), extreme I/O speed (NVMe), and legal safety (Norwegian jurisdiction).
Don't let high latency or legal uncertainty kill your project. Build your cluster on infrastructure designed for the realities of 2020.
Ready to decouple? Deploy a high-performance NVMe instance on CoolVDS in Oslo today and ping 127.0.0.1 like you mean it.