The "Distributed Monolith" is Killing Your Sleep
It is July 2018. If you are like most of the engineering teams I talk to in Oslo, you have spent the last year chopping your perfectly functional PHP or Java monolith into twenty different microservices. You put them in Docker containers. You pushed them to Kubernetes. You felt modern.
And then the pager started going off at 3 AM.
Latency spiked. Services couldn't find each other. Debugging a single request now involves grepping logs across five different pods. Congratulations, you have built a distributed monolith. It has all the complexity of a distributed system with none of the benefits. I have been there. I have seen `502 Bad Gateway` errors haunt my dreams.
This is where a Service Mesh comes in. Specifically, we are looking at Istio (currently stabilizing around version 0.8/1.0 RC). It is not a magic wand, but it is the most practical way to regain control over traffic without rewriting every single application to embed a fat client library.
Why You Need a Mesh (And Why It Hurts)
A service mesh injects a tiny proxy (Envoy) alongside every single pod in your cluster. This is the "sidecar" pattern. Instead of Service A talking directly to Service B, Service A talks to its local proxy, which talks to Service B's proxy, which hands the request to Service B.
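To make that concrete, here is a trimmed sketch of what a pod spec looks like after injection. The names and image tags below are illustrative, not copied from a real cluster:
# Illustrative pod spec after sidecar injection (heavily trimmed)
apiVersion: v1
kind: Pod
metadata:
  name: checkout-v1   # hypothetical pod
spec:
  initContainers:
  - name: istio-init        # rewrites iptables so all pod traffic is redirected through Envoy
    image: docker.io/istio/proxy_init:0.8.0
  containers:
  - name: checkout          # your application, completely unchanged
    image: example/checkout:v1   # hypothetical image
  - name: istio-proxy       # the Envoy sidecar that now owns all network I/O
    image: docker.io/istio/proxyv2:0.8.0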
Pro Tip: Do not implement a service mesh just to look cool. It adds complexity. Only do it if you need observability, traffic splitting (canary deploys), or strict mTLS.
The GDPR Factor: mTLS is Non-Negotiable
With the GDPR enforcement that kicked in back in May, the Norwegian Data Protection Authority (Datatilsynet) is not playing around. If you are handling PII (Personally Identifiable Information) for Norwegian citizens, internal traffic inside your cluster should be encrypted. Relying on the perimeter firewall is 2015 thinking.
Istio handles this by automatically rotating certificates and enforcing mutual TLS (mTLS) between services. You don't change a line of application code.
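In the 0.8 Helm chart, turning this on is a single value at install time (flag name per the stock chart values; verify against your chart version):
# Enable mesh-wide mutual TLS when installing the chart
helm install install/kubernetes/helm/istio --name istio --namespace istio-system \
  --set global.mtls.enabled=true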
Implementation Strategy: The Hard Way
Let's assume you have a Kubernetes 1.10+ cluster running. If you are trying to run this on shared hosting or OpenVZ containers, stop now. It won't work. You need your own kernel. You need KVM. This is why we default to KVM virtualization at CoolVDS: you cannot run the iptables magic required for Envoy's traffic interception on a shared kernel.
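A thirty-second sanity check before you burn an evening on this (assumes a systemd-based distro):
# Should print "kvm" on a proper KVM guest; "openvz" or "lxc" means stop here
systemd-detect-virt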
1. The Install
We are using the Helm charts provided by the Istio release. Tiller must be running in your cluster.
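If Tiller is not set up yet, the minimal bootstrap looks like this. Note that cluster-admin is the lazy default; scope it down for production:
# Give Tiller a service account and install it into the cluster
kubectl create serviceaccount tiller -n kube-system
kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller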
# Download the latest release (0.8.0 as of writing)
curl -L https://git.io/getLatestIstio | ISTIO_VERSION=0.8.0 sh -
cd istio-0.8.0
# Add the bin to your path
export PATH=$PWD/bin:$PATH
# Install via Helm (Assuming Tiller is secure!)
helm install install/kubernetes/helm/istio --name istio --namespace istio-system
Wait for the pods. Use the watch command. If your underlying storage is slow, etcd will choke here. This is why we insist on NVMe storage for all CoolVDS instances: etcd latency directly impacts how fast these pods come up.
kubectl get pods -n istio-system -w
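If pods hang in ContainerCreating or etcd starts timing out, measure your disk's fsync latency. This fio invocation approximates etcd's small, synced WAL writes (sizes are ballpark figures, not gospel):
# Small sequential writes with an fdatasync after each -- roughly what etcd does
fio --name=etcd-fsync --rw=write --ioengine=sync --fdatasync=1 \
    --directory=/var/lib/etcd --size=22m --bs=2300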
2. The Sidecar Injection
You have two choices: manual injection or automatic. For production in 2018, I prefer manual injection in CI/CD pipelines so I know exactly what is deployed. Automatic injection via MutatingAdmissionWebhook is cool but can be flaky if the API server is under load.
# The manual way
istioctl kube-inject -f deployment.yaml | kubectl apply -f -
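If you do want the automatic route, it is one label on the namespace (the injector webhook must be deployed):
# Opt a namespace into automatic sidecar injection
kubectl label namespace default istio-injection=enabled
# New pods should now show 2/2 ready containers: the app plus istio-proxy
kubectl get pods -n default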
Traffic Management: The Real Power
Here is where the magic happens. Let's say you are deploying a new version of your checkout service. You want 1% of traffic to go there. In the old days, you'd mess with Nginx weights. In Istio, you define a VirtualService.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: checkout-service
spec:
  hosts:
  - checkout
  http:
  - route:
    - destination:
        host: checkout
        subset: v1
      weight: 99
    - destination:
        host: checkout
        subset: v2
      weight: 1
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: checkout-service
spec:
  host: checkout
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
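Save both objects to one file, apply, and shift the weights as confidence grows. Assuming you called it `checkout-split.yaml`:
kubectl apply -f checkout-split.yaml
# Later: edit the weights (90/10, 50/50, 0/100) and re-apply. No pod restarts, no redeploys.
kubectl get virtualservice checkout-service -o yaml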
The Hidden Cost: Latency and Resources
Here is the part the glossy brochures don't tell you. Envoy is written in C++ and it is fast. Very fast. But it is still a network hop. It adds latency. In our benchmarks, a full mesh implementation adds about 4-8ms of overhead per request chain.
If your servers are in Frankfurt and your users are in Oslo, you are already fighting 25-30ms of physics. Adding 8ms on top of that is noticeable.
This is why local hosting matters. By hosting on CoolVDS infrastructure in Oslo, your base latency to local users is under 5ms. You have the "latency budget" to afford the service mesh overhead. If you host in the US, the combined latency (trans-Atlantic + Mesh) will make your application feel sluggish.
Resource Starvation
Envoy needs CPU. If you cram your K8s nodes onto cheap VPS providers that oversell their CPU (steal time), your mesh will stutter. When a packet arrives, Envoy needs to wake up, process rules, encrypt/decrypt, and forward. Context switching is heavy.
We configure CoolVDS nodes with dedicated CPU threads for this exact reason. You need consistent predictable compute, or your 99th percentile latency will skyrocket.
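You do not have to take your provider's word for it. Steal time shows up in the `st` column of vmstat; anything consistently above zero means the hypervisor is robbing you:
# Sample CPU stats five times, one second apart; the last column (st) is steal time
vmstat 1 5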
Observability: Seeing the Matrix
Once Istio is running, you get Grafana dashboards out of the box. You can see the traffic flow. You can watch 500 errors propagate from the database up through the API layer. It integrates with Jaeger for distributed tracing.
kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=grafana -o jsonpath='{.items[0].metadata.name}') 3000:3000
Open `http://localhost:3000`. You will see the "Istio Mesh Dashboard". It is terrifyingly beautiful.
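Tracing is one more port-forward away, assuming the Jaeger addon is enabled and the pods carry the stock `app=jaeger` label:
kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=jaeger -o jsonpath='{.items[0].metadata.name}') 16686:16686
Open `http://localhost:16686` and pull up a slow trace. Watching a single request fan out across nine services is the fastest way to find your real bottleneck.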
Final Verdict
Service Mesh is the future of DevOps, but it is heavy machinery. It demands respect. It demands resources.
- Don't use it for simple apps. If you have 3 containers, use Nginx.
- Ensure your infrastructure handles it. Use KVM. Use NVMe. Ensure high single-core performance.
- Keep it local. Offset the mesh latency by hosting near your users in Norway.
If you are ready to build a cluster that doesn't buckle under the weight of Envoy proxies, deploy a high-performance KVM instance on CoolVDS today. We keep the ping low so you can keep the complexity high.