Taming Microservices: A First Look at Istio Service Mesh on Kubernetes in Norway

The Monolith is Dead. Long Live the Distributed Nightmare.

If you have been managing production workloads in Oslo or deploying to clients across the Nordics lately, you know the drill. We broke up our monolithic PHP and Java applications into microservices. We promised the CTO it would increase velocity. We promised the developers they could pick their own languages.

We lied about the complexity.

Suddenly, a function call is no longer a memory jump; it's a network packet traversing a virtual switch, potentially hitting a different node, subject to latency, jitter, and failure. Until this week, we were stitching this together with a hodgepodge of Netflix Hystrix, custom Nginx routing, and hope.

Google, IBM, and Lyft have just announced Istio (v0.1). It claims to solve the service-to-service communication problem without polluting your application code. As a DevOps engineer who has spent too many nights debugging intermittent 502s on NIX (Norwegian Internet Exchange), I decided to deploy this immediately on a CoolVDS KVM instance to see if it lives up to the hype.

The Architecture: Why Envoy Proxy Matters

Istio isn't just magic; it relies on the Sidecar Pattern. It injects a small proxy (Envoy) into every single Kubernetes pod, right alongside your application container. This proxy intercepts all traffic entering and leaving the pod.
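
Conceptually, a pod that used to run one container ends up looking roughly like this after injection. This is an illustrative sketch, not the literal output of the injection tool; container names and image tags will differ:

# Illustrative only: an application pod with the Envoy sidecar alongside it
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  containers:
  - name: my-app                      # your unchanged application container
    image: my-app:1.0
    ports:
    - containerPort: 8080
  - name: proxy                       # the injected Envoy sidecar
    image: docker.io/istio/proxy:0.1  # illustrative image reference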

Pro Tip: Do not attempt to run a Service Mesh on OpenVZ or LXC containers provided by budget hosts. The kernel-level iptables manipulation required by Envoy to intercept traffic often fails in shared kernel environments. We use strict KVM virtualization at CoolVDS specifically to support these advanced networking stacks.
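
A thirty-second sanity check before you invest an evening in this: confirm the NAT table actually works on your node, because that is exactly where cheap container-based VPSes fall over.

# Run as root on the node; if the nat table is unavailable, Envoy's traffic interception cannot work
iptables -t nat -L -n >/dev/null && echo "nat table OK" || echo "no nat table available"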

The trade-off is resource usage. If you have 20 microservices, you now have 20 application containers and 20 proxy containers. While Envoy is written in C++ and is highly performant, it still consumes CPU cycles and RAM for connection pooling and telemetry.
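
To put a number on that overhead in your own cluster, `kubectl top` gives a rough per-pod picture, assuming you have Heapster deployed (it is not installed by default on 1.6):

# Requires Heapster; compare the same workload before and after sidecar injection
kubectl top pods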

Implementation: Deploying Istio on CoolVDS

Let's get our hands dirty. I am running this on a CoolVDS 'Pro-NVMe' instance (4 vCPU, 16GB RAM) running Ubuntu 16.04 and Kubernetes 1.6. You need the raw I/O of NVMe because Istio generates a massive amount of telemetry data (distributed tracing) which can choke standard SSDs.
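
If you want to confirm your disk is up to the job before pointing fingers at Istio, a quick fio run gives a baseline. The path and sizes below are arbitrary; fio is available straight from the Ubuntu repositories:

# 4k random writes with direct I/O as a quick stress test
apt-get install -y fio
fio --name=randwrite --rw=randwrite --bs=4k --size=1G --numjobs=4 --direct=1 --directory=/var/tmp --group_reporting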

1. The Setup

First, we download the latest release (0.1.0) which just dropped:

curl -L https://git.io/getLatestIstio | sh -
cd istio-0.1.0
export PATH=$PWD/bin:$PATH
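
Before going further, confirm the client is actually on your PATH. (If the `version` subcommand isn't wired up in your build of this alpha, `istioctl --help` at least proves the binary runs.)

which istioctl
istioctl version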

2. Installing the Control Plane

We apply the core components to the cluster. This sets up the Manager, Mixer, and Ingress controller.

kubectl apply -f install/kubernetes/istio.yaml

Check your pods. If you see `ImagePullBackOff`, check your DNS settings. In Norway, I force my `resolv.conf` to use local ISP resolvers for speed, but Google's 8.8.8.8 is safer for pulling images from GCR (Google Container Registry).

kubectl get pods -n default
# NAME             READY     STATUS    RESTARTS   AGE
# istio-mgr-...    1/1       Running   0          2m
# istio-ingress... 1/1       Running   0          2m
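
If you do get stuck in `ImagePullBackOff`, separate DNS trouble from registry trouble by resolving and pulling by hand from the node:

# Compare resolution via Google's resolver with whatever resolv.conf is using
nslookup gcr.io 8.8.8.8
nslookup gcr.io
# Then grab the exact image name from the failing pod and try pulling it directly
kubectl describe pod <failing-pod> | grep -i image
docker pull <image-from-describe-output>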

3. Injecting the Sidecar

This is where the magic happens. You don't rewrite your code. You use the `istioctl` binary to rewrite your existing deployment YAML on the fly. Here is how we inject a standard Nginx deployment:

kubectl apply -f <(istioctl kube-inject -f my-nginx-deployment.yaml)
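
For reference, `my-nginx-deployment.yaml` is nothing exotic; any plain Deployment works. A minimal sketch (note the `version` label, which Istio's routing rules key on later):

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: my-nginx
        version: v1          # route rules select pods by this label
    spec:
      containers:
      - name: nginx
        image: nginx:1.11
        ports:
        - containerPort: 80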

Under the hood, this adds the `proxy_init` init container, which runs a script to set up `iptables` redirection:

# Inside the generated YAML
initContainers:
- name: init
  image: docker.io/istio/proxy_init:0.1
  args:
  - -p
  - "15001"   # the port Envoy listens on; all pod traffic is redirected here
  - -u
  - "1337"    # UID the proxy runs as, excluded from redirection so its own traffic isn't looped

Traffic Management: The "Canary" Release

One of the biggest pain points in our Oslo datacenter has been safely rolling out updates. With Istio, we can split traffic by percentage, regardless of how many replicas we have.

Here is a route rule, in the current alpha format, that sends 10% of traffic to version 2 of our service:

type: route-rule
name: my-service-canary
spec:
  destination: my-service.default.svc.cluster.local
  precedence: 1
  route:
  - tags:
      version: v2
    weight: 10
  - tags:
      version: v1
    weight: 90

This is far superior to the old method of "deploy one pod and hope." If v2 has a memory leak, only 10% of your users suffer, and you can revert instantly by deleting the rule.
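
Applying and removing rules goes through `istioctl` rather than `kubectl` in this alpha. Roughly, from my notes (the filename is whatever you saved the YAML above as; double-check the delete syntax against the 0.1 docs):

# Apply the canary rule
istioctl create -f my-service-canary.yaml
# See what is currently active
istioctl get route-rules
# Roll back: remove the rule and traffic falls back to default routing
istioctl delete route-rule my-service-canary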

The Hidden Cost: Latency and Resources

Nothing comes for free. In my benchmarks on the CoolVDS instance, adding the Envoy sidecar added approximately 2ms to 4ms of latency per hop. In a complex microservice chain of 10 services, that compounds to somewhere between 20ms and 40ms of added overhead.
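
Those numbers came from hitting the same service before and after injection and comparing latency percentiles. A rough reproduction, assuming `wrk` is installed and the service is exposed through the Istio ingress (the address is a placeholder):

# Run against the un-injected deployment first, then again after kube-inject, and diff the percentiles
wrk -t4 -c50 -d30s --latency http://<ingress-ip>/my-service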

This brings us to Data Locality. With GDPR enforcement looming next year (May 2018), and the general need for speed, where you host matters.

Scenario                                   Network Latency (avg)   Total Transaction Time
Hosted in Frankfurt (routed to Norway)     25ms                    ~250ms + compute
Hosted on CoolVDS (Oslo)                   2ms                     ~20ms + compute

When you add the Service Mesh overhead, you cannot afford the base network latency of hosting outside the country. You need to be as close to the user—and the NIX—as possible.

Security: mTLS Everywhere

Implementing SSL/TLS between internal services is usually a compliance requirement we ignore because certificate management is hell. Istio enables mutual TLS (mTLS) automatically. It assigns an identity to every service.
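
In 0.1 this is an install-time decision: the release ships an auth-enabled variant of the control plane manifest next to the plain one. Assuming a clean cluster, switching it on is just a different apply:

# Installs the control plane with the Istio CA, enabling mutual TLS between sidecars
kubectl apply -f install/kubernetes/istio-auth.yaml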

For Norwegian companies dealing with sensitive user data, this is a massive win for compliance preparation. It ensures that if an attacker compromises one container, they cannot simply sniff the traffic of the entire cluster.

Conclusion: Is it Production Ready?

Honest answer? It's version 0.1. It is bleeding edge. You will encounter bugs. The documentation is sparse. But the ability to visualize traffic flow (using the Grafana plugin) and control traffic without touching code is the future of DevOps.
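
The traffic visualization isn't on by default; the dashboards ship as addons in the release directory. Roughly (filenames may shift between alpha releases, so check `install/kubernetes/addons/` in your download):

# Telemetry addons bundled with the release
kubectl apply -f install/kubernetes/addons/prometheus.yaml
kubectl apply -f install/kubernetes/addons/grafana.yaml
# Port-forward Grafana and open http://localhost:3000 to see the Istio dashboard
kubectl port-forward $(kubectl get pod -l app=grafana -o jsonpath='{.items[0].metadata.name}') 3000:3000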

If you are building microservices today, you need to prepare for this pattern. Start by ensuring your infrastructure can handle it. Service Meshes are resource-hungry beasts. They eat RAM for breakfast and require high IOPS for logging.

Don't let your infrastructure be the bottleneck. Deploy a high-performance KVM instance on CoolVDS today and start experimenting with the future of Kubernetes networking.