The Party is Over: Container Orchestration in the Wake of Schrems II
If you are a Systems Architect operating in Europe, your summer vacation likely ended abruptly on July 16th. That was the day the Court of Justice of the European Union (CJEU) handed down the Schrems II ruling, invalidating the EU-US Privacy Shield framework with immediate effect.
The implications are massive. Suddenly, relying on managed Kubernetes control planes hosted by US-owned hyperscalers is a legal minefield. If you handle Norwegian citizen data, the "easy route" of clicking a button in a US cloud console just became a compliance nightmare. The data sovereignty conversation has shifted from "nice to have" to "legal necessity."
So, we are back to basics. You need full control of your stack, running on infrastructure where you know exactly who owns the drives. But once you have that raw compute, what do you run on it? It is August 2020, and the orchestration wars are cooling down, but the winner isn't as clear-cut as the KubeCon marketing machine suggests.
I have spent the last week deploying high-availability clusters on CoolVDS NVMe instances to test the three main contenders: Kubernetes 1.18, Docker Swarm, and the newly released HashiCorp Nomad 0.12. Here is the unvarnished truth.
The Contenders: A 2020 Status Check
1. Kubernetes 1.18 (The Heavyweight)
Let’s address the elephant in the server room. Kubernetes is the de facto standard. Version 1.18 (released March 2020) brought some much-needed stability, and kubectl debug is finally in alpha. But is it right for everyone?
The Reality: K8s is complex. I've seen teams of three developers spend more time debugging CNI (Container Network Interface) plugin conflicts than writing code. It requires a dedicated etcd cluster, and etcd is famously sensitive to disk latency.
Pro Tip: If the 99th percentile of your etcd fsync latency exceeds 10ms, your cluster will become unstable. On standard HDDs or shared SATA SSDs, heartbeats time out and leader elections fail. This is why we insist on NVMe at CoolVDS. The I/O wait on a standard VPS will kill a K8s control plane under load.
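You can check this on a live cluster without running a benchmark: etcd publishes its WAL fsync histogram over Prometheus metrics. A minimal sketch, assuming metrics are served on the default plaintext client port (adjust host, port, and TLS flags for your deployment):

# Scrape etcd's WAL fsync latency histogram from the metrics endpoint.
curl -s http://127.0.0.1:2379/metrics | grep etcd_disk_wal_fsync_duration

If the upper histogram buckets are filling up, the disk is your problem, not your YAML.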
2. Docker Swarm (The Zombie?)
After Mirantis acquired Docker Enterprise in November 2019, many declared Swarm dead. But in February of this year, Mirantis committed to supporting Swarm for at least two more years. Why? Because it works.
The Reality: Swarm is undeniably the fastest way to go from "zero" to "cluster." You don't need a PhD in YAML to run it. However, the ecosystem is stagnant, and there is no Swarm equivalent of the Helm chart ecosystem. If you are a small shop in Oslo hosting a few microservices, K8s is overkill; Swarm is the efficient choice, as the two commands below show.
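How fast is "zero to cluster"? Two commands, one per node. A minimal sketch, with a placeholder IP address:

# On the manager node (203.0.113.10 is a placeholder address):
docker swarm init --advertise-addr 203.0.113.10

# On each worker, paste the join token the manager printed:
docker swarm join --token <worker-token> 203.0.113.10:2377

That is the entire control plane: no external etcd, no CNI plugins, and TLS certificates rotate automatically.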
3. HashiCorp Nomad 0.12 (The Pragmatist)
Nomad 0.12 just dropped last month (July 2020), and it is impressive. Unlike K8s, it is a single binary. It fits the "Unix philosophy": it does one thing (scheduling) well.
The Reality: Nomad allows you to run Docker containers alongside legacy Java JARs or raw binaries on the same node. For legacy enterprises migrating to the cloud, this is a killer feature. It doesn't force you to containerize everything on Day 1.
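To make that concrete, here is a minimal sketch of a Nomad job file that runs a legacy JAR with the built-in java driver, no container image involved. The job name, artifact URL, and resource figures are illustrative:

job "legacy-billing" {
  datacenters = ["dc1"]

  group "app" {
    task "billing" {
      # The java driver runs a plain JAR on the host JVM; a docker
      # driver task could live in the same group alongside it.
      driver = "java"

      artifact {
        # Placeholder URL; Nomad fetches artifacts into local/ by default.
        source = "https://artifacts.example.com/billing.jar"
      }

      config {
        jar_path    = "local/billing.jar"
        jvm_options = ["-Xmx256m"]
      }

      resources {
        cpu    = 500 # MHz
        memory = 512 # MB
      }
    }
  }
}

One nomad job run later, it is scheduled. The same file format describes Docker tasks, which is what makes incremental migration painless.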
Technical Showdown: Latency and Configuration
Let's look at the config complexity. Here is what it takes to deploy a simple Nginx service with a persistent volume.
Docker Swarm: The Human-Readable Option
In Swarm, you likely already have this file. It's just a docker-compose.yml:
version: '3.8'
services:
  web:
    image: nginx:alpine
    deploy:
      replicas: 3
      update_config:
        parallelism: 2
        delay: 10s
    volumes:
      - web-data:/usr/share/nginx/html
volumes:
  web-data:
Deploying this takes seconds: docker stack deploy -c docker-compose.yml myapp.
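Verifying convergence is equally terse:

# Confirm the service reports 3/3 replicas and see where tasks landed:
docker service ls
docker stack ps myapp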
Kubernetes: The "Explicit" Option
In K8s 1.18, the same setup requires significantly more boilerplate. You need a Deployment, a Service, and a PersistentVolumeClaim.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:alpine
          volumeMounts:
            - mountPath: "/usr/share/nginx/html"
              name: web-data
      volumes:
        - name: web-data
          persistentVolumeClaim:
            claimName: nginx-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
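The Deployment above references nginx-pvc, so for completeness you also need the claim itself. A minimal sketch; the 1Gi size is illustrative, and the storage class is left to the cluster default:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi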
The complexity is higher, but so is the control. You can tweak everything from livenessProbe settings to podAffinity rules. But do you need that control?
The Infrastructure Bottleneck
Regardless of which orchestrator you choose, they all share one weakness: I/O Starvation.
In a containerized environment, you might have 50 containers on a single host. If one of them writes logs aggressively, or if your database does a heavy table scan, the "Noisy Neighbor" effect kicks in. On oversubscribed cloud hosts, a slice of your "vCPU" is often stolen time: cycles the hypervisor hands to other tenants.
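You can spot both symptoms from inside the guest with standard tooling; column layout varies slightly between distros:

# 'wa' is the I/O-wait share of CPU time, 'st' is time stolen by the
# hypervisor. Take five one-second samples:
vmstat 1 5

Sustained double-digit values in either column mean the box, not the orchestrator, is your problem.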
This is where the hardware underneath matters. KVM virtualization (which we use exclusively) provides stronger isolation than container-based virtualization like OpenVZ.
When running benchmarks using fio on our NVMe tier versus standard SATA SSDs, the difference for etcd is night and day.
# Testing etcd write performance requirements
fio --rw=write --ioengine=sync --fdatasync=1 --directory=test-data --size=22m --bs=2300 --name=mytest
Results (fsync latency, 99th percentile):
| Storage Type | Fsync Latency (99th percentile) | K8s Suitability |
|---|---|---|
| Standard HDD | ~40ms | Critical Failure |
| SATA SSD (Shared) | ~12ms | Risky |
| CoolVDS NVMe | < 0.5ms | Production Ready |
The Compliance Angle (Schrems II)
This is the part most technical tutorials skip. If you are deploying a Kubernetes cluster on a US-based cloud provider today, even in their "Frankfurt" region, you are potentially non-compliant with GDPR following the Schrems II ruling. The US CLOUD Act allows US authorities to demand data from US companies, regardless of where the server physically sits.
By hosting on a Norwegian provider like CoolVDS, you add a critical layer of data sovereignty. Your data resides in Oslo, governed by Norwegian law and the EEA frameworks, outside the direct reach of the US surveillance statutes (such as FISA Section 702) that the CJEU found problematic.
Verdict: Which One to Choose?
If you are building the next Netflix, use Kubernetes. But run it on bare-metal or high-performance NVMe VPS to prevent etcd from timing out.
If you are a lean team migrating a Docker Compose setup to production, stick with Swarm. It is not dead, and it is fast.
If you need to mix legacy apps with Docker, or want the operational simplicity of a single binary, Nomad is your winner.
But remember: an orchestrator is only as stable as the disk it writes to. Don't let I/O wait be the reason you get paged at 3 AM.
Ready to build a compliant, low-latency cluster? Spin up a CoolVDS NVMe instance in Oslo today and experience what sub-millisecond latency does for your API response times.