Kubernetes vs. Docker Swarm in 2020: Orchestration Strategies for the Post-Schrems II Era
Let’s be honest: container orchestration has become the new "it works on my machine" problem. In 2020, we aren't just deploying applications; we are managing sprawling, complex distributed systems that demand constant attention. If you are a Systems Architect operating out of Oslo or anywhere in the EEA, your job just got significantly harder.
Why? Because on July 16th, the Court of Justice of the European Union (CJEU) dropped a nuclear bomb on the tech industry: Schrems II. The Privacy Shield is dead. Relying blindly on US-based hyper-scalers for storing European user data is now a legal minefield. This shifts the conversation from purely "which tool is better?" to "where does this infrastructure actually live?"
In this analysis, we are cutting through the marketing fluff. We will look at Docker Swarm versus Kubernetes (K8s) from the perspective of a team that needs high availability, low latency, and absolute compliance. We will also discuss why the underlying metal—specifically NVMe-backed VPS in Norway—matters more than your YAML files.
The Contenders: Simplicity vs. Scalability
Docker Swarm: The "Good Enough" Solution
Docker Swarm is currently in a strange place. After Mirantis acquired Docker Enterprise late last year, many predicted Swarm's death. Yet, here we are in August 2020, and Swarm is still the fastest way to go from zero to clustered. It is built deeply into the Docker Engine.
If you are running a small engineering team and you don't need Service Meshes like Istio or complex CRDs (Custom Resource Definitions), Swarm is incredibly efficient. It uses less RAM than K8s, leaving more room for your actual application on the node.
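For context, "zero to clustered" really is two commands. A minimal sketch, where 10.0.0.10 stands in for your manager node's private IP and the token is a placeholder printed by the init step:

# On the manager node: initialize the Swarm and advertise its private address
docker swarm init --advertise-addr 10.0.0.10
# The output prints a join command with a token; run it on each worker:
docker swarm join --token <worker-token> 10.0.0.10:2377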
Here is how simple a stack deployment looks in Swarm. No Helm charts, no Tiller (RIP Tiller in Helm 3), just pure declarative state:
version: "3.8"
services:
  web:
    image: nginx:alpine
    deploy:
      replicas: 3
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
    ports:
      - "80:80"
    networks:
      - webnet
networks:
  webnet:
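Assuming the Swarm is already initialized and the file above is saved as docker-compose.yml (any filename works), deploying and inspecting the stack is equally terse:

docker stack deploy -c docker-compose.yml web
docker service ls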
The Trade-off: You hit the ceiling fast. Networking is rigid. If you need advanced ingress rules or granular role-based access control (RBAC), you will find yourself writing hacky workarounds.
Kubernetes (v1.18): The Industrial Standard
Kubernetes has won the war. It is the de facto operating system for the cloud. But it is a beast. Running a control plane requires significant overhead. etcd, the key-value store that holds the cluster state, is notoriously sensitive to disk latency. I have seen entire production clusters freeze because the underlying storage couldn't handle the fsync rates required by etcd.
However, the control it offers is unmatched. In a recent project migrating a legacy PHP monolith to microservices, we utilized K8s 1.18's StartupProbes (promoted to beta recently) to handle slow-starting legacy containers without killing them prematurely. You simply cannot do that easily in Swarm.
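For reference, a startup probe sits alongside the liveness and readiness probes in a container spec. The sketch below is illustrative rather than taken from that project; the kubelet allows up to failureThreshold × periodSeconds, five minutes here, before normal liveness checking takes over:

# Illustrative snippet; nests under a container in a Pod or Deployment spec
startupProbe:
  httpGet:
    path: /healthz
    port: 8080
  failureThreshold: 30   # 30 attempts...
  periodSeconds: 10      # ...10 seconds apart = up to 5 minutes to start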
Here is a snippet of a robust Deployment manifest ensuring high availability. Notice the specific resource limits—never deploy to a VPS without these, or a memory leak will crash your node:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: production-api
  labels:
    app: core-system
spec:
  replicas: 5
  selector:
    matchLabels:
      app: core-system
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: core-system
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - core-system
              topologyKey: "kubernetes.io/hostname"
      containers:
        - name: api-container
          image: myregistry.com/api:v2.4.1
          resources:
            requests:
              memory: "256Mi"
              cpu: "250m"
            limits:
              memory: "512Mi"
              cpu: "500m"
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 15
            periodSeconds: 20
          readinessProbe:
            httpGet:
              path: /ready
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10
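Rolling this out is standard kubectl. The filename is whatever you saved the manifest as; the label selector matches the one in the spec above:

kubectl apply -f production-api.yaml
kubectl rollout status deployment/production-api
# Confirm the anti-affinity rule spread the replicas across different nodes
kubectl get pods -l app=core-system -o wide

One caveat: requiredDuringSchedulingIgnoredDuringExecution is a hard rule, so five replicas need at least five schedulable nodes. On a smaller cluster, switch to preferredDuringSchedulingIgnoredDuringExecution or the surplus pods will sit in Pending.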
The Hardware Reality: Why Latency Kills Clusters
This is where most tutorials fail you. They talk about software but ignore the metal. When you run a container orchestrator, you are generating massive amounts of "East-West" traffic (internal communication) and constant disk writes for logging and state management.
If you run Kubernetes on a budget VPS with standard SATA SSDs (or worse, spinning rust), your etcd latency will spike. If the leader election times out, your cluster thinks the master is down, and chaos ensues.
Pro Tip: Always benchmark your disk I/O before installing Kubernetes. Use fio to simulate the workload.
fio --name=etcd-test --rw=write --ioengine=sync --fdatasync=1 \
--size=100m --bs=2300 --runtime=60
If your fsync latency is consistently above 10ms, your cluster will be unstable. This is why at CoolVDS, we enforce pure NVMe storage backends on our KVM nodes. When we test our "Performance" tier instances against standard cloud offerings, the NVMe difference isn't just about boot times—it's about the stability of the K8s control plane under load.
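You can watch the same signal on a live cluster. kubeadm-built control planes expose etcd's metrics over plain HTTP on localhost (port 2381 is the default in recent releases; adjust if your setup differs), so a quick grep on the control-plane node shows the fsync latency histogram:

# Run on the control-plane node; assumes the default kubeadm metrics endpoint
curl -s http://127.0.0.1:2381/metrics | grep etcd_disk_wal_fsync_duration_seconds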
The Schrems II Factor: Sovereignty as a Feature
Let's circle back to the legal landscape. Since the invalidation of the Privacy Shield in July, using US-owned cloud providers for processing EU citizen data requires complex "Standard Contractual Clauses" (SCCs) and supplementary measures. It is a headache that legal departments are just waking up to.
Hosting on a Norwegian provider like CoolVDS simplifies this architectural compliance. Your data resides in Oslo. It stays in the EEA. It is subject to the Norwegian Data Protection Authority (Datatilsynet), not the US CLOUD Act. For a CTO, this isn't just a technical detail; it is risk mitigation.
Deploying a K8s Cluster on CoolVDS
If you are ready to build a compliant, high-performance cluster, the "Hard Way" isn't actually that hard in 2020 thanks to kubeadm. Here is the rapid-fire setup for a CoolVDS instance running Ubuntu 20.04 LTS.
1. Prepare the Node
Disable swap (the kubelet refuses to start while swap is enabled) and load the kernel modules needed for container networking:
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab
# Persist the overlay and bridge-netfilter modules, then load them now
cat <<EOF | tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
modprobe overlay && modprobe br_netfilter
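Two sysctls are also part of the standard kubeadm prerequisites, so that bridged pod traffic actually hits iptables and can be forwarded. A minimal version:

cat <<EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system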
2. Install the Runtime (Containerd)
While Docker is still the familiar choice, many of us now install containerd directly to shave off a layer: the kubelet talks to runtimes through the CRI (Container Runtime Interface) anyway, and containerd implements it natively.
apt-get update && apt-get install -y containerd
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
systemctl restart containerd
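One thing the quick version glosses over: kubeadm, kubelet, and kubectl come from the upstream Kubernetes apt repository, not Ubuntu's own archives. The snippet below follows the v1.18 documentation; double-check the repository details against the current install docs before copying it:

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
# "xenial" is expected here; upstream publishes one repo for all Debian/Ubuntu releases
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
apt-get update && apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl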
3. Bootstrap with Kubeadm
Initialize the master node. Note the --pod-network-cidr flag: 192.168.0.0/16 matches the default IP pool that Calico's manifest expects, so the plugin can be applied unmodified.
kubeadm init --pod-network-cidr=192.168.0.0/16 --control-plane-endpoint "10.10.50.5:6443"
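When kubeadm finishes, it prints the exact kubeadm join command for your workers; save it. Then point kubectl at the new cluster and install Calico. The manifest URL below is the one the Calico docs list at the time of writing; verify it before applying:

mkdir -p $HOME/.kube
cp /etc/kubernetes/admin.conf $HOME/.kube/config
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
kubectl get nodes   # should report Ready once the Calico pods are up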
Conclusion
If you are a solo developer or a team of two, stick with Docker Swarm. It is robust enough and requires zero mental overhead. But for any team anticipating growth or handling complex microservices, Kubernetes is the only viable path forward in 2020.
However, software is only half the equation. In a post-Schrems II world, where your server lives is just as critical as what runs on it. By pairing Kubernetes with CoolVDS's low-latency NVMe infrastructure in Norway, you solve two problems at once: you get the IOPS required for a stable control plane, and you get the data sovereignty required by European law.
Don't let slow I/O or legal ambiguity kill your project. Spin up a high-performance instance on CoolVDS today and build on solid ground.