Kubernetes vs. Docker Swarm: Architecting for Sovereignty in 2022
If I had a krone for every time a startup CTO told me they were deploying a three-node Kubernetes cluster to host a simple WordPress monolith, I would own half the real estate in Bjørvika by now. The industry obsession with Google-scale tooling for non-Google problems has reached a fever pitch this year, yet the fundamental constraints of physics and law remain ignored. In the wake of the Schrems II ruling, the question is no longer just "how do we orchestrate containers?" but "where does the data physically reside?" and "can we legally trust US-owned hyperscalers with Norwegian citizen data?" As we navigate the infrastructure landscape of early 2022, the choice between Docker Swarm and Kubernetes (K8s) isn't just about YAML complexity—it is about Total Cost of Ownership (TCO), latency to the Norwegian Internet Exchange (NIX), and the sheer I/O requirements of modern distributed state stores. I have seen production clusters implode not because the configuration was wrong, but because the underlying VPS storage couldn't handle the fsync latency required by etcd, causing leader elections to fail and bringing the entire control plane to its knees. This article cuts through the marketing noise to give you a pragmatic, war-tested comparison of orchestration strategies, specifically tailored for teams operating within the European Economic Area.
The Compliance Elephant: Schrems II and Data Residency
Before we touch a single line of configuration, we must address the legal reality of operating in Europe today. Since the invalidation of the Privacy Shield framework, relying on US-based cloud providers (even those with "regions" in Europe) has become a legal minefield for handling sensitive personal data under GDPR. The Datatilsynet (Norwegian Data Protection Authority) has been increasingly clear about the risks of data transfers, and this has driven a massive repatriation of workloads back to domestic infrastructure. When you architect your container platform, you are responsible for the entire stack. If you run Kubernetes on a managed service where you cannot verify the physical isolation of the storage or the nationality of the admin with root access, you are introducing a compliance risk vector. This is why self-hosting on high-performance, local Virtual Dedicated Servers (VDS) is becoming the reference architecture for 2022: it lets you maintain strict control over encryption keys, network policies, and data residency, ensuring that user data never leaves Norway. Efficiency is key, but sovereignty is non-negotiable.
Docker Swarm: The "Good Enough" Solution
Docker Swarm is not dead, despite what the KubeCon crowd wants you to believe. For teams of fewer than 20 engineers, or for setups where you only need to orchestrate stateless microservices without a dedicated DevOps team, Swarm remains the superior choice in 2022 on operational simplicity alone. It is integrated directly into the Docker engine: if you can run docker run, you can run a swarm. The cognitive load of maintaining a Swarm cluster is a fraction of that of K8s. However, it lacks the rich ecosystem of Operators and the granular RBAC (Role-Based Access Control) that large enterprises demand. If your requirement is simply high availability for a set of Nginx and Python containers, Swarm removes the complexity barrier almost entirely.
Initializing a swarm is trivial, which is its greatest strength. You don't need to install a separate binary or manage a complex certificate authority hierarchy manually unless you are doing deep customization.
docker swarm init --advertise-addr $(hostname -i)
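That command prints a join token for additional nodes. On each worker, you paste it back like so (the token and manager address below are placeholders, not real values):

docker swarm join --token SWMTKN-1-<paste-token-from-init> 10.0.0.10:2377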
However, the simplicity of Swarm comes with trade-offs in networking and storage orchestration. The overlay network can introduce latency if not properly tuned, and persistent storage plugins are less mature than the Container Storage Interface (CSI) drivers available in Kubernetes. Below is an example of a Swarm stack definition. Notice how we use the deploy key to handle replication and update behavior, a feature that feels native to anyone used to Docker Compose.
version: '3.8'

services:
  api-gateway:
    image: nginx:1.21-alpine
    ports:
      - "80:80"
      - "443:443"
    deploy:
      mode: replicated
      replicas: 3
      update_config:
        parallelism: 1
        delay: 10s
      restart_policy:
        condition: on-failure
    networks:
      - app-net

  backend-service:
    image: registry.coolvds.com/my-app:v2.4
    environment:
      - DB_HOST=db-primary
    deploy:
      placement:
        constraints:
          - node.role == worker
    networks:
      - app-net

networks:
  app-net:
    driver: overlay
    attachable: true
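With the definition saved to a file (stack.yml and the stack name myapp are our own placeholders), rollout is a single command, and Swarm applies the rolling-update policy declared above:

docker stack deploy -c stack.yml myapp
docker service ls    # watch REPLICAS converge to 3/3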
Kubernetes: The Industry Standard (With Caveats)
Kubernetes has won the orchestration war. Version 1.23 is the stable standard we are seeing in production right now, with the deprecation of Dockershim signaling a move toward containerd as the runtime. Kubernetes offers self-healing, automated rollouts and rollbacks, horizontal scaling via the Horizontal Pod Autoscaler (HPA), and a massive ecosystem of Helm charts. But the cost of entry is high. A proper K8s cluster requires at least three master nodes for HA, plus multiple workers, and the architecture is resource-heavy: the kubelet, kube-proxy, and container runtime consume significant CPU and RAM before you deploy your first pod. This is where the "noisy neighbor" problem on cheap VPS providers becomes fatal. Kubernetes relies heavily on etcd, a distributed key-value store, to maintain cluster state, and etcd requires extremely low disk write latency. If your virtual server is fighting for I/O operations (IOPS) with a thousand other customers on a spinning hard disk or a congested SATA SSD, etcd will time out and your cluster will partition.
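To ground the HPA mention, here is a minimal sketch against the autoscaling/v2 API, which went stable in 1.23. It assumes a Deployment named backend-service (our placeholder, mirroring the Swarm example above) and a working metrics-server:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: backend-service
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: backend-service
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU crosses 70%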
Pro Tip: Never run a production Kubernetes cluster on shared storage with undefined IOPS limits. The consensus algorithm (Raft) used by etcd is sensitive to disk sync times. We consistently benchmark CoolVDS NVMe instances to ensure fsync latency remains under 10ms, which is critical for cluster stability.
Here is a standard StorageClass definition you might use in 2022. Note that we are defining a fast storage class, but the underlying hardware must support it. Applying this YAML to a server with slow I/O is like putting a spoiler on a tractor.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-nvme
# no-provisioner: volumes are pre-created by hand rather than provisioned dynamically
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain
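Because no-provisioner disables dynamic provisioning, each local volume must be declared by hand. A sketch, assuming an NVMe filesystem mounted at /mnt/nvme0 on a node named worker-1 (both placeholders):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nvme-0
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: fast-nvme
  local:
    path: /mnt/nvme0            # pre-formatted, pre-mounted NVMe filesystem
  nodeAffinity:                 # pins the PV to the node that physically owns the disk
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-1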
The Hidden Variable: I/O Performance and Infrastructure
This is where the discussion shifts from software to hardware. Whether you choose Swarm or Kubernetes, your orchestration layer is only as reliable as the virtual machines underneath it. In Norway, where internet speeds are high, the bottleneck is rarely the network throughput; it is the disk I/O and CPU steal time. I recently debugged a PostgreSQL cluster on Kubernetes that was suffering from random connection drops. The issue wasn't the CNI plugin or the database config; it was "CPU Steal"—the hypervisor was pausing the VM to serve other tenants. This is unacceptable for serious workloads. At CoolVDS, we utilize KVM (Kernel-based Virtual Machine) virtualization with dedicated resource allocation to eliminate this specific failure mode. When you are running a database inside a container, you need direct, unhindered access to NVMe speeds.
To verify whether your current host is sabotaging your database performance, use fio (Flexible I/O Tester). The command below simulates the write pattern of a busy database or etcd cluster: small sequential writes with an fdatasync after every one. Run it on your current node and read the fdatasync latency percentiles in the output, not just the headline IOPS. If IOPS sit below 5,000 or the 99th-percentile sync latency exceeds 10ms, you are not ready for production Kubernetes.
# Emulates etcd's write-ahead log: 2300-byte sequential writes, fdatasync after each
fio --name=etcd-test \
    --rw=write \
    --ioengine=sync \
    --fdatasync=1 \
    --directory=. \
    --size=1G \
    --bs=2300 \
    --numjobs=1 \
    --time_based \
    --runtime=60 \
    --group_reporting
Optimizing for the Nordic Network
Latency matters. If your users are in Oslo, Bergen, or Trondheim, hosting your cluster in Frankfurt or Dublin adds 20-40ms of round-trip time (RTT) to every request. For a dynamic application executing multiple database queries and API calls per view, this accumulates into a sluggish user experience. By deploying on CoolVDS infrastructure located directly in Norway, you reduce that latency to single digits. Furthermore, you gain the stability of the Norwegian power grid, which is one of the greenest and most stable in Europe. When configuring your ingress controllers, ensure you are tuning your TCP stack for low-latency environments.
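What that tuning looks like depends on the workload, but the sketch below covers the sysctls we reach for first on a modern Linux kernel (4.9+ ships BBR). Treat the values as starting points to benchmark, not gospel:

# /etc/sysctl.d/99-lowlatency.conf -- apply with: sysctl --system
net.core.default_qdisc = fq             # fair queueing; pairs with BBR
net.ipv4.tcp_congestion_control = bbr   # BBR congestion control
net.ipv4.tcp_fastopen = 3               # TCP Fast Open for client and server
net.ipv4.tcp_slow_start_after_idle = 0  # keep cwnd warm on keep-alive connections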
Here are a few quick diagnostic commands every DevOps engineer should use to verify their environment before deploying:
Check for CPU steal (look at the %steal column in the mpstat output):
mpstat 1 5
Verify disk latency in real-time:
ioping -c 10 .
Check your latency toward the Oslo internet exchange (pinging the NIX website is a rough proxy for the exchange itself):
ping -c 4 nix.no
Conclusion: Make the Logical Choice
In 2022, the technology is mature, but the implementation is where projects fail. If you need simplicity and quick iteration, use Docker Swarm. If you need massive scale and ecosystem integration, use Kubernetes. But regardless of the orchestrator, do not compromise on the foundation. High-performance NVMe storage, strict data sovereignty, and low-latency connectivity are not optional features; they are requirements for a professional infrastructure.
Don't let slow I/O kill your SEO or your database stability. Deploy a test instance on CoolVDS today and see the difference dedicated NVMe resources make for your container workloads.