Kubernetes vs. Docker Swarm in 2022: Stop Over-Engineering Your Norwegian Stack

Let’s get real. Most of you deploying Kubernetes today don't actually need it. I’ve watched brilliant engineering teams burn months migrating a perfectly functional monolithic PHP application into a microservices mesh, only to realize their latency doubled and their debugging time tripled. It’s the classic resume-driven development trap.

But here we are. It’s June 2022. Kubernetes has effectively won the orchestration war, forcing Docker to sell its enterprise business to Mirantis years ago. Yet, Docker Swarm refuses to die. Why? Because complexity kills velocity. If you are a team of three developers based in Oslo trying to manage a massive K8s cluster while also writing code, you aren't shipping features. You're configuring YAML files.

In this analysis, we aren't just comparing feature lists. We are looking at the operational reality of running container orchestration on VPS infrastructure in Europe, considering the recent removal of Dockershim in Kubernetes 1.24 and the legal headaches of Schrems II.

The Elephant in the Room: Kubernetes 1.24

If you upgraded your clusters last month without reading the changelog, you probably panicked when you saw the deprecation warnings turn into errors. Kubernetes 1.24 officially removed Dockershim. If you were relying on the Docker daemon as your container runtime, you had to migrate to containerd or CRI-O.
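On kubeadm-managed nodes, the migration mostly comes down to pointing the kubelet at the new CRI socket. A minimal sketch of what the kubelet flags file looks like after switching to containerd (the path and flags assume a default kubeadm install on the 1.24-era kubelet; verify against your distribution):

```shell
# /var/lib/kubelet/kubeadm-flags.env -- after migrating off dockershim.
# --container-runtime=remote tells the kubelet to talk CRI to the socket
# below instead of the removed built-in Docker integration.
KUBELET_KUBEADM_ARGS="--container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock"
```

After restarting the kubelet, `kubectl get nodes -o wide` shows a CONTAINER-RUNTIME column where you can confirm every node reports containerd rather than docker.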

This shift highlights a critical point: Kubernetes is not a tool; it is an ecosystem that requires constant maintenance. It provides self-healing, horizontal scaling, and service discovery, but it demands a blood sacrifice in the form of control plane management.

The Infrastructure Tax

K8s is hungry. A minimal high-availability cluster needs at least three control plane nodes (to maintain etcd quorum) plus your worker nodes. The real bottleneck isn't CPU; it's I/O. The brain of Kubernetes is etcd, a key-value store that demands extremely low latency to maintain cluster consensus.

Pro Tip: Never run a production K8s cluster on standard HDD or even SATA SSD VPS instances. If fsync latency exceeds 10ms, etcd will start timing out, causing leader elections and effectively taking down your API server.

Here is how you verify if your current VPS provider is strangling your cluster. Run this on your etcd node:

# Benchmark storage performance for etcd (run on the etcd data disk)
mkdir -p test-data
fio --rw=write --ioengine=sync --fdatasync=1 --directory=test-data --size=22m --bs=2300 --name=mytest

If the 99th percentile fdatasync latency is above 10ms, your cluster is unstable. This is where CoolVDS becomes the logical infrastructure choice. Our local NVMe storage arrays consistently deliver sub-millisecond I/O latency, which is mandatory for a healthy etcd cluster. Network-attached block storage often chokes here.
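If you want to act on that number programmatically, you can pull the 99th percentile out of fio's sync percentile table. A small sketch; the `fio_output` sample below is illustrative hard-coded text, so in practice you would pipe in the real output of the benchmark above:

```shell
# Extract the 99.00th percentile fdatasync latency (in usec) from fio's
# "sync percentiles" table and compare it against etcd's 10ms guidance.
# The sample text is illustrative; substitute real fio output.
fio_output='    sync percentiles (usec):
     | 99.00th=[  970], 99.50th=[ 1205], 99.90th=[ 2114]'

# sed captures the digits inside "99.00th=[ ... ]"
p99_usec=$(printf '%s\n' "$fio_output" | sed -n 's/.*99\.00th=\[ *\([0-9]*\)\].*/\1/p')

if [ "$p99_usec" -gt 10000 ]; then
    echo "WARN: p99 fdatasync ${p99_usec}us exceeds 10ms; etcd will be unstable"
else
    echo "OK: p99 fdatasync ${p99_usec}us is within etcd's comfort zone"
fi
```

Wire this into your provisioning pipeline and you will know a VPS is unfit for etcd before the cluster ever boots.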

Docker Swarm: The "Good Enough" Hero

While Kubernetes stacks abstraction upon abstraction, Docker Swarm remains integrated directly into the Docker Engine. You don't need to install a separate CNI plugin, a CSI driver, or an Ingress Controller just to get a "Hello World" on port 80.

For a startup in Bergen serving a standard REST API and a React frontend, Swarm is vastly superior in terms of TCO (Total Cost of Ownership). You can set up a cluster in seconds:

# On the Manager Node
docker swarm init --advertise-addr 192.168.1.10

# On the Worker Node
docker swarm join --token SWMTKN-1-49nj1cmql0... 192.168.1.10:2377

That's it. You have orchestration. No Helm charts required.

The Configuration Showdown

Let’s compare deploying a simple Nginx service.

Kubernetes (The Verbose Way)

You need a Deployment and a Service. It’s explicit, but lengthy.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21.6
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer

Docker Swarm (The Concise Way)

Defined in a standard docker-compose.yml file, deployed as a stack.

version: '3.8'
services:
  web:
    image: nginx:1.21.6
    deploy:
      replicas: 3
      restart_policy:
        condition: on-failure
    ports:
      - "80:80"

Command: docker stack deploy -c docker-compose.yml my-web
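Swarm also gives you rolling updates without extra tooling. A hedged sketch of how you might extend the stack file above; the `update_config` values are illustrative, so tune them for your traffic:

```yaml
version: '3.8'
services:
  web:
    image: nginx:1.21.6
    deploy:
      replicas: 3
      update_config:
        parallelism: 1            # replace one task at a time
        delay: 10s                # pause between batches
        failure_action: rollback  # revert automatically if the new task fails
      restart_policy:
        condition: on-failure
    ports:
      - "80:80"
```

Re-running `docker stack deploy` with a new image tag then rolls the service over one replica at a time, no Helm, no operators.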

Data Residency and The Norwegian Context

Technical implementation is only half the battle. Legal compliance is the other. Since the Schrems II ruling invalidated the Privacy Shield framework, transferring personal data to US-owned clouds has become a legal minefield for Norwegian companies. The Datatilsynet (Norwegian Data Protection Authority) has been increasingly strict about where data physically resides.

When you use managed Kubernetes services from the US hyper-scalers, even if you select a "Europe" region, you are often subject to the US CLOUD Act. This is a massive risk for handling sensitive GDPR data.

Hosting your orchestration layer on CoolVDS solves this. We are a European provider. Your data stays on hardware physically located in secure data centers, governed by European law. Plus, the latency from Oslo to our standard European zones is often lower than routing to a hyper-scaler's centralized hub in Frankfurt or Dublin.

Networking and Security: The KVM Advantage

Containers are not virtual machines. They share the host kernel. If a malicious actor breaks out of a container in a multi-tenant environment, they could theoretically compromise the host.

This is why we strictly use KVM (Kernel-based Virtual Machine) virtualization for all CoolVDS instances. You get a dedicated kernel. Even if you mess up your Docker capabilities configuration, the blast radius is contained to your VPS. You aren't sharing a kernel with your neighbor.
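Defense in depth still matters inside the VM, though. One illustrative sketch (service name and image reused from the earlier examples; the exact capability list assumes the stock nginx image, so test before shipping) of dropping the Linux capabilities a web container doesn't need:

```yaml
version: '3.8'
services:
  web:
    image: nginx:1.21.6
    cap_drop:
      - ALL                  # start from zero capabilities
    cap_add:
      - NET_BIND_SERVICE     # bind port 80 without running as full root
      - CHOWN                # nginx fixes ownership of temp dirs at startup
      - SETUID               # master process drops to the worker user
      - SETGID
    ports:
      - "80:80"
```

Even if an attacker gets code execution inside the container, there is far less kernel surface left to abuse.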

Here is how you might harden your Docker daemon on a CoolVDS instance to shrink the attack surface of container networking:

# /etc/docker/daemon.json
{
    "icc": false,
    "userns-remap": "default",
    "no-new-privileges": true,
    "live-restore": true,
    "userland-proxy": false
}

Disabling inter-container communication (`"icc": false`) ensures that compromised containers can't move laterally across the default bridge network unless explicitly linked. Note that this only affects the default bridge; containers attached to the same user-defined network can still reach each other, so segment services onto separate networks deliberately.

When to Choose Which?

| Feature | Kubernetes | Docker Swarm |
| --- | --- | --- |
| Learning curve | Steep (months) | Shallow (hours) |
| Scalability | 5,000+ nodes | ~100 nodes (begins to struggle) |
| Load balancing | Ingress Controllers (complex) | Built-in routing mesh (simple) |
| Ideal use case | Enterprise microservices | SMBs, simple stacks |

Final Verdict

Don't adopt Kubernetes just because Google uses it. Google has 5,000 site reliability engineers. You have Bob. If your application can run on Docker Swarm, use Swarm. It will save you money and sanity.

However, if you need the raw power of Kubernetes, ensure your foundation is solid. Orchestrators amplify the weaknesses of the underlying infrastructure. Slow disks kill etcd. Flaky networks kill consensus. CoolVDS provides the high-performance NVMe compute and rock-solid network stability required to run these beasts without waking up at 3 AM.

Ready to build a cluster that actually stays up? Spin up a high-performance instance on CoolVDS today and see the I/O difference for yourself.