
Kubernetes vs. Docker Swarm in a Post-Schrems II World: Why Your Infrastructure Choices Just Got Harder

Let's cut through the marketing noise. If you are running infrastructure in Europe right now, specifically here in the Nordics, your job just got significantly more complicated. The CJEU (Court of Justice of the European Union) handed down the Schrems II ruling last month, invalidating the EU-US Privacy Shield. If you are blindly piping Norwegian customer data into US-owned managed Kubernetes services, you are now walking through a compliance minefield.

As a systems architect who has spent too many nights debugging split-brain scenarios in distributed clusters, I’m seeing a massive pivot back to self-managed infrastructure. You need control. You need to know exactly where the bits live. But this brings us back to the eternal debate: How do we orchestrate the containers?

We are going to look at the three contenders relevant in 2020: Kubernetes (K8s), Docker Swarm, and HashiCorp Nomad. We aren't just looking at features; we are looking at how they perform on the metal. Because if your underlying I/O is trash, your orchestrator won't save you.

The Heavyweight: Kubernetes (v1.18+)

Kubernetes is the standard. We know this. But it is also a resource hog that demands respect. The most common failure mode I see in production isn't a configuration error; it's etcd latency. Kubernetes stores its state in etcd, and etcd requires incredibly fast fsync operations to maintain quorum.

If you run a K8s control plane on a budget VPS with shared HDD storage or throttled SSDs (looking at you, budget cloud providers), your cluster will fall apart under load. The API server times out, leader election fails, and pods stop scheduling.
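
If you suspect your control plane is already in that hole, ask etcd directly. A quick sketch, assuming the stock kubeadm certificate paths on a control-plane node:

# Check etcd health, leader, and DB size; paths assume a standard kubeadm install
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  endpoint status --write-out=table

While you are at it, watch the etcd logs: warnings like "failed to send out heartbeat on time" are usually the disk (or a starved CPU) telling you it cannot keep up.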

Here is a typical production-grade kubeadm configuration we use to initialize clusters that actually survive traffic spikes. Note the etcd tuning under extraArgs and the admission plugins enabled for v1.18:

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.6
controlPlaneEndpoint: "k8s-api.norway-zone1.coolvds.com:6443"
networking:
  podSubnet: "10.244.0.0/16"
  serviceSubnet: "10.96.0.0/12"
etcd:
  local:
    dataDir: /var/lib/etcd
    extraArgs:
      # Critical for performance on high-latency networks, though less relevant on local NVMe
      heartbeat-interval: "250"
      election-timeout: "2500"
      # Prevent the database from growing out of control
      quota-backend-bytes: "8589934592"
      auto-compaction-retention: "1"
apiServer:
  extraArgs:
    authorization-mode: "Node,RBAC"
    enable-admission-plugins: "NodeRestriction,PodSecurityPolicy"
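
Save that as kubeadm-config.yaml (the filename is just an example) and bring up the first control-plane node with it. The --upload-certs flag lets you join additional control-plane nodes later without copying certificates around by hand:

# Initialize the first control-plane node from the config above
kubeadm init --config kubeadm-config.yaml --upload-certs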

The Storage Reality Check

Before you even kubectl apply, you need to verify your disk latency. etcd's rule of thumb is a 99th-percentile fsync under 10ms. On a CoolVDS NVMe instance, we consistently sit well below that threshold. On standard cloud block storage, fsync often spikes to 50ms+, causing leader drops.

Test your disk before deploying K8s. Don't guess.

# The 'fio' test that breaks cheap hosting:
fio --rw=write --ioengine=sync --fdatasync=1 --directory=/var/lib/etcd \
  --size=100m --bs=2300 --name=etcd_perf_test

If the 99th percentile fsync latency is above 10ms, do not deploy Kubernetes there. You will regret it.
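
Where that number lives depends on your fio version; with --fdatasync=1, recent builds print a dedicated sync percentile block, so a rough filter like this is usually enough (check the unit in the header, fio switches between usec and msec):

# Show only the fsync/fdatasync percentile lines from the report
fio --rw=write --ioengine=sync --fdatasync=1 --directory=/var/lib/etcd \
  --size=100m --bs=2300 --name=etcd_perf_test | grep -A4 "sync percentiles"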

The Pragmatic Choice: Docker Swarm

"Swarm is dead," they say. Yet, half the calls I get are from teams drowning in Kubernetes YAML manifest complexity who just want to deploy a web app. Docker Swarm mode (baked into Docker CE 19.03+) is robust, secure by default, and requires a fraction of the overhead.

Swarm's raft consensus is less sensitive to disk latency than etcd, making it more forgiving on varied hardware. However, it lacks the rich ecosystem of Helm charts and Operators.
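
Standing a Swarm up is two commands. A minimal sketch, assuming 10.0.0.10 is the first manager's private address:

# On the first manager: advertise the private interface so cluster traffic stays off the public NIC
docker swarm init --advertise-addr 10.0.0.10

# On each worker, paste the join command printed by the init above:
docker swarm join --token <worker-token> 10.0.0.10:2377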

A typical stack deployment in 2020 looks like this:

version: "3.8"
services:
  web:
    image: nginx:1.19-alpine
    deploy:
      replicas: 4
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - "node.role == worker"
          - "node.labels.region == oslo"
    ports:
      - "80:80"
    networks:
      - frontend

  api:
    image: my-registry.com/backend:v2.4
    deploy:
      replicas: 2
    environment:
      - DB_HOST=db
    networks:
      - frontend
      - backend

networks:
  frontend:
  backend:

Simple. Readable. Effective. If you don't need custom CRDs or Service Mesh complexity, Swarm on a cluster of CoolVDS instances connected via private networking is incredibly cost-effective.
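
To push it live, save the file as stack.yml (any name works), label your Oslo workers so the placement constraint matches (worker-1 is just an example node name), then deploy and verify:

# Label a worker, deploy the stack, confirm the replicas are running
docker node update --label-add region=oslo worker-1
docker stack deploy --compose-file stack.yml webapp
docker stack services webapp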

The Challenger: HashiCorp Nomad (v0.12)

Nomad is the middle ground. It's a single binary. It schedules containers, but also Java jars and raw binaries (great for legacy migration). It integrates perfectly with Consul for service discovery.

We often use Nomad for high-performance workloads where the overhead of the Docker daemon or Kubelet is unwanted. Nomad v0.12 shipped recently with improved CSI (Container Storage Interface) support, making it a viable orchestrator for stateful workloads as well.
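
Kicking the tires takes minutes. A rough sketch using Nomad's bundled example job; dev mode runs server and client in one in-memory process, so it is strictly for evaluation:

# Single-node dev agent (server + client in one process, state in memory)
nomad agent -dev &

# Generate and run the bundled example job (a Dockerized Redis), then check it
nomad init
nomad job run example.nomad
nomad job status example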

The Infrastructure Verdict: Why CoolVDS?

Regardless of which orchestrator you choose, the laws of physics and the laws of Norway still apply. Here is the architectural reality:

  1. Data Sovereignty: With the Privacy Shield gone, hosting data on US-controlled hyperscalers is legally risky for Norwegian businesses processing personal data. CoolVDS is European infrastructure. Your data stays in Oslo.
  2. The Noisy Neighbor Problem: In a shared container environment (like managed K8s), your network and CPU can be stolen by other tenants. We use KVM virtualization. Your CPU cycles are yours.
  3. I/O Bottlenecks: As shown with the etcd example, storage speed dictates cluster stability. We don't upsell NVMe as a "premium" tier; it's the standard.

Pro Tip: When setting up a cluster across multiple VPS nodes, always use the private network interface (usually eth1 or similar on our platform) for cluster traffic (etcd peer communication, overlay networks). This keeps your latency minimal and doesn't count against your public bandwidth quota.
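
A quick sketch of wiring that up, assuming eth1 carries the private network:

# Grab the private IPv4 address on eth1
PRIVATE_IP=$(ip -o -4 addr show dev eth1 | awk '{print $4}' | cut -d/ -f1)

# Docker Swarm: advertise the private address
docker swarm init --advertise-addr "$PRIVATE_IP"

# Kubernetes: advertise the same address
# (with a kubeadm --config file, set localAPIEndpoint.advertiseAddress instead)
kubeadm init --apiserver-advertise-address "$PRIVATE_IP"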

Here is how you check your latency to the Norwegian Internet Exchange (NIX) from your node. Low latency here means faster response times for your Norwegian users:

mtr --report --report-cycles=10 nix.no

If you are seeing single-digit milliseconds, you are in the right place for high-frequency trading or real-time bidding apps.

Conclusion

If you are a massive enterprise with a dedicated DevOps team, Kubernetes on bare metal or high-performance KVM is the way. Just ensure your underlying storage can handle the etcd write load (CoolVDS NVMe instances are built for this).

If you are a lean startup needing to ship fast, don't be ashamed to use Docker Swarm. It works, and it will cost you significantly less in maintenance hours.

Whatever you choose, stop building on sluggish, oversold clouds that create compliance headaches. Own your stack.

Ready to build a cluster that doesn't time out? Spin up three CoolVDS NVMe instances in Oslo and deploy your control plane in under 60 seconds.