
K8s, Swarm, or Nomad? Orchestration Realities in a Post-Privacy Shield World

The CJEU Just Broke Your Cloud Strategy. Now What?

Yesterday, July 16, 2020, the Court of Justice of the European Union dropped a nuclear bomb on the transatlantic data transfer market: Schrems II. The Privacy Shield is invalid. If you are piping customer data from Norway or the EU to US-owned hyperscalers, your compliance officer is likely hyperventilating right now.

This shifts the conversation from "which cool tool should I use?" to "where does the metal actually live?" But as a systems architect, you still have to manage the stack. Data sovereignty doesn't manage itself. We are seeing a massive repatriation of workloads to sovereign European infrastructure, and you need an orchestration layer that doesn't eat 50% of your resources before you even deploy a single pod.

Let’s cut through the CNCF marketing noise. I've spent the last week benchmarking Kubernetes 1.18, Docker Swarm, and HashiCorp Nomad 0.11 on high-performance NVMe KVM slices. Here is the brutal truth about what you actually need versus what Hacker News tells you to use.

1. Kubernetes 1.18: The Heavyweight Champion (with Heavy Baggage)

Kubernetes has won the war. We know. But running a production-grade K8s cluster is not a hobby project. It requires a blood sacrifice in operational overhead. The 1.18 release (echoing the "fit and finish" theme) has improved stability, but the control plane is still a resource hog.
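
Before you blame Kubernetes itself, see where the overhead actually goes. Assuming metrics-server is installed (it is not by default), two commands give you the picture:

# Rough sketch: inspect where control plane resources are going.
# Requires metrics-server; namespace and component names are the upstream defaults.
kubectl top nodes
kubectl top pods -n kube-system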

The Bottleneck: etcd Latency

The number one reason K8s clusters fail in production isn't bad code; it's slow disk I/O. etcd is acutely sensitive to write latency. If fsync latency spikes, your leader election fails, and your cluster partitions. I see this constantly on budget VPS providers who oversell their HDD storage.

To verify if your current storage can handle K8s, run this fio test. If your 99th percentile fdatasync latency is over 10ms, do not install Kubernetes. You will regret it.

# Test disk latency against etcd's fdatasync requirements
mkdir -p test-data
fio --rw=write --ioengine=sync --fdatasync=1 --directory=test-data \
  --size=22m --bs=2300 --name=mytest

On CoolVDS NVMe instances, we consistently see latencies under 2ms. This allows us to run aggressive etcd tuning parameters without fear of timeouts.
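
By "aggressive" I mean keeping etcd's timers tight instead of inflating them to paper over slow disks. A sketch of the relevant flags; the values are illustrative, not gospel:

# Example etcd timing flags for low-latency NVMe (values illustrative, tune to your fio results)
# Defaults: 100ms heartbeat, 1000ms election timeout. Slow storage forces you to raise both.
etcd --heartbeat-interval=100 \
     --election-timeout=1000 \
     --quota-backend-bytes=8589934592   # 8 GiB backend quota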

Defining High-Performance Storage Classes

Don't rely on default storage classes. Explicitly define your NVMe requirements. Here is the StorageClass configuration we use for high-I/O databases running inside K8s:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-nvme-storage
# No dynamic provisioning: local NVMe disks are registered as PersistentVolumes by hand
provisioner: kubernetes.io/no-provisioner
# Delay binding until a pod is scheduled, so the volume lands on the node that owns the disk
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain
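
Because no-provisioner means exactly that, each local NVMe mount still has to be registered as a PersistentVolume by hand. A minimal sketch, assuming the disk is mounted at /mnt/nvme0 on a node called node1 (both are placeholders):

# Sketch: register a local NVMe mount as a PV bound to the class above
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nvme-pv-node1
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-nvme-storage
  local:
    path: /mnt/nvme0
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node1
EOF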

2. Docker Swarm: Dead or Just Sleeping?

Since Mirantis acquired Docker Enterprise late last year, the community has been nervous. Is Swarm dead? For massive enterprise clusters, maybe. For a team of five developers deploying a standard web stack? It is still unbeatable.

Swarm's advantage is simplicity. You don't need a team of three Site Reliability Engineers to manage the control plane. It just works. However, it lacks the sophisticated reconciliation loops of K8s. If you need StatefulSets or CRDs (Custom Resource Definitions), Swarm will frustrate you.

But look at the simplicity of a deployment compared to K8s manifest sprawl:

version: "3.8"
services:
  web:
    image: nginx:alpine
    deploy:
      replicas: 5
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
    ports:
      - "80:80"
    networks:
      - webnet
networks:
  webnet:

One file. One command: docker stack deploy -c docker-compose.yml prod. If your goal is TCO (Total Cost of Ownership) and you are hosting standard stateless microservices, Swarm on a solid KVM VPS is the most cost-effective route in 2020.
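
The full lifecycle is just as short. A sketch of the workflow on a fresh manager node; the IP and service name are placeholders:

# Initialise the swarm on the manager (use your private/internal IP)
docker swarm init --advertise-addr 10.0.0.10

# Deploy the stack and watch it converge
docker stack deploy -c docker-compose.yml prod
docker service ls

# Rolling update later: point the service at a new image, Swarm handles the rest
docker service update --image nginx:1.19-alpine prod_web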

3. HashiCorp Nomad 0.11: The Sniper

If Kubernetes is a container ship and Swarm is a tugboat, Nomad is a speedboat. The 0.11 release, which shipped this spring, introduced CSI (Container Storage Interface) support, clearing the last major hurdle to wider adoption.

Nomad's killer feature? It doesn't just run containers. It runs binaries. Have a legacy Java JAR or a static Go binary? You don't need to wrap it in Docker if you don't want to. Nomad schedules it directly. This saves the overhead of the Docker daemon.
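
As a quick illustration (the file name and binary are made up), a raw executable can be scheduled with the exec driver in a handful of lines:

# Sketch: run a plain binary under Nomad's exec driver, no container image involved.
# The exec driver needs a Linux client running as root (it isolates via chroot + cgroups).
cat > hello.nomad <<'EOF'
job "hello" {
  datacenters = ["dc1"]
  type        = "batch"

  group "demo" {
    task "print" {
      driver = "exec"
      config {
        command = "/bin/echo"
        args    = ["scheduled without a container"]
      }
    }
  }
}
EOF
nomad job run hello.nomad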

Here is a job specification for a three-instance Redis service in Nomad. Notice the HCL syntax: far cleaner than YAML hell.

job "redis" {
  datacenters = ["dc1"]
  type = "service"

  group "cache" {
    count = 3
    network {
      port "db" {
        to = 6379
      }
    }

    task "redis" {
      driver = "docker"
      config {
        image = "redis:5.0"
        ports = ["db"]
      }
      resources {
        cpu    = 500 # 500 MHz
        memory = 256 # 256 MB
      }
    }
  }
}
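
Save that as redis.nomad (the name is arbitrary) and get into the validate, plan, run habit:

# Check syntax, preview the scheduler's decisions, then submit
nomad job validate redis.nomad
nomad job plan redis.nomad
nomad job run redis.nomad

# Confirm placements afterwards
nomad job status redis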

The Hardware Reality: Virtualization Overhead

Orchestrators add a layer of abstraction. If you pile that on top of a "noisy neighbor" VPS provider, your latency stacks up. Container > Docker Daemon > Guest OS > Hypervisor > Host Hardware.

You cannot control the hypervisor, but you can choose a provider that doesn't oversubscribe CPU. At CoolVDS, we use KVM (Kernel-based Virtual Machine), which provides stricter isolation than container-based virtualization like OpenVZ/LXC. When you run a K8s node on our infrastructure, the CPU cycles you pay for are actually yours.
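
You can sanity-check this from inside any guest you already rent. Two quick signals: what the kernel thinks the hypervisor is, and how much CPU time the host is stealing from you:

# Identify the virtualization layer (reports "kvm" on a KVM guest)
systemd-detect-virt

# Watch the "st" (steal) column; consistently above a couple of percent
# means the host is overselling CPU
vmstat 1 5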

Pro Tip for Norway: With Datatilsynet (Norwegian Data Protection Authority) watching closely after Schrems II, ensure your encryption keys for these clusters are stored separately from the data volumes. We recommend using HashiCorp Vault on a separate, locked-down internal network interface.
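
A minimal sketch of that separation, assuming the internal interface sits at 10.0.0.5 and you bring your own TLS certificates (every path here is a placeholder):

# Bind Vault to the private interface only; key material never touches the public NIC
cat > /etc/vault.d/vault.hcl <<'EOF'
# Single-node file storage to keep the sketch simple; use raft or Consul in production
storage "file" {
  path = "/opt/vault/data"
}

listener "tcp" {
  address       = "10.0.0.5:8200"
  tls_cert_file = "/etc/vault.d/tls/vault.crt"
  tls_key_file  = "/etc/vault.d/tls/vault.key"
}
EOF
vault server -config=/etc/vault.d/vault.hcl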

Comparison: Which one fits your 2020 budget?

Feature                 | Kubernetes 1.18               | Docker Swarm             | Nomad 0.11
Complexity              | High (steep learning curve)   | Low (built into Docker)  | Medium (single binary)
Resource overhead       | High (1 GB+ RAM for master)   | Very low                 | Low (~50 MB RAM)
Stateful apps           | Excellent (StatefulSets)      | Poor                     | Good (CSI plugins)
Non-container workloads | No                            | No                       | Yes (Java, exec, QEMU)

Latency Matters: The Oslo Connection

If your target market is Norway or Northern Europe, physics is your enemy. Hosting your cluster in Frankfurt adds ~20-30ms round trip to Oslo. Hosting in US East adds ~90ms.

For a dynamic WordPress site or a Magento store, that latency kills your Time to First Byte (TTFB). By placing your K8s nodes in our Oslo datacenter, connected via NIX (Norwegian Internet Exchange), you drop that latency to sub-5ms for local users. Combined with our NVMe storage, the "snappiness" of the application is immediately noticeable.
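
Don't take those numbers on faith; measure them from where your users actually sit. curl's timing variables give you connect time and TTFB in one line (the URL is a placeholder):

# Connect time and time to first byte, as seen from the client
curl -o /dev/null -s -w "connect: %{time_connect}s  TTFB: %{time_starttransfer}s\n" \
  https://shop.example.no/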

Final Verdict

If you are building a resume, learn Kubernetes. If you are building a business with limited engineering resources, start with Swarm or Nomad.

But regardless of the orchestrator, the Schrems II ruling has made one thing clear: Data location is not optional anymore.

Do not let legal compliance or slow I/O sink your project. Deploy a KVM instance on CoolVDS today and get raw NVMe performance with full Norwegian data sovereignty.