Kubernetes vs. Docker Swarm vs. Nomad: The 2022 Orchestration Reality Check for Nordic Ops
If I see one more startup with three developers trying to deploy a massive Kubernetes cluster to host a simple WordPress site and a Node.js API, I’m going to unplug my server rack. It is January 2022, and the "resume-driven development" pandemic is arguably doing more damage to European infrastructure stability than the actual traffic spikes we saw last Black Friday. We are living in a post-Log4Shell world where patching speed matters more than theoretical scalability, and yet, teams are drowning in YAML files they don't understand. As a systems architect operating out of the Nordics, I have watched the container orchestration wars shift from a battle of features to a battle of complexity versus sanity. The reality of hosting in Norway—or anywhere in the EEA post-Schrems II—is that your choice of orchestrator now has legal implications alongside the technical ones. Do you go with the industry standard that requires a dedicated team to manage, the built-in solution that everyone says is "dead" but works perfectly, or the HashiCorp contender? Today, we are stripping away the marketing fluff to look at the raw I/O, the configuration overhead, and the latency realities of running containers on bare-metal-adjacent VPS infrastructure in Oslo.
The "Good Enough" Contender: Docker Swarm
Let’s address the elephant in the server room: Docker Swarm is not dead, and for 80% of the teams reading this, it is likely the superior choice for your workload in 2022. While Mirantis acquired Docker Enterprise back in 2019, the Swarm mode integration in the standard Docker Engine (currently v20.10.12) remains robust, incredibly fast to deploy, and blissfully simple compared to the cognitive load of Kubernetes. I recently consulted for a logistics firm in Bergen that was struggling with a complex K8s setup that kept crashing due to misconfigured Ingress controllers; we migrated them to Swarm in a weekend, and their operational overhead dropped by 90%. The beauty of Swarm lies in its architecture: if you can run a docker run command, you can manage a Swarm cluster, and because it relies on the standard Docker API, your CI/CD pipelines in GitLab or Jenkins need minimal changes to support it. There is no separate etcd cluster to manage (Swarm ships its own Raft-based store inside the manager nodes), no complex CNI plugins to debug when pods can't talk to each other, and the overlay network just works out of the box. You must, however, be aware of the limitations around stateful workloads and the lack of auto-scaling on custom metrics, which Kubernetes handles natively; for many teams, though, these are problems of a scale they will never reach. If your goal is to push containers to production on a low-latency VPS Norway instance without hiring a dedicated Site Reliability Engineer, Swarm is your friend.
# The simplicity of Swarm is its killer feature.
# Initialize the manager node:
docker swarm init --advertise-addr 10.0.0.5
# Create a simple overlay network for internal traffic:
docker network create --driver overlay --attachable app_net
# Deploy a stack (example docker-compose.yml):
version: '3.8'
services:
  web:
    image: nginx:1.21-alpine
    ports:
      - "80:80"
    networks:
      - app_net
    deploy:
      replicas: 3
      update_config:
        parallelism: 1
        delay: 10s
      restart_policy:
        condition: on-failure

# Reference the pre-created attachable overlay network, or the stack deploy will fail:
networks:
  app_net:
    external: true
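Assuming you save the file above as docker-compose.yml, getting it live from the manager node is a single command; the stack name web is arbitrary:

# Deploy the stack and let Swarm converge to 3 replicas:
docker stack deploy -c docker-compose.yml web
# Verify placement and replica health:
docker service ls
docker service ps web_web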
The Industry Standard: Kubernetes (K8s) v1.23
Kubernetes has won the war for enterprise orchestration, but it demands a blood sacrifice in the form of hardware resources and storage performance. In 2022, running Kubernetes v1.23 (the latest stable release as of December '21) is a dream for scalability but a nightmare for I/O if your underlying infrastructure is weak. Here is a war story from last month: a client was running a 5-node K8s cluster on budget VPS providers (not CoolVDS) with standard SSDs, and their cluster kept falling apart during high-write operations. The culprit was etcd latency. Kubernetes relies heavily on etcd for state, and if disk fsync latency exceeds a few milliseconds, leader elections time out and the control plane starts marking healthy nodes as NotReady. This is where the hardware beneath the virtualization matters immensely; on CoolVDS, we utilize NVMe storage which provides the high IOPS required to keep etcd happy, ensuring that your API server remains responsive even when you are blasting logs or sitting under heavy database churn. Furthermore, with the deprecation of Dockershim looming (removed in the upcoming 1.24, but we are safe in 1.23), you need to be comfortable with containerd or CRI-O. If you are subject to GDPR and Norwegian data laws, self-hosting K8s on local infrastructure is often the only way to guarantee Datatilsynet compliance, as managed K8s offerings from US hyperscalers frequently involve data telemetry that crosses the Atlantic.
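Before you even install the control plane, you can test whether a disk will keep etcd happy. The fio run below is a minimal sketch of the widely used etcd WAL benchmark (small sequential writes with an fdatasync after each one); the target directory is just an example, and etcd's own guidance is to keep 99th-percentile fdatasync latency under roughly 10 ms:

# Create a scratch directory on the disk that will hold etcd, then run:
fio --rw=write --ioengine=sync --fdatasync=1 \
    --directory=/var/lib/etcd-bench --size=22m --bs=2300 \
    --name=etcd-wal-test
# Read the fsync/fdatasync latency percentiles in the output; if p99 creeps
# past ~10ms, expect leader elections and NotReady nodes under load.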
Pro Tip: When running Kubernetes on bare VPS instances, always tune your kubelet configuration for the specific hardware capabilities. If you are using CoolVDS High Performance instances, ensure you reserve compute resources for system daemons to prevent the kubelet from being starved during load spikes.
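As a minimal sketch of that reservation, here is a KubeletConfiguration fragment; the path is the kubeadm default, and the CPU and memory figures are placeholders to size against your own instance, not recommendations:

# /var/lib/kubelet/config.yaml (values are illustrative)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
systemReserved:
  cpu: "500m"
  memory: "512Mi"
kubeReserved:
  cpu: "500m"
  memory: "512Mi"
evictionHard:
  memory.available: "200Mi"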
# A standard deployment manifest for 2022.
# Note the resource limits - critical for stable QoS.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nordic-api
  labels:
    app: nordic-api
spec:
  replicas: 5
  selector:
    matchLabels:
      app: nordic-api
  template:
    metadata:
      labels:
        app: nordic-api
    spec:
      containers:
      - name: api-server
        image: my-registry.no/backend:v2.4.1
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: "256Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "500m"
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 3
          periodSeconds: 3
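Rolling it out and watching it converge is plain kubectl; the filename is simply whatever you saved the manifest as:

kubectl apply -f nordic-api-deployment.yaml
kubectl rollout status deployment/nordic-api
# Keep an eye out for OOMKilled restarts if the limits above are too tight:
kubectl get pods -l app=nordic-api -w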
The Hipster's Choice: HashiCorp Nomad
If Kubernetes is a container ship and Swarm is a tugboat, HashiCorp Nomad is a speedboat that can carry anything, not just containers. In early 2022, Nomad is gaining serious traction among DevOps teams who are tired of Kubernetes' complexity but need more flexibility than Swarm. The single binary architecture of Nomad is a breath of fresh air; you download one binary, run it as a server or client, and you are done. Unlike K8s, which is opinionated about using Docker/containerd, Nomad can orchestrate Java JARs, QEMU virtual machines, or simple shell commands just as easily as Docker containers. For a legacy modernization project I handled in Oslo involving old Java 8 applications that couldn't be easily containerized due to weird kernel dependencies, Nomad allowed us to schedule them across the cluster alongside modern Go microservices without missed heartbeats. It integrates seamlessly with Consul for service discovery and Vault for secrets (also HashiCorp), creating a potent stack that is often easier to secure and audit for GDPR compliance than a sprawling K8s cluster with fifty third-party operators. However, the ecosystem is smaller, and you won't find as many "Helm charts" off the shelf, meaning you will be writing more HCL (HashiCorp Configuration Language) from scratch.
# Nomad job specification (HCL)
job "docs" {
  datacenters = ["oslo-dc1"]

  group "web" {
    count = 3

    network {
      port "http" {
        to = 80
      }
    }

    task "server" {
      driver = "docker"

      config {
        image = "httpd:2.4"
        ports = ["http"]
      }

      resources {
        cpu    = 500
        memory = 256
      }
    }
  }
}
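Submitting the job is just as terse, assuming you save the spec above as docs.nomad and your Nomad servers are reachable at the default address:

# Dry-run to see what the scheduler intends to place where:
nomad job plan docs.nomad
# Submit, then inspect the allocations:
nomad job run docs.nomad
nomad job status docs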
Latency, Law, and The Hardware Layer
The Schrems II Reality
Since the CJEU Schrems II ruling in 2020, the legal landscape for European hosting has been a minefield. Many DevOps engineers ignore this, but a "Pragmatic CTO" cannot. If you use a managed Kubernetes service from a US-based provider, even in their "Europe" region, you are potentially exposing your company to legal action from Datatilsynet if encryption keys are managed by the US entity. This is why self-hosting your orchestration layer on a purely European provider like CoolVDS is not just a performance decision; it is a compliance strategy. We provide the raw compute—KVM virtualization on dedicated hardware—and you hold the keys.
Why NVMe and Latency Matter
Container orchestration is noisy. You have logs writing, overlay networks encapsulating packets, and databases syncing to disk. On standard spinning rust or cheap SATA SSDs, your I/O Wait (iowait) will spike, causing what we call "phantom latency" where the CPU is idle but the application is stalled waiting for the disk. CoolVDS instances use NVMe storage, which offers massive parallelism in I/O queues. When you scale a deployment from 2 replicas to 20 in Kubernetes, that storm of read/write operations can crush a standard VPS. On our infrastructure, it barely registers.
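You can watch this phantom latency happen with iostat from the sysstat package; the columns to stare at are %iowait and the per-device await values:

# Extended device statistics, sampled every second, five times:
iostat -x 1 5
# High %iowait with an otherwise idle CPU means the box is stalled on disk.
# On NVMe, r_await/w_await should stay around a millisecond or less even
# while the orchestrator reschedules dozens of containers at once.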
| Feature | Docker Swarm | Kubernetes | Nomad |
|---|---|---|---|
| Complexity | Low | High | Medium |
| Component Count | 1 (Built-in) | 4+ (Etcd, API, Kubelet...) | 1 (Binary) |
| Stateful Storage | Difficult | Excellent (CSI) | Good (CSI/Host) |
| Ideal For | Small/Med Teams | Enterprise/Complex | Mixed Workloads |
Conclusion: Choose Your Weapon
There is no silver bullet, only trade-offs. If you want to deploy in 5 minutes, use Swarm. If you need to replicate Google's infrastructure, use Kubernetes. If you have a weird mix of binaries and containers, use Nomad. But remember: an orchestrator is only as stable as the Linux kernel and hardware it runs on. Don't build a Ferrari engine and put it inside a rusted chassis.
Ready to build a compliant, low-latency cluster? Don't let slow I/O kill your SEO. Deploy a high-performance NVMe test instance on CoolVDS in 55 seconds and see the difference raw power makes.