Kubernetes vs. Swarm vs. Nomad: The 2021 Guide to Orchestration on Nordic Infrastructure
Let’s be honest: in 2021, “Kubernetes” has become a synonym for “deployment” in too many engineering meetings. I recently consulted for a mid-sized SaaS in Oslo that was burning 40% of their monthly cloud budget just on the control plane overhead for a cluster running... wait for it... five microservices. They had built a Ferrari to drive to the grocery store.
As a Systems Architect who has spent the last decade watching the ecosystem evolve from manual chroot jails to the behemoth that is Kubernetes 1.22, I’ve learned that complexity is technical debt. If you are deploying in the Nordic region, you have an added layer of constraints: GDPR compliance (Schrems II is still fresh in our minds), latency requirements to the NIX (Norwegian Internet Exchange), and the sheer cost of compute.
Today, we aren't just comparing features. We are looking at the operational reality of running these orchestrators on bare-metal equivalent KVM VPS instances. Because if your underlying storage is slow, your shiny orchestration layer will collapse.
The State of Orchestration in Late 2021
1. Kubernetes (The De Facto Standard)
Kubernetes has won the war. With the recent deprecation of Dockershim, the ecosystem is consolidating around containerd and CRI-O. However, K8s is resource-hungry. The etcd database, the brain of your cluster, is notoriously sensitive to disk latency. If your VPS provider is overselling storage I/O, your API server will start timing out.
2. Docker Swarm (The Pragmatic Choice)
Despite the rumors of its demise, Swarm mode is alive and well in Docker CE. It is arguably the best choice for teams of 2-5 developers who need high availability without hiring a full-time Site Reliability Engineer. The cognitive load is near zero compared to K8s.
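To see what "near zero" means in practice, this is the entire bootstrap for a replicated, load-balanced service. A minimal sketch, with <MANAGER_IP> as a placeholder for your node's address:
# Turn a plain Docker host into a one-node Swarm
docker swarm init --advertise-addr <MANAGER_IP>
# Run three replicas behind Swarm's built-in routing mesh
docker service create --name web --replicas 3 --publish 80:80 nginx:alpine
# Watch the scheduler place the tasks
docker service ps web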
3. HashiCorp Nomad (The Unix Way)
Nomad is the dark horse. It doesn't just run containers; it runs binaries, Java JARs, and practically anything else. It adheres strictly to the Unix philosophy: do one thing well. It doesn’t try to manage your networking or storage like K8s does; it just schedules.
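The workflow is equally spartan. A rough sketch of a first session (nomad job init scaffolds an example.nomad file; -dev mode is for experimentation, never production):
# Single binary, single-node dev agent
nomad agent -dev &
# Scaffold and submit an example job
nomad job init
nomad job run example.nomad
nomad job status example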
The "Hidden" Bottleneck: Etcd and Disk Latency
Here is the war story. Last year, I debugged a K8s cluster that kept partitioning. The logs showed etcdserver: took too long (180ms) to execute. The culprit wasn't CPU; it was the underlying storage. The hosting provider used spinning HDDs in a RAID array that was getting hammered by neighbors.
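If you suspect the same problem, grep your etcd logs for these slow-apply warnings before blaming the network. A sketch assuming a kubeadm-style cluster, where the etcd static pod is named after the node:
kubectl -n kube-system logs etcd-$(hostname) | grep "took too long"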
For any production orchestration, NVMe storage is not a luxury; it is a requirement. When we benchmark CoolVDS NVMe instances, we specifically look at fsync latency, which is critical for the Raft consensus algorithm used by K8s and Swarm.
Pro Tip: Before installing K8s, benchmark your disk's fsync behavior. If the 99th percentile of fdatasync latency is over 10ms, do not deploy etcd there.
Benchmarking Your Node's I/O
Run this fio command on your VPS to simulate the write pattern of etcd (the odd 2300-byte block size roughly mirrors the size of a typical etcd write):
mkdir -p test-data
fio --rw=write --ioengine=sync --fdatasync=1 --directory=test-data --size=22m --bs=2300 --name=mytest
On a standard CoolVDS instance, we typically see fdatasync latencies well under 1ms. If you are seeing 50ms+, move your workload immediately.
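The number to watch is the 99th percentile in the fsync/fdatasync section of fio's output. Trimmed, illustrative output from a healthy NVMe node looks something like this (your numbers will differ):
fsync/fdatasync/sync_file_range:
  sync (usec): min=412, max=2831, avg=674.21
  sync percentiles (usec): ..., 99.00th=[ 1385], 99.90th=[ 2311]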
Tutorial: Deploying Lightweight K8s (K3s) on CoolVDS
For most Norwegian startups, full Kubernetes is overkill. K3s (by Rancher) is a certified Kubernetes distribution that strips out the legacy and cloud-provider bloat, and it runs comfortably in 512MB of RAM.
Here is how to set up a production-ready K3s node on a CoolVDS instance running Ubuntu 20.04.
Step 1: The Pre-flight Check
Disable swap (K8s hates swap) and configure the firewall.
# Disable swap
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab
# Allow required ports (if using ufw)
sudo ufw allow 6443/tcp # API Server
sudo ufw allow 10250/tcp # Kubelet metrics
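If you plan to join agent nodes later, Flannel's VXLAN overlay also needs UDP 8472 open between the nodes:
sudo ufw allow 8472/udp # Flannel VXLAN (node-to-node)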
Step 2: Install K3s with Flannel Backend
We will execute the install script, passing the public IP explicitly so the node advertises an address that clients (and any future agent nodes) can actually reach, and pinning the Flannel backend to VXLAN:
curl -sfL https://get.k3s.io | sh -s - server \
--node-external-ip=<YOUR_PUBLIC_IP> \
--flannel-backend=vxlan
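To grow this into a multi-node cluster later, it is the same script with two environment variables. A sketch, with <SERVER_IP> and <TOKEN> as placeholders:
# On the server: print the join token
sudo cat /var/lib/rancher/k3s/server/node-token
# On each agent node:
curl -sfL https://get.k3s.io | K3S_URL=https://<SERVER_IP>:6443 K3S_TOKEN=<TOKEN> sh -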
Step 3: Verification
Check the node status. It should be Ready in seconds due to the lack of overhead.
sudo k3s kubectl get nodes -o wide
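As a quick smoke test, schedule a throwaway nginx deployment and expose it (the name hello is arbitrary):
sudo k3s kubectl create deployment hello --image=nginx:alpine
sudo k3s kubectl expose deployment hello --port=80 --type=NodePort
sudo k3s kubectl get svc hello # note the assigned NodePort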
The Compliance Angle: Why Location Matters
We cannot talk about orchestration in 2021 without mentioning Schrems II. The EU Court of Justice ruling has made using US-controlled cloud providers for processing EU citizen data legally risky.
When you deploy your orchestration layer on CoolVDS, your data resides physically in Norway. You aren't just getting low latency to Oslo users; you are gaining a significant compliance asset. Your encrypted volumes and your etcd state files stay within the jurisdiction. For a CTO, this reduces the legal attack surface significantly.
Comparison Table: Selecting Your Orchestrator
| Feature | Kubernetes (K8s) | Docker Swarm | Nomad |
|---|---|---|---|
| Learning Curve | Steep (Weeks/Months) | Low (Hours) | Medium (Days) |
| Resource Overhead | High (Requires ~2GB RAM for control plane) | Very Low (Built into engine) | Extremely Low (Binary is ~50MB) |
| Ideal Use Case | Complex microservices, Enterprise scale | Small teams, Simple web apps | Mixed workloads (Legacy + Docker) |
| Storage Requirement | High Performance (NVMe Mandatory) | Moderate | Moderate |
Configuration Deep Dive: Optimizing Nginx Ingress
Regardless of the orchestrator, you will likely use Nginx as your ingress controller. A common mistake I see is leaving the default buffer sizes in place: undersized client header buffers trigger 400 "Request Header Or Cookie Too Large" responses, and undersized proxy buffers cause 502 errors when an upstream replies with large headers (common with OAuth2 tokens and fat session cookies).
Here is a snippet for your nginx.conf (or the equivalent keys in your ingress controller's ConfigMap) to handle production traffic loads typical for e-commerce sites:
http {
    # Optimize for high concurrency
    keepalive_timeout 65;
    keepalive_requests 100000;

    # Client buffers: the stock 1k/8k header buffers choke on large
    # OAuth2 cookies, so we raise them
    client_body_buffer_size 128k;
    client_max_body_size 10m;
    client_header_buffer_size 4k;
    large_client_header_buffers 4 16k;

    # Proxy buffers: undersized values here are the classic source of
    # "upstream sent too big header" 502s
    proxy_buffer_size 16k;
    proxy_buffers 4 16k;

    # File descriptor cache for performance
    open_file_cache max=200000 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;
}
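Whichever way you deliver this config, validate it before reloading; nginx -t catches the typo that would otherwise take your ingress down:
sudo nginx -t && sudo nginx -s reload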
Conclusion: Don't Build on Sand
Whether you choose the rigor of Kubernetes or the simplicity of Swarm, your orchestrator is only as stable as the kernel it runs on. Shared container hosting often restricts kernel flags or suffers from "noisy neighbor" CPU steal, causing erratic scheduler behavior.
At CoolVDS, we provide the raw, unadulterated KVM performance required to run these control planes reliably. With 100% NVMe storage and data centers located right here in Norway, you get the low latency your users demand and the data sovereignty your legal team requires.
Ready to stabilize your stack? Deploy a high-performance KVM instance on CoolVDS today and experience the difference true isolation makes.