The Orchestration Hangover
It is December 2023, and if I see one more LinkedIn thought leader blindly recommending Kubernetes for a static website hosting three JPEGs, I might just rm -rf my root directory. I have spent the last decade debugging distributed systems across Europe, from freezing server rooms in Tromsø to hyper-scale data centers in Amsterdam. The truth is uncomfortable: complexity is technical debt.
We are currently seeing a massive repatriation trend. Teams that went "cloud-native" in 2020 are now looking at their AWS bills and migrating back to bare metal or high-performance VPS solutions to regain control. But once you move off managed EKS or GKE, you have to run the control plane yourself. That changes the math entirely.
In this analysis, we look at the three contenders relevant right now: the industry standard (Kubernetes), the unkillable zombie (Docker Swarm), and the pragmatic alternative (Nomad). We will judge them on operational overhead, latency sensitivity, and suitability for the Norwegian regulatory landscape.
1. Kubernetes (K8s): The Enterprise Bazooka
Let’s be real. Kubernetes is the operating system of the cloud. By version 1.28 (released August 2023), it has stabilized significantly, removing a lot of the API churn we hated in the 1.1x days. But the operational cost of a self-hosted K8s cluster is non-zero. It requires strict adherence to best practices, particularly regarding etcd.
I recently audited a cluster for a FinTech startup in Oslo that was suffering from random API server timeouts. The culprit? High disk latency on the control plane nodes. etcd writes to disk synchronously; if an fsync on your underlying storage takes longer than roughly 10ms, your cluster stability evaporates.
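Before trusting a node with etcd, measure it. Here is a minimal probe I'd sketch in Python; the write-then-fsync pattern imitates etcd's WAL, while the sample count and block size are arbitrary choices of mine, and the 10ms threshold comes from etcd's published guidance:

```python
import os
import tempfile
import time

def fsync_latency_ms(samples: int = 50, block: bytes = b"x" * 2048) -> float:
    """Time synchronous writes the way etcd's WAL does: write, then fsync.
    Returns the worst-case latency in milliseconds."""
    fd, path = tempfile.mkstemp(prefix="fsync-probe-")
    try:
        worst = 0.0
        for _ in range(samples):
            os.write(fd, block)
            start = time.perf_counter()
            os.fsync(fd)  # etcd fsyncs on every WAL append
            worst = max(worst, (time.perf_counter() - start) * 1000)
        return worst
    finally:
        os.close(fd)
        os.unlink(path)

if __name__ == "__main__":
    worst = fsync_latency_ms()
    verdict = "fine for etcd" if worst < 10 else "too slow for a control plane"
    print(f"worst fsync: {worst:.2f} ms -> {verdict}")
```

Run it on the disk that will hold /var/lib/etcd; a fast NVMe volume typically lands in the low single-digit milliseconds.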
Pro Tip: Always run etcd on NVMe storage. On CoolVDS NVMe instances, we consistently measure write latencies below 2ms, which is critical for Raft consensus stability.

Configuration Reality Check
If you are deploying K8s, do not stick with defaults. Here is a kubeadm configuration snippet that keeps your control plane from being exposed unnecessarily, a mistake I see constantly in audits:
```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: 10.244.0.0/16
apiServer:
  extraArgs:
    authorization-mode: Node,RBAC
    # Strictly reject anonymous requests
    anonymous-auth: "false"
    enable-admission-plugins: NodeRestriction,PodSecurity
controllerManager:
  extraArgs:
    # Mark unresponsive nodes unhealthy faster than the 40s default
    node-monitor-grace-period: 20s
```

When to use K8s:
- You have a team of at least 3 DevOps engineers.
- You need the rich ecosystem (Helm, Operators, Prometheus integration).
- You are running microservices with complex inter-dependencies.
2. Docker Swarm: The "Good Enough" Hero
Docker Inc. might be focusing elsewhere, but Swarm Mode (built into Docker Engine 24.x) refuses to die. Why? Because it works. It is boring, predictable, and takes exactly two commands to set up. For many SMEs in Norway who just need to host a few Ruby or Python apps behind an Nginx proxy, K8s is overkill.
However, Swarm has limitations. Its overlay network can get jittery at scale (50+ nodes), and custom resource definitions don't exist. Yet, for a simple setup, the declarative simplicity is beautiful.
Here is a production-ready stack file. Notice the resource limits: never deploy containers without them, or a single memory leak in your app can invoke the kernel OOM killer and take the whole node down with it.
```yaml
version: "3.9"
services:
  web_app:
    image: registry.coolvds.com/app:v2.4
    deploy:
      replicas: 3
      update_config:
        parallelism: 1
        delay: 10s
      restart_policy:
        condition: on-failure
      resources:
        limits:
          cpus: "0.50"
          memory: 512M
        reservations:
          cpus: "0.25"
          memory: 256M
    networks:
      - internal_overlay

networks:
  internal_overlay:
    driver: overlay
    driver_opts:
      encrypted: "true"  # GDPR requirement if PII flows between nodes
```

When to use Swarm:
- You are a solo developer or small team.
- You want to go from "zero" to "deployed" in 10 minutes.
- Your architecture is relatively flat (Web -> API -> DB).
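One nice side effect of the reservations in the stack file above: capacity planning becomes simple arithmetic. A quick sketch, where the per-container numbers come from the stack file and the 2 vCPU / 4 GB node size is my assumption:

```python
# Capacity check for the stack file above: do the reserved resources fit the node?
replicas = 3
reserve_cpu, reserve_mem_mb = 0.25, 256   # per-container reservations
limit_cpu, limit_mem_mb = 0.50, 512       # per-container hard limits

node_cpu, node_mem_mb = 2, 4096           # assumed 2 vCPU / 4 GB VPS

needed_cpu = replicas * reserve_cpu
needed_mem = replicas * reserve_mem_mb
worst_case_mem = replicas * limit_mem_mb  # if every replica hits its limit

assert needed_cpu <= node_cpu and needed_mem <= node_mem_mb, "stack will not schedule"
print(f"guaranteed: {needed_cpu} vCPU / {needed_mem} MB, "
      f"worst case: {replicas * limit_cpu} vCPU / {worst_case_mem} MB")
# prints: guaranteed: 0.75 vCPU / 768 MB, worst case: 1.5 vCPU / 1536 MB
```

Do this math before a deploy, not after the pager goes off.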
3. HashiCorp Nomad: The Unix Philosophy
Nomad (v1.6 is current as of late 2023) is my personal favorite for mixed workloads. Unlike K8s, which focuses solely on containers, Nomad can schedule raw binaries, Java JARs, and even QEMU virtual machines. It integrates seamlessly with Consul for networking and Vault for secrets.
Nomad is a single binary. It is incredibly lightweight. I have run Nomad clients on 512MB VPS instances without issues, whereas a kubelet would be gasping for air. For high-performance computing or batch processing, Nomad is superior because it gets out of your way.
A typical Nomad job specification looks like this HCL block:
```hcl
job "payment-processor" {
  datacenters = ["oslo-dc1"]
  type        = "service"

  group "api" {
    count = 3

    network {
      port "http" {
        static = 8080
      }
    }

    task "server" {
      driver = "docker"

      config {
        image = "my-org/payment:1.4.2"
        ports = ["http"]
      }

      resources {
        cpu    = 500 # MHz
        memory = 256 # MB
      }

      service {
        name = "payment-api"
        port = "http"

        check {
          type     = "http"
          path     = "/health"
          interval = "10s"
          timeout  = "2s"
        }
      }
    }
  }
}
```

The Infrastructure Foundation: Latency and Sovereignty
Regardless of which orchestrator you choose, the laws of physics apply. Orchestration relies on a consensus algorithm (Raft in Swarm, Nomad, and etcd), and consensus protocols are chatty: the leader must deliver heartbeats well inside the election timeout. If your nodes are spread across cheap, oversold VPS providers with noisy neighbors stealing CPU cycles, heartbeats arrive late, elections thrash, and your cluster will partition.
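How chatty is chatty? A toy model (illustrative numbers only, not any real Raft implementation) shows how delivery jitter on an oversold host turns into spurious leader elections:

```python
import random

def elections_triggered(heartbeat_ms: float, election_timeout_ms: float,
                        net_jitter_ms: float, rounds: int = 10_000,
                        seed: int = 42) -> int:
    """Count heartbeat rounds where delivery jitter exceeds the election
    timeout, i.e. a follower gives up and starts a disruptive election."""
    rng = random.Random(seed)
    missed = 0
    for _ in range(rounds):
        # Heartbeat leaves on schedule but arrives after a jittery hop.
        arrival_ms = heartbeat_ms + rng.uniform(0, net_jitter_ms)
        if arrival_ms > election_timeout_ms:
            missed += 1
    return missed

# Clean link: 100 ms heartbeats, 1 s election timeout, ~2 ms of jitter.
assert elections_triggered(100, 1000, 2) == 0

# Oversold neighbor stealing CPU: delivery delayed by up to 1.5 s.
print("spurious elections:", elections_triggered(100, 1000, 1500), "/ 10000")
```

The budget is the election timeout minus the heartbeat interval; every millisecond of steal or network jitter eats into it.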
This is where the "CoolVDS Factor" becomes an architectural decision, not just a hosting one. We use KVM (Kernel-based Virtual Machine) virtualization exclusively. Unlike OpenVZ or LXC containers, KVM provides full hardware virtualization: if a neighbor spikes their CPU, your etcd heartbeats don't get delayed. That is what keeps you out of the dreaded "split-brain" scenario.
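You don't have to take any provider's word for it: on Linux guests, hypervisor theft shows up as the eighth value on the `cpu` line of /proc/stat. A small parser, demonstrated on a made-up sample line so it runs anywhere:

```python
def steal_fraction(cpu_line: str) -> float:
    """Fraction of CPU time stolen by the hypervisor, from a /proc/stat 'cpu' line.
    Field order: user nice system idle iowait irq softirq steal [guest guest_nice]."""
    fields = [int(x) for x in cpu_line.split()[1:]]
    steal = fields[7]
    return steal / sum(fields[:8])

# Made-up sample from an oversold box; on a real guest,
# read the first line of /proc/stat instead.
sample = "cpu  10132153 290696 3084719 46828483 16683 0 25195 1855436"
print(f"steal: {steal_fraction(sample):.1%}")
# prints: steal: 3.0%
```

Feed it the real first line of /proc/stat on your own guest; sustained steal above a few percent means your consensus heartbeats are at a neighbor's mercy.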
Data Sovereignty in 2023
Since the Schrems II ruling, moving personal data outside the EEA is a legal minefield. Using US-based hyperscalers requires complex Standard Contractual Clauses (SCCs) and Transfer Impact Assessments (TIAs). Hosting your orchestration layer on CoolVDS in Norway simplifies this compliance overhead immediately. Your data stays under Norwegian jurisdiction.
Comparison Matrix
| Feature | Kubernetes | Docker Swarm | Nomad |
|---|---|---|---|
| Complexity | High | Low | Medium |
| Resource Overhead | Heavy (~1GB+ RAM for control plane) | Minimal | Very Low (Single Binary) |
| State | etcd (Sensitive) | Raft (Built-in) | Raft (Built-in) |
| Workloads | Containers Only | Containers Only | Containers, binaries, Java, QEMU VMs |
| Best For | Enterprise Microservices | Simple Web Apps | Hybrid / High Performance |
Verdict: Choose Based on Pain Tolerance
If you need industry-standard resume fodder and have the budget for control plane redundancy, go with Kubernetes. But ensure your underlying storage is fast. Slow I/O kills K8s.
If you want to sleep at night and your stack is simple, Docker Swarm is still valid in 2023. Do not let the hype tell you otherwise.
If you want engineering purity and efficiency, Nomad is the tool. It scales massively without the bloat.
Whatever you choose, build it on iron that doesn't bend. Deploy a high-performance, KVM-backed instance on CoolVDS today, and check your latency from Oslo. Low ping means happy consensus algorithms.