Stop Treating Your Orchestrator Like a Silver Bullet
I’ve lost count of how many post-mortems I’ve sat through where the conclusion was "Kubernetes is too complex" or "Docker Swarm couldn't scale." In 90% of those cases, the software wasn't the problem. The problem was the underlying metal. You can’t put a race car engine in a rusted chassis and expect to win Le Mans.
If you are deploying container orchestration in 2023 without understanding the physical limitations of your infrastructure—specifically storage latency and network jitter—you are designing for failure. Today, we aren't just comparing feature lists. We are looking at how Kubernetes, Docker Swarm, and K3s actually behave when the pressure is on, particularly in the context of the Norwegian digital ecosystem, where data sovereignty (Schrems II) and latency to NIX (the Norwegian Internet Exchange) define success.
The Contenders: A Reality Check
Let's cut through the marketing noise. Here is what you are actually choosing between.
| Feature | Kubernetes (K8s) | Docker Swarm | K3s |
|---|---|---|---|
| Complexity | High. Requires a dedicated ops team. | Low. Built into the Docker CLI. | Medium-Low. Single binary. |
| State Store | etcd (extremely I/O-sensitive) | Embedded Raft log (far less demanding) | SQLite / external DB / embedded etcd |
| Use Case | Enterprise, Hybrid Cloud | Simple web clusters, legacy | Edge, IoT, CI/CD pipelines |
The Silent Killer: Etcd and Disk Latency
Here is the war story. Last winter, a client came to me with a Kubernetes cluster that was "flapping" every night at 02:00. Nodes would mark themselves NotReady, pods would reschedule, and the site would 503 for two minutes.
They were hosting on a budget provider with "SSD" storage. What the provider didn't mention was that it was network-attached storage with massive noisy neighbor issues. At 02:00, someone else on that rack was running a backup.
Kubernetes relies on etcd, and etcd is paranoid by design. Every write is fdatasync'd to the write-ahead log before it is acknowledged. If that fsync stalls for more than a few milliseconds, heartbeats miss their deadlines, the followers assume the leader is dead, and they trigger an election. Chaos ensues.
We diagnosed it by running fio on the etcd partition. The results were terrifying:
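# The 2300-byte writes with an fdatasync after each one roughly mirror etcd's WAL write pattern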
fio --rw=write --ioengine=sync --fdatasync=1 --directory=test-data --size=22m --bs=2300 --name=mytest
On their old provider, the 99th percentile fdatasync latency spiked to 45ms. The etcd documentation recommends staying under 10ms. We migrated the control plane to CoolVDS instances backed by local NVMe. The result? Latency dropped to 1.2ms, and the cluster hasn't flapped since.
Pro Tip: If you are running your own K8s control plane, watch your etcd metrics. If the 99th percentile of etcd_disk_wal_fsync_duration_seconds consistently hits 0.1s, you need to move to better hardware immediately.
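No Prometheus handy? You can pull the histogram straight off etcd's metrics endpoint. A quick sketch, assuming a kubeadm-style layout where the client certificates live under /etc/kubernetes/pki/etcd (adjust paths and address for your cluster):
# Dump the WAL fsync latency histogram directly from etcd
curl -s --cacert /etc/kubernetes/pki/etcd/ca.crt \
  --cert /etc/kubernetes/pki/etcd/healthcheck-client.crt \
  --key /etc/kubernetes/pki/etcd/healthcheck-client.key \
  https://127.0.0.1:2379/metrics | grep wal_fsync_duration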
Docker Swarm: The "Good Enough" Solution?
Don't let the CNCF landscape fool you; Docker Swarm isn't dead. For a standard PHP/Nginx setup targeting the Norwegian market, K8s is often overkill. Swarm allows you to spin up a cluster in seconds.
However, Swarm lacks the sophisticated ingress controllers and observability ecosystem of K8s. If you need complex traffic splitting (Canary deployments) or service meshes like Istio, Swarm will fight you. But for raw simplicity?
# Initialize the manager
docker swarm init --advertise-addr 192.168.1.10
# Join a worker (run on the worker node)
docker swarm join --token SWMTKN-1-49nj1cmql0jkz5s954yi3oex3nedyz0fb0xx14ie39trti4wxv-8vxv8rssmk743ojnwacrr2e7c 192.168.1.10:2377
That’s it. You have a cluster. But remember: Swarm's overlay network adds overhead. On standard VPS providers, this encapsulation can drop throughput by 10-15%. We optimize our kernel networking stack at CoolVDS to minimize this penalty, ensuring your VXLAN traffic flows almost as fast as bare metal.
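Don't take that percentage on faith; measure it on your own nodes. A rough sketch using iperf3 between two Swarm nodes (the bench network name and the networkstatic/iperf3 image are just examples, swap in whatever you prefer), comparing the overlay against the host network:
# Create an attachable overlay so plain containers can join it from any node
docker network create --driver overlay --attachable bench
# On node A: start the iperf3 server on the overlay
docker run -d --rm --name iperf-srv --network bench networkstatic/iperf3 -s
# On node B: run the client over the overlay, then repeat against node A's IP with --network host to compare
docker run --rm --network bench networkstatic/iperf3 -c iperf-srv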
K3s: The Lightweight Champion
K3s has stripped out the legacy cloud provider plugins and alpha features from K8s. It is perfect for developers who want the K8s API without the memory tax. I run K3s for CI/CD runners.
A critical configuration often missed in K3s is the datastore backend. By default, a single-server install uses SQLite (via kine). That is fine for a lab, but for production you want to point it at an external database or run embedded etcd in HA mode.
# /etc/rancher/k3s/config.yaml
token: "SECRET_TOKEN"
tls-san:
- "my-cluster.coolvds.net"
disable:
- "traefik" # If you prefer Nginx Ingress
node-taint:
- "CriticalAddonsOnly=true:NoExecute"
The Norwegian Context: GDPR and Schrems II
Technical architecture does not exist in a vacuum. If you are a business in Oslo handling customer data, you have Datatilsynet breathing down your neck. Using managed Kubernetes from US-based hyperscalers puts you in a grey area regarding data transfers (Schrems II ruling).
Hosting your orchestration layer on Norwegian VPS infrastructure solves this. You keep the data physically in Norway. You own the encryption keys. You control the compliance.
Latency Matters
If your users are in Scandinavia, every round trip to Frankfurt or Amsterdam is wasted time. Hosting in a Norwegian datacenter cuts the RTT from roughly 30ms to around 2ms. For a chatty, database-heavy application, that accumulated latency is the difference between "snappy" and "sluggish."
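Measure it from where your users actually sit rather than trusting round numbers. A trivial check (the hostnames are placeholders for your own endpoints):
# Compare RTT from a client in Norway to a Frankfurt-hosted endpoint vs. a Norwegian one
ping -c 10 app-frankfurt.example.com
ping -c 10 app-oslo.example.com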
Optimizing the Node: Kernel Tuning
Regardless of the orchestrator, you must tune the Linux kernel on your nodes. Default distros are tuned for desktop or general usage, not high-throughput container routing.
Add this to your /etc/sysctl.d/99-k8s.conf:
# Raise the inotify limits; containerized workloads and log tailers exhaust the defaults quickly
fs.inotify.max_user_instances = 8192
fs.inotify.max_user_watches = 524288
# Essential for high-traffic overlay networks
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.neigh.default.gc_thresh1 = 1024
net.ipv4.neigh.default.gc_thresh2 = 2048
net.ipv4.neigh.default.gc_thresh3 = 4096
Apply it with sysctl --system (a plain sysctl -p only reloads /etc/sysctl.conf, not the files under /etc/sysctl.d/). If you hit "Neighbor table overflow" errors in dmesg, the gc_thresh settings above are your fix.
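One gotcha: the net.bridge.* keys only exist once the br_netfilter kernel module is loaded, so sysctl will complain on a fresh node until you load it:
# Load br_netfilter now and on every boot, then re-apply the sysctls
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
sysctl --system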
Why Infrastructure Choice is Binary
You can spend weeks optimizing your Helm charts, but if your underlying hypervisor is stealing CPU cycles or choking on I/O, you will fail. We built CoolVDS on KVM (Kernel-based Virtual Machine) because it offers strict isolation. Unlike OpenVZ or LXC where you share the kernel with the host (and potentially other noisy neighbors), KVM gives you dedicated resources.
When running Kubernetes, you need that isolation. You need NVMe storage that handles the 5,000 IOPS your database demands during a traffic spike without sweating. You need a network pipe that respects the low latency required for synchronous replication.
Whether you choose the complexity of K8s or the simplicity of Swarm, build it on a foundation that respects the physics of computing. Don't let slow I/O kill your uptime.
Ready to build a cluster that actually stays up? Deploy a high-performance, NVMe-backed instance on CoolVDS today and see what 1ms latency feels like.