Orchestration on the Edge: Kubernetes vs. Docker Swarm for Norwegian Enterprises
It is February 2018, and if I hear one more recruiter ask if I have five years of Kubernetes experience, I might just `rm -rf /` my own laptop. The industry is in a state of manic depression regarding containers. On one hand, we have the looming May 25th GDPR deadline, forcing every CTO in Oslo to suddenly care deeply about exactly where their data lives. On the other, we have the technical reality: running distributed systems is hard.
I’ve spent the last six months migrating a monolithic Magento stack to microservices for a client in Stavanger. We started with the dream of Google-scale infrastructure. We ended up with a profound appreciation for simplicity. The question isn't just "which tool is better?" It is about which tool won't wake you up at 3:00 AM because an overlay network partition confused your service discovery.
Let's cut through the Silicon Valley marketing noise. If you are running infrastructure in Norway, you need stability, you need low latency to the NIX (Norwegian Internet Exchange), and you need to know if you should bet your stack on the complexity of Kubernetes or the simplicity of Docker Swarm.
The Current State of Affairs (Early 2018)
Kubernetes (K8s) has effectively won the mindshare war. With version 1.9 released recently, the `apps/v1` API is finally stable. That is a big deal for those of us tired of rewriting YAML files every three months. However, Docker Swarm (bundled with Docker CE) is still fighting. It is integrated, it is fast, and it doesn't require a dedicated team of five engineers just to manage the control plane.
The "Battle-Hardened" Reality Check
Here is the truth: Your orchestrator is only as good as the iron it runs on. I have seen Kubernetes clusters implode not because of bad config, but because of "noisy neighbors" on cheap, oversold cloud instances stealing CPU cycles. When etcd doesn't get the disk I/O it needs to write to the WAL (Write Ahead Log), your cluster leader election fails. The cluster goes down. You lose money.
Pro Tip: Never run a production orchestrator on standard HDD VPS hosting. The random I/O requirements of container logging and state management demand NVMe storage. This is why we benchmark everything on CoolVDS NVMe instances—if the underlying I/O latency spikes above 10ms, your distributed database is already inconsistent.
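A quick way to spot-check that latency yourself before trusting a host, assuming the `ioping` utility is installed (it is packaged for most distros):

```shell
# Measure I/O latency on the directory where your container state lives.
# 10 requests is enough to expose a noisy neighbor.
ioping -c 10 /var/lib/docker
# Sustained averages above a few milliseconds on supposedly "SSD" hosting
# usually mean oversold or network-backed storage.
```

Run it a few times during peak hours; a single quiet-hour sample proves nothing.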
Contender 1: Docker Swarm (The Pragmatic Choice)
Swarm is beautiful in its boredom. You install Docker, and you are done. There is no external database to manage. The Raft consensus is built-in. For a team of three developers managing a high-traffic media site in Trondheim, Swarm is often the correct choice.
Initializing a swarm on a CoolVDS instance takes literally one command:
```shell
root@node-1:~# docker swarm init --advertise-addr 10.0.0.5
Swarm initialized: current node (dxn1...) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-49nj1... 10.0.0.5:2377
```
That is it. You have a cluster. The networking is handled by the overlay driver, which, while occasionally finicky with MTU sizes, generally just works. Here is a typical stack deployment using a version 3 `docker-compose.yml`:
```yaml
version: '3'
services:
  web:
    image: nginx:alpine
    deploy:
      replicas: 5
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
    ports:
      - "80:80"
    networks:
      - webnet
networks:
  webnet:
```
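Deploying it is equally boring, in the best way. A sketch, assuming the file is saved as `docker-compose.yml` on a manager node (the stack name `web_stack` is my choice, pick your own):

```shell
# Deploy the stack defined in the compose file above
docker stack deploy -c docker-compose.yml web_stack

# Verify that all five replicas converged and see where they landed
docker service ls
docker service ps web_stack_web
```

Rolling updates then become a matter of `docker service update --image nginx:1.13.9 web_stack_web`, and Swarm honors the `parallelism` and `delay` settings from the file.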
The Downside: In 2018, Swarm's future is... ambiguous. Kubernetes is sucking all the oxygen out of the room. Third-party integrations for monitoring (like Prometheus) are better supported on K8s.
Contender 2: Kubernetes 1.9 (The Heavy Artillery)
Kubernetes is not an orchestrator; it is a framework for building platforms. It is powerful, but it punishes ignorance. To run K8s on bare metal or VPS (which you should do for data sovereignty in Norway, rather than handing your data to US cloud giants), you likely use kubeadm.
Setting up the cluster requires careful tuning of the host OS. You cannot just spin it up and hope for the best. You need to disable swap, tune sysctl, and ensure your container runtime is rock solid.
```
# /etc/sysctl.d/k8s.conf
# Required for heavy traffic loads and CNI plugins
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
```
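The sysctl file alone is not enough. Since 1.8, the kubelet refuses to start while swap is active, and the bridge settings need the `br_netfilter` module loaded. A minimal prep sequence on each node looks roughly like this:

```shell
# Disable swap now and keep it off across reboots; the kubelet
# will not start with swap enabled (short of --fail-swap-on=false,
# which you should not use in production)
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab

# Load br_netfilter so the bridge-nf sysctls actually exist
modprobe br_netfilter

# Apply everything under /etc/sysctl.d/, including k8s.conf above
sysctl --system
```

Do this before `kubeadm init`, not after, or you will chase confusing preflight errors.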
Once you initialize with kubeadm init, you are responsible for choosing a Pod Network Add-on. We use Calico or Weave Net for our Norwegian clients because they handle network policies better—essential for GDPR compliance where you must prove that Service A (Customer Data) cannot talk to Service B (Public Frontend) without authorization.
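As a sketch of that isolation, here is what such a policy looks like with the stable `networking.k8s.io/v1` API. The namespace, labels, and port are illustrative, not from any real deployment:

```yaml
# Illustrative policy: only pods labelled role=api may reach the
# customer-data pods; the public frontend is denied by default.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-customer-data
  namespace: customer-data
spec:
  podSelector:
    matchLabels:
      app: customer-db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: api
    ports:
    - protocol: TCP
      port: 5432
```

Remember: the API object exists on every cluster, but it does nothing unless your CNI plugin (Calico, Weave Net) actually enforces it. Flannel, for instance, will silently ignore it.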
Here is what a basic Deployment looks like in K8s. Notice the verbosity compared to Swarm:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.13.9
        ports:
        - containerPort: 80
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
```
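Shipping it is the usual `kubectl` dance. Assuming the manifest above is saved as `nginx-deployment.yaml` (my filename) and your kubeconfig points at the cluster:

```shell
# Create or update the Deployment declaratively
kubectl apply -f nginx-deployment.yaml

# Block until the rollout completes (or fails)
kubectl rollout status deployment/nginx-deployment

# Confirm the three replicas and which nodes they landed on
kubectl get pods -l app=nginx -o wide
```

The `rollout status` step is worth wiring into your deploy script; it turns a silent failed rollout into a loud CI failure.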
The Infrastructure "Gotcha": Latency and Persistence
Whether you choose Swarm or Kubernetes, the bottleneck in 2018 is almost always storage. Containers are ephemeral, but data is forever. When you map a persistent volume to a container, that write operation needs to hit the disk immediately.
In a recent benchmark we ran between a budget VPS provider and CoolVDS, we measured etcd write latencies. On the budget provider (using shared mechanical drives), fsync latency frequently spiked to 40ms. In Kubernetes, this causes leader election timeouts. The control plane thinks the master node is dead and starts a chaotic failover process.
On CoolVDS NVMe KVM instances, fsync latency stayed consistently under 2ms.
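You do not have to take our numbers on faith. Our benchmark followed the common fio-based etcd disk check; the size and block-size values below mirror that recipe, so point `--directory` at the volume that will hold `/var/lib/etcd` and run it yourself:

```shell
# Synchronous writes with an fdatasync after every write, which is
# exactly the I/O pattern of etcd's write-ahead log
fio --name=etcd-fsync --directory=/var/lib/etcd \
    --rw=write --ioengine=sync --fdatasync=1 \
    --size=22m --bs=2300
# Inspect the fsync/fdatasync latency percentiles in the output;
# a 99th percentile above ~10ms will destabilize leader elections.
```

If a provider will not let you run this before committing, that tells you something too.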
Why does this matter? Because when Datatilsynet (The Norwegian Data Protection Authority) comes knocking post-May 2018 asking for audit logs, you cannot tell them "sorry, my cluster split-brained and lost the logs." Reliability is not optional anymore. It is a legal requirement.
Decision Matrix: What to Choose?
| Feature | Docker Swarm | Kubernetes 1.9 |
|---|---|---|
| Learning Curve | Low (Hours) | High (Weeks/Months) |
| Installation | Native (Pre-installed) | Complex (kubeadm, etcd setup) |
| Scalability | Good (~1000 nodes) | Massive (~5000+ nodes) |
| Data Persistence | Basic Volume plugins | Advanced (PVC/PV, StorageClasses) |
| Ideal For | Small/Medium Dev Teams | Enterprise / Multi-tenant |
The Local Angle: Latency to Oslo
For our clients serving the Nordic market, physics is the final arbiter. Hosting your cluster in Frankfurt or London adds 20-30ms of round-trip latency to Norwegian users. Hosting in the US adds 100ms+. By deploying your orchestrator on CoolVDS infrastructure located regionally, you slash that latency. Furthermore, you ensure that personal data on Norwegian citizens stays within the legal jurisdiction, simplifying your GDPR compliance strategy significantly.
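Verifying the latency claim takes two commands. The target below is `vg.no`, chosen purely as a well-known Norwegian endpoint for illustration; substitute whatever your users actually hit:

```shell
# Round-trip time from your server to a Norwegian vantage point
ping -c 10 vg.no

# Or see hop by hop where the milliseconds accumulate
mtr --report --report-cycles 10 vg.no
```

Run it from your current Frankfurt box and from a regionally hosted instance, and the difference stops being abstract.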
Conclusion
If you are a team of 50 engineers building the next Spotify, use Kubernetes. The complexity is the price of admission for the features you need. If you are a lean team focused on shipping code rather than managing `iptables` rules, stick with Docker Swarm for now. It is robust, simple, and gets the job done.
But regardless of the software, do not cripple it with slow hardware. Orchestrators crave IOPS. Feed them NVMe.
Ready to build your cluster? Deploy a high-performance KVM instance on CoolVDS in under 55 seconds and stop worrying about I/O wait times.