
Kubernetes vs. Docker Swarm: The 2018 Orchestration Battleground for Norwegian DevOps

It is April 2018. We are exactly one month away from the GDPR enforcement date (May 25th), and the panic in European tech hubs is palpable. If you are running infrastructure in Oslo, Bergen, or Trondheim, you aren't just worried about uptime anymore; you're worried about Datatilsynet knocking on your door because a rogue container replicated user data to a bucket in Virginia.

I have spent the last three weeks migrating a high-traffic e-commerce platform from a monolithic LAMP stack to microservices. The question wasn't whether we should use containers, but how to manage them without losing our minds. The industry is currently split into two distinct camps: the Google-backed juggernaut Kubernetes (now on version 1.10) and the native simplicity of Docker Swarm.

Most VPS providers lie to you. They sell you "vCPUs" that are heavily overcommitted. When you try to run a container orchestrator on top of that, you enter a world of pain. etcd is ruthless about disk latency. If your underlying storage chokes, your cluster loses quorum, and your production environment goes dark.
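
One quick way to see whether your provider is overcommitting CPU is to check the hypervisor "steal" time. This is a rough probe, assuming a Linux guest with the standard /proc/stat layout:

```shell
# Print the percentage of CPU time stolen by the hypervisor since boot.
# On a Linux guest, the 9th field of the "cpu" line in /proc/stat is steal.
awk '/^cpu /{total=0; for(i=2;i<=NF;i++) total+=$i; printf "steal: %.2f%%\n", ($9/total)*100}' /proc/stat
```

Anything consistently above a few percent means you are fighting your neighbors for cycles before your orchestrator even starts working.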

The Contenders

1. Kubernetes (K8s): The Enterprise Beast

Kubernetes has won the mindshare war. With the release of 1.10 last month, we finally got beta support for the Container Storage Interface (CSI) and CoreDNS, real progress on storage standardization and cluster DNS. But let's be honest: K8s is complex. It introduces a steep learning curve that can paralyze smaller teams.

However, for granular control, nothing beats it. Here is what a standard Deployment looked like in our staging environment this morning. Note the resource limits—absolutely critical to prevent the "noisy neighbor" effect inside your own cluster:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: checkout-service
  labels:
    app: checkout
spec:
  replicas: 3
  selector:
    matchLabels:
      app: checkout
  template:
    metadata:
      labels:
        app: checkout
    spec:
      containers:
      - name: checkout
        image: registry.coolvds.com/checkout:v2.4
        resources:
          limits:
            memory: "512Mi"
            cpu: "500m"
        ports:
        - containerPort: 8080
        env:
        - name: DB_HOST
          value: "10.244.0.5"
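
Rolling that manifest out is a two-command affair, assuming kubectl is pointed at your cluster and you have saved the YAML above as checkout-deployment.yaml (the filename is my own):

```shell
# Apply the Deployment and block until all 3 replicas are available.
kubectl apply -f checkout-deployment.yaml
kubectl rollout status deployment/checkout-service

# Verify the pods landed and see which nodes they were scheduled on.
kubectl get pods -l app=checkout -o wide
```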

2. Docker Swarm: The Pragmatic Choice

Swarm is built into the Docker engine. There is no extra binary to install. You initialize it, you join nodes, and you are done. For teams of fewer than 10 engineers, Swarm is often the better choice. It respects your docker-compose.yml logic without requiring a rewrite into K8s manifests.
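
Bootstrapping really is that short. On the first node you init, and Swarm prints the join command for the workers. A sketch, with placeholder IP and token:

```shell
# On the manager node (replace the IP with the node's private address):
docker swarm init --advertise-addr 10.0.0.1

# Swarm prints a "docker swarm join" command with a one-time token;
# run it on each worker node. The token below is a placeholder.
docker swarm join --token SWMTKN-1-<token> 10.0.0.1:2377

# Back on the manager, confirm every node registered:
docker node ls
```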

Deploying a stack in Swarm is deceptively simple:

# docker-compose.yml for Swarm
version: '3.3'
services:
  web:
    image: nginx:alpine
    deploy:
      replicas: 5
      restart_policy:
        condition: on-failure
    ports:
      - "80:80"
    networks:
      - webnet
networks:
  webnet:

Command to deploy:
$ docker stack deploy -c docker-compose.yml production_stack
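
Once deployed, Swarm gives you service-level visibility out of the box. Assuming the stack name above:

```shell
# List the services in the stack and their replica counts.
docker stack services production_stack

# Show which node each task (container) landed on.
docker stack ps production_stack

# Scale up without touching the YAML:
docker service scale production_stack_web=10
```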

The Hidden Bottleneck: Why Hardware Matters More Than Software

Here is the war story. Last year, we tried to run a three-node Kubernetes cluster on a budget provider using standard spinning rust (HDD) storage. The cluster was unstable. Pods would randomly crash. We spent weeks debugging network overlays (Flannel vs. Calico) thinking it was a software issue.

It wasn't. It was I/O Wait.

Kubernetes relies on etcd as its brain. etcd requires extremely low disk latency to commit state changes to its write-ahead log. If fsync takes too long, etcd heartbeats time out, leader elections churn, and the control plane starts marking perfectly healthy workers as dead.

Pro Tip: Check your etcd disk latency requirements. If your 99th percentile fsync duration is over 10ms, your cluster will become unstable. You cannot run serious container orchestration on shared HDD hosting.
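
You can get a crude read on sync-write latency with nothing but dd. etcd's own hardware guide recommends fio with --fdatasync=1 for the real measurement, but this quick probe (the file path is arbitrary) will expose a hopeless disk immediately:

```shell
# Write 500 small blocks, syncing each one to disk, and note the timing
# dd reports. ~2300 bytes approximates an etcd WAL entry. On NVMe this
# finishes in well under a second; on contended HDD it can take tens of
# seconds, and your cluster has no chance.
dd if=/dev/zero of=./fsync-probe bs=2300 count=500 oflag=dsync
rm -f ./fsync-probe
```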

This is why we standardized our infrastructure on CoolVDS. We use their KVM-based instances which provide true hardware virtualization, not just container-level isolation like OpenVZ. More importantly, the NVMe storage tiers on CoolVDS provide the IOPS required to keep etcd happy. When you are running a database inside a container (like MySQL or MongoDB), that NVMe speed is the difference between a 200ms page load and a 2s timeout.

Tuning MySQL for Containers (my.cnf)

If you are brave enough to containerize your database in 2018, you must tune the InnoDB buffer pool to respect the container limits. If MySQL tries to grab more RAM than the Docker cgroup allows, the OOM Killer will murder your database process instantly.

[mysqld]
# Ensure this is set to 70-80% of your container's memory limit
innodb_buffer_pool_size = 2G
innodb_log_file_size = 512M
innodb_flush_log_at_trx_commit = 2 # Speed over ACID strictness for non-financial data
skip-name-resolve # Avoid DNS lookups on every connection to reduce latency
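
The buffer pool above only makes sense if the container actually gets roughly 2.5-3 GB. Tie the two together explicitly at run time; the volume paths and password below are placeholders:

```shell
# Cap the container at 3g so a 2G buffer pool leaves headroom for
# connection buffers and the rest of mysqld. Without --memory, the
# cgroup limit is effectively unlimited and the tuning above does
# nothing to protect you from the OOM killer.
docker run -d --name db \
  --memory=3g --memory-swap=3g \
  -v /srv/mysql/conf.d:/etc/mysql/conf.d \
  -v /srv/mysql/data:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=<secret> \
  mysql:5.7
```

Setting --memory-swap equal to --memory disables swapping for the container, which keeps InnoDB latency predictable.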

The Norwegian Context: GDPR and Data Residency

With May 25th approaching, the "Cloud Act" in the US and GDPR in Europe are creating a legal minefield. Datatilsynet has been clear: you must know where your data lives. If you use a managed Kubernetes service from a US giant, can you guarantee that a snapshot won't be replicated to a non-GDPR compliant region?

Hosting on CoolVDS gives us certainty. The data centers are in Europe. The latency to NIX (Norwegian Internet Exchange) is negligible. For a Norwegian business, this isn't just about performance; it's about legal compliance. We keep the data within the EEA, on hardware we control, with strict firewall rules protecting the cluster API.

Comparison: Which one to pick?

Feature          | Kubernetes 1.10                             | Docker Swarm
Learning Curve   | Steep (weeks/months)                        | Low (hours/days)
Scalability      | 5,000+ nodes                                | ~1,000 nodes (practical limit)
Load Balancing   | Requires Ingress controller (Nginx/Traefik) | Built-in routing mesh
Storage          | CSI (beta in 1.10)                          | Volume plugins (basic)

Final Verdict

If you are a team of 50 developers building a microservices architecture that needs to scale to millions of users, bite the bullet and learn Kubernetes. The ecosystem is growing too fast to ignore.

But if you are a pragmatic team just trying to ship code before the GDPR deadline, Docker Swarm is a perfectly valid, robust choice. It works.

Regardless of the software, the hardware dictates your reality. Do not let high latency destroy your orchestration efforts. We deploy our clusters on CoolVDS because we need the raw NVMe performance and KVM stability to sleep at night. A saved euro on hosting is not worth a 3 AM outage.

Ready to test your cluster? Spin up a high-performance NVMe instance on CoolVDS today. Check your latency, verify your compliance, and be ready for May 25th.