
Surviving the Container Wars: Kubernetes vs. Docker Swarm vs. Mesos in the Post-Safe Harbor Era

It is January 2016, and if you are a systems administrator in Europe, you are likely fighting a war on two fronts. On one side, developers are handing you Docker images and demanding that they run in production exactly as they did on their MacBook Airs. On the other, the legal department is in a panic because the European Court of Justice invalidated the Safe Harbor agreement just three months ago, making data transfers to US-controlled clouds a regulatory minefield.

I have spent the last six months migrating a high-traffic media streaming platform from bare metal to a containerized infrastructure. I have seen etcd clusters disintegrate due to disk latency and I have debugged overlay networks until my eyes bled. The "Container Wars" are in full swing, and choosing the wrong orchestration tool today could cost you your weekend—or your job.

Let’s cut through the hype. We are looking at the three main contenders for managing container clusters right now: Kubernetes (v1.1), Docker Swarm (Standalone), and Apache Mesos. We will also look at why the underlying hardware—specifically KVM-based VPS with low latency to NIX (Norwegian Internet Exchange)—is the only way to run these reliably in the Nordic market.

The Contenders: A Field Report

1. Kubernetes (The Google Juggernaut)

Kubernetes (K8s) is the heavyweight. It is backed by Google’s experience running Borg, and with version 1.1 released late last year, it is finally starting to feel like a viable product for those of us not running a search engine. It introduces concepts like Pods, ReplicationControllers, and Services.

The Good: It is self-healing. If a node dies, K8s reschedules the pods elsewhere. The service discovery is built-in.

The Bad: The learning curve is vertical. Setting up a high-availability master requires intimate knowledge of etcd, flannel/Weave for networking, and PKI infrastructure. It feels over-engineered for simple web apps.

Here is what a basic pod definition looks like in K8s v1.1. Note the verbosity compared to a simple Docker run command:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-frontend
  labels:
    app: web
spec:
  containers:
  - name: nginx
    image: nginx:1.9
    ports:
    - containerPort: 80
    resources:
      limits:
        cpu: "500m"
        memory: "128Mi"
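A bare Pod is not rescheduled if its node dies; in v1.1 you wrap the pod template in a ReplicationController so the scheduler maintains the replica count for you. A minimal sketch (names are illustrative):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-frontend-rc
spec:
  replicas: 3          # K8s will keep exactly 3 pods running
  selector:
    app: web           # manages any pod carrying this label
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.9
        ports:
        - containerPort: 80
```

Kill a node and the controller recreates its pods elsewhere, which is the "self-healing" behavior mentioned above.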

2. Docker Swarm (The Native Choice)

Do not confuse this with the rumored integration coming in future Docker versions. Right now, Swarm is a separate tool that turns a pool of Docker hosts into a single, virtual Docker host. You use the standard Docker API.

The Good: Simplicity. If you know docker run, you know Swarm. It integrates perfectly with Docker Compose.

The Bad: It is less mature regarding scheduling logic. If you need complex affinity rules (e.g., "don't put these two DB replicas on the same rack"), Swarm is currently weaker than K8s or Mesos.
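To give a sense of how low the setup cost is, here is the token-based bootstrap flow for standalone Swarm. The Docker Hub discovery token is convenient for testing only; production clusters typically use Consul or etcd for discovery, and the IPs and token here are placeholders:

```shell
# Create a cluster ID via Docker Hub's hosted discovery service (testing only)
docker run --rm swarm create
# prints a cluster token

# On the manager node, start the Swarm manager on port 4000
docker run -d -p 4000:4000 swarm manage token://<cluster_token>

# On each worker, advertise the local Docker daemon to the cluster
docker run -d swarm join --advertise=<node_ip>:2375 token://<cluster_token>

# Point your Docker client at the manager and use the pool like one big host
docker -H tcp://<manager_ip>:4000 info
```

From here, every standard docker run or docker-compose command against the manager endpoint is scheduled across the pool.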

3. Apache Mesos + Marathon (The Enterprise Beast)

Mesos abstracts CPU, memory, storage, and other compute resources away from machines, enabling fault-tolerant and elastic distributed systems. Marathon is the framework that runs on top of Mesos to orchestrate containers.

The Reality: Unless you are Twitter or Airbnb, or you have a dedicated team of five engineers just to manage the cluster, this is likely overkill. It is rock solid, but the operational overhead is massive.
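For comparison, deploying a container through Marathon means POSTing a JSON app definition to its REST API (the /v2/apps endpoint; the hostname and values here are illustrative):

```shell
curl -X POST http://marathon.example:8080/v2/apps \
  -H "Content-Type: application/json" \
  -d '{
    "id": "/web-frontend",
    "instances": 2,
    "cpus": 0.5,
    "mem": 128,
    "container": {
      "type": "DOCKER",
      "docker": {
        "image": "nginx:1.9",
        "network": "BRIDGE",
        "portMappings": [{ "containerPort": 80, "hostPort": 0 }]
      }
    }
  }'
```

Powerful, yes, but note that you are now operating ZooKeeper, Mesos masters, Mesos agents, and Marathon just to get to this point.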

The Infrastructure Bottleneck: Why Your VPS Matters

Orchestration tools are useless if the underlying metal is weak. This is where many DevOps engineers fail. They spin up containers on budget OpenVZ VPS instances and wonder why random processes get killed or why database latency spikes.

The Kernel Problem

Containers share the host kernel. OpenVZ and LXC are themselves container-based virtualization, so running Docker inside them means running containers inside containers, on a kernel you do not control. This often leads to issues with cgroups and with the kernel modules required for advanced networking (like Weave or Flannel).

Pro Tip: Always use KVM (Kernel-based Virtual Machine) for container hosts. KVM provides full hardware virtualization. Each VPS has its own kernel, allowing you to load the specific modules Docker needs without begging your hosting provider for permission.
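A quick sanity check before you commit to a host: confirm that you control the kernel version and that the pieces Docker's storage and resource-limit features depend on are actually present. The overlay filesystem is the one that most often bites on shared-kernel VPSes:

```shell
# Kernel version: overlay wants 3.18+, and on KVM you choose this yourself
uname -r

# Is the overlay filesystem available (built-in or loadable)?
grep -w overlay /proc/filesystems || echo "overlay not available; try: modprobe overlay"

# Are cgroup hierarchies mounted? Docker needs these for CPU/memory limits
mount | grep cgroup || echo "no cgroup mounts found"
```

On an OpenVZ host, the first command returns whatever ancient kernel the provider runs, and there is nothing you can do about it. On KVM, you just upgrade.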

Performance Warning: Distributed systems like etcd (used by Kubernetes) are extremely sensitive to disk write latency. If fsync takes too long, the cluster loses consensus, and your control plane crashes. We recently benchmarked CoolVDS KVM instances and found the SSD backed storage consistently delivered low-latency writes required for a stable etcd quorum, unlike shared spinning-disk hosts.
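You can approximate this sensitivity with a crude synchronous-write test using dd. As a rough rule of thumb (our own, not an official etcd number): if 100 synced 512-byte writes take more than about a second, average fsync latency is over 10 ms and the disk will struggle to hold a stable quorum:

```shell
# 100 small writes, each forced to disk (dsync); dd reports the total time.
# SSD-backed storage should finish this well under a second.
dd if=/dev/zero of=fsync-test bs=512 count=100 oflag=dsync
rm -f fsync-test
```

Run it on the volume where etcd keeps its data directory, not on tmpfs, or the numbers are meaningless.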

Configuration: Tuning for Performance

When running Docker on a production VPS, the default settings are rarely sufficient. Here is a configuration snippet for /etc/default/docker (on Ubuntu 14.04 LTS) to ensure you are using the correct storage driver. In 2016, aufs is still the common default, but overlayfs is the future as kernel 3.18+ becomes standard. Note that stock 14.04 ships with kernel 3.13, so you will need an HWE kernel (e.g. linux-generic-lts-vivid, which gives you 3.19) before overlay will work.

# Use the overlay storage driver if your kernel supports it (3.18+)
# This is faster than aufs or device mapper in many workloads.
DOCKER_OPTS="--storage-driver=overlay --dns 8.8.8.8 --dns 8.8.4.4"
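Changes to /etc/default/docker take effect only after a daemon restart (upstart on 14.04), and it is worth verifying that the driver actually took:

```shell
# Restart the daemon so /etc/default/docker is re-read
sudo service docker restart

# Confirm the active storage driver
docker info | grep -i 'storage driver'
```

If the output still says aufs or devicemapper, check that your kernel supports overlay before blaming the config.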

For your database containers, never rely on the container filesystem. Always mount a volume that maps to the host's high-speed storage. If you are on CoolVDS, you are benefiting from local SSD/NVMe speeds, so map it directly:

docker run -d \
  --name mariadb-production \
  -v /srv/mysql/data:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=secure_pass \
  mariadb:10.1

The Nordic Context: Latency and Law

We cannot ignore the geopolitical landscape of 2016. With the Safe Harbor agreement invalidated, storing customer data on US-owned servers (AWS, Google Cloud, Azure) is legally risky for Norwegian businesses. The Norwegian Data Protection Authority (Datatilsynet) is watching closely.

By hosting your container cluster on CoolVDS, you achieve two things:

  1. Data Sovereignty: Your data stays in Norway/Europe, keeping you on the right side of the Personal Data Act and Datatilsynet's guidance.
  2. Low Latency: If your users are in Oslo or Bergen, routing traffic through Frankfurt or London adds unnecessary milliseconds. CoolVDS offers direct peering at NIX (Norwegian Internet Exchange).

Network Latency Test

When connecting your application nodes, every millisecond counts. Here is a quick check you should run between your nodes:

# Install iputils-ping if not present
apt-get install -y iputils-ping

# Check latency to the local gateway and external DNS
ping -c 4 192.168.1.1
ping -c 4 8.8.8.8

On a proper infrastructure like CoolVDS, internal latency between nodes should be sub-millisecond, and round trips to Nordic ISPs should stay in the single-digit milliseconds.

Conclusion: Start Small, Scale Smart

If you are just starting, do not drown in Kubernetes YAML files just because it is trendy. Docker Swarm or even simple Docker Compose setups are perfectly valid for 90% of use cases in 2016.
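To put that in concrete terms, the two-container stack from earlier in this article fits in a few lines of docker-compose.yml (Compose v1 syntax; the paths and password are placeholders you should change):

```yaml
web:
  image: nginx:1.9
  ports:
    - "80:80"
  links:
    - db
db:
  image: mariadb:10.1
  volumes:
    - /srv/mysql/data:/var/lib/mysql   # keep data on host SSD, not in the container
  environment:
    MYSQL_ROOT_PASSWORD: secure_pass
```

One docker-compose up and you have a reproducible stack, with none of the etcd, PKI, or overlay-network ceremony.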

However, regardless of the orchestration tool, the foundation remains the same: you need root access, a dedicated kernel (KVM), and fast I/O. Do not let a cheap VPS bottleneck your container revolution.

Ready to build your cluster? Deploy a KVM-based instance on CoolVDS today. With our SSD storage and NIX connectivity, your containers will run the way they were meant to.