Docker in Production: Surviving the Orchestration Wars of 2015
It has been a little over two and a half years since Docker burst onto the scene, turning our predictable world of VM provisioning upside down. If you are like me, you have spent the last six months migrating monoliths into containers, celebrating the death of "dependency hell." But now you are facing a new, much uglier beast: Orchestration.
Running `docker run` on your laptop is fun. Managing 50 microservices across three different nodes while trying to keep persistence layers intact? That is where the dream turns into a 3 AM pager alert. With the Safe Harbor agreement ruled invalid by the ECJ just last month (October 2015), the pressure to host robust, compliant infrastructure right here in Europe—specifically Norway—has never been higher. We can't just dump everything into a US-managed cloud anymore without looking over our shoulders.
Today, we look at the current state of container orchestration: the heavyweights, the upstarts, and the hardware you need to actually run them without melting your CPU.
The Contenders: Kubernetes vs. Swarm vs. Mesos
The ecosystem is fragmented. Everyone wants to be the OS of the data center. Here is the reality of the landscape as of November 2015.
1. Kubernetes (The Google Way)
Version 1.0 dropped in July, and v1.1 is fresh out of the oven. Kubernetes is powerful. It brings concepts like Pods, Replication Controllers, and Services that abstraction lovers adore. But let's be honest: the learning curve is a vertical wall. Setting up an `etcd` cluster manually requires nerves of steel and a deep understanding of distributed consensus.
The Good: Self-healing is magic. If a node dies, Kubernetes reschedules the pods elsewhere. The new horizontal pod autoscaling in v1.1 is promising.
The Bad: Complexity. The sheer number of moving parts (API server, scheduler, controller manager, kubelet, proxy) means debugging is painful.
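Speaking of nerves of steel: here is roughly what a static three-node etcd 2.x bootstrap looks like. The `10.0.1.x` addresses and the `infra*` member names are placeholders for your own private network, so adapt them before copying anything.

```bash
# Run this on node infra0 (10.0.1.10); repeat on infra1 and infra2 with
# their own --name, peer URL, and client URLs. All addresses are placeholders.
etcd --name infra0 \
  --initial-advertise-peer-urls http://10.0.1.10:2380 \
  --listen-peer-urls http://10.0.1.10:2380 \
  --listen-client-urls http://10.0.1.10:2379,http://127.0.0.1:2379 \
  --advertise-client-urls http://10.0.1.10:2379 \
  --initial-cluster-token k8s-etcd-oslo \
  --initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380 \
  --initial-cluster-state new
```

Run `etcdctl cluster-health` afterwards; if all three members do not report healthy, fix that before you even install the Kubernetes binaries.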
2. Docker Swarm (The Native Way)
Swarm is the native clustering tool for Docker. It turns a pool of Docker hosts into a single, virtual Docker host. Because it speaks the standard Docker API, you can use existing tools like Docker Compose against a Swarm cluster seamlessly.
The Good: Simplicity. If you know Docker, you know Swarm.
The Bad: It's still maturing. Scheduling strategies are basic compared to Kubernetes or Mesos. It feels less like an "application platform" and more like a "container scheduler."
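To give you a feel for that simplicity, here is a rough sketch of standing up a Swarm cluster using the hosted token discovery. The IPs are placeholders, and for anything serious you would swap the token backend for Consul or etcd.

```bash
# Generate a cluster ID using Docker Hub's hosted discovery
# (fine for experiments; use consul:// or etcd:// discovery in production).
TOKEN=$(docker run --rm swarm create)

# On each worker node (placeholder IP shown), join the cluster.
# The node's Docker daemon must be listening on tcp://0.0.0.0:2375.
docker run -d swarm join --addr=10.0.1.21:2375 token://$TOKEN

# On the manager node, expose the Swarm API on port 4000.
docker run -d -p 4000:2375 swarm manage token://$TOKEN

# Point your regular Docker client (or docker-compose) at the manager.
docker -H tcp://10.0.1.20:4000 info
docker -H tcp://10.0.1.20:4000 run -d --name web nginx:1.9
```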
3. Apache Mesos + Marathon (The Enterprise Giant)
Mesos has been around longer than Docker. It scales to tens of thousands of nodes (Twitter runs on it). Marathon is the framework that runs on top to manage long-running applications (like web servers).
The Good: Proven scale. It can manage non-container workloads too.
The Bad: It is heavy. Setting up ZooKeeper, Mesos masters, and slaves for a small cluster is overkill. It is built for data-center scale, not startup scale.
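If you do go the Mesos route, application definitions are plain JSON posted to Marathon's REST API. A minimal sketch for a Dockerized nginx app is below; the Marathon hostname and the app id are placeholders.

```bash
# Submit a Dockerized nginx app to Marathon (host/port are placeholders).
curl -X POST http://marathon.example.com:8080/v2/apps \
  -H "Content-Type: application/json" \
  -d '{
    "id": "/web/nginx",
    "instances": 3,
    "cpus": 0.5,
    "mem": 128,
    "container": {
      "type": "DOCKER",
      "docker": {
        "image": "nginx:1.9",
        "network": "BRIDGE",
        "portMappings": [{ "containerPort": 80, "hostPort": 0 }]
      }
    }
  }'
```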
The Infrastructure Bottleneck: Why Your VPS Matters
Here is the dirty secret orchestration tutorials don't tell you: Latency kills distributed systems.
When you have a Kubernetes master node in Oslo trying to talk to worker nodes, or an `etcd` cluster syncing state, network latency must be minimal. The default heartbeat interval in etcd is only 100 ms, so sustained jitter in that range can trigger spurious leader elections and leave the cluster flapping instead of scheduling work. This is where the underlying metal counts.
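Before blaming the software, measure the link, and if you genuinely cannot get the latency down, relax etcd's Raft timings rather than living with random re-elections. A quick sketch (the peer IP is a placeholder, and the flags are the etcd 2.x names):

```bash
# Check round-trip time and jitter between cluster members
# (the peer IP is a placeholder).
ping -c 20 10.0.1.11

# If the numbers are ugly, widen the Raft timings on every etcd member
# by appending these flags (milliseconds) to the bootstrap command above.
# The etcd 2.x defaults are 100/1000; keep the election timeout at
# roughly 10x the heartbeat interval.
ETCD_TUNING="--heartbeat-interval 200 --election-timeout 2000"
```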
War Story: I recently debugged a Swarm cluster that kept dropping nodes. The culprit wasn't the config; it was "noisy neighbor" syndrome on a budget VPS provider. Their storage I/O was saturated by another client, causing Docker daemon timeouts. We migrated to CoolVDS, where the KVM virtualization guarantees dedicated resources, and the stability issues vanished instantly.
The Storage Problem
Docker images are layers. Pulling and extracting these layers is I/O intensive. If you are running a database inside a container (risky, but people do it), you need IOPS. Most providers are still pushing spinning rust (HDD) or low-grade SATA SSDs shared among hundreds of users.
For a production container cluster, you need NVMe storage or high-end enterprise SSDs. CoolVDS has been aggressive in rolling out high-performance storage stacks that essentially eliminate I/O wait times during image pulls.
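Do not take a provider's word for it; benchmark the volume Docker actually writes to before you commit. A rough sketch with `dd` and `fio` (adjust the path if your data directory is not the default `/var/lib/docker`):

```bash
# Crude sequential-write test against the Docker data directory.
dd if=/dev/zero of=/var/lib/docker/io-test bs=1M count=1024 oflag=direct
rm /var/lib/docker/io-test

# Random 4k reads are closer to what layer extraction and databases do.
fio --name=randread --directory=/var/lib/docker --rw=randread \
    --bs=4k --size=1G --iodepth=32 --ioengine=libaio --direct=1 \
    --runtime=60 --time_based
```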
Configuration: Getting Kubernetes v1.1 Running
If you are brave enough to choose Kubernetes, do not try to run it on OpenVZ containers, where you are stuck with whatever kernel the host runs. You need a modern kernel (3.10+) and full control over iptables, which in practice mandates a KVM-based solution like CoolVDS.
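A thirty-second pre-flight check on a fresh KVM instance saves hours of head-scratching later; roughly:

```bash
uname -r                          # Docker wants kernel 3.10 or newer
lsmod | grep -E 'overlay|aufs'    # which union-filesystem modules are loaded, if any
iptables -L -n | head             # confirm you can actually manage iptables
docker version                    # verify the engine is installed and reasonably current
```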
Here is a snippet of a Replication Controller definition (`nginx-rc.yaml`) we used for a recent project. Note the specific resource limits—never deploy without them:
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-controller
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.9
        ports:
        - containerPort: 80
        resources:
          limits:
            cpu: "500m"
            memory: "128Mi"
```
To launch this, you need a stable connection to your API server. We rely on low-latency peering at NIX (the Norwegian Internet Exchange) so that `kubectl` commands apply near-instantly, even when managing clusters from remote offices.
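Assuming `kubectl` is already pointed at your API server, launching and scaling the controller looks something like this (flags as of kubectl 1.1):

```bash
# Create the replication controller and check that the pods come up.
kubectl create -f nginx-rc.yaml
kubectl get rc
kubectl get pods

# Scale out once the first three replicas are healthy.
kubectl scale rc nginx-controller --replicas=5

# Expose the controller as a cluster-internal Service on port 80.
kubectl expose rc nginx-controller --port=80 --name=nginx
```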
Data Sovereignty: The "Safe Harbor" Fallout
We cannot ignore the legal landscape. With the invalidation of Safe Harbor, storing customer data on US-controlled servers is a compliance minefield. The Norwegian Data Protection Authority (Datatilsynet) is clear about the responsibilities of data controllers.
Hosting your orchestration cluster on VPS infrastructure in Norway isn't just about speed anymore; it's about legal survival. By keeping your persistent volumes and database containers on CoolVDS servers located physically in Oslo, you drastically reduce your exposure to cross-border data transfer risks.
Summary: Which one to choose?
| Feature | Kubernetes | Docker Swarm | Mesos/Marathon |
|---|---|---|---|
| Setup Difficulty | High | Low | Very High |
| Maturity (2015) | Rising Fast | Early Days | Mature |
| Best Use Case | Microservices | Simple Clustering | Big Data / Hybrid |
| Infrastructure Need | KVM / Dedicated | KVM | Bare Metal / KVM |
For most teams in 2015, Docker Swarm is the easiest entry point, but Kubernetes is where the industry is heading. Whichever you choose, remember that orchestration adds overhead. It demands CPU cycles and network bandwidth just to keep the lights on.
Don't cripple your shiny new cluster with budget hosting. You need KVM isolation, predictable I/O, and low latency. That is why for our mission-critical deployments, we provision CoolVDS instances. The combination of root access, custom kernel support, and DDoS protection gives us the peace of mind to focus on the code, not the plumbing.
Ready to build your cluster? Spin up a high-performance KVM instance in Oslo today. Deploy on CoolVDS now.