Taming the Docker Whale: Orchestration Showdown 2015
It is 3:00 AM. PagerDuty is screaming because your hand-rolled docker run scripts have drifted out of sync across three nodes. The database container can't see the web container. Sound familiar? If you are still managing Docker containers with shell scripts and SSH loops, you are doing it wrong.
We are seeing a massive shift in 2015. While configuration management tools like Chef and Puppet dominated the last five years, the container revolution demands something more dynamic: orchestration. But the ecosystem is fragmented. Do you go with the battle-tested Apache Mesos? The Google-backed newcomer Kubernetes? Or the lightweight CoreOS Fleet?
I have spent the last month deploying these stacks on high-performance infrastructure here in Norway. Here is the unvarnished truth about what actually works in production.
The Contenders
1. Apache Mesos + Marathon: The Heavyweight Champion
If you are running Twitter-scale infrastructure, this is your answer. Mesos abstracts CPU, memory, storage, and other compute resources away from machines, letting you program against your datacenter like it is a single pool of resources.
The Good: It is rock solid. It handles thousands of nodes without blinking.
The Bad: The learning curve is a cliff. Setting up ZooKeeper (required for high availability) is a headache if you haven't done it before.
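For reference, an HA control plane means a three-node ZooKeeper ensemble with every Mesos master and slave pointed at the same zk:// URL. If you use the Mesosphere packages, that boils down to two small files; hostnames below are placeholders:

# /etc/mesos/zk -- read by both mesos-master and mesos-slave
zk://zk1.example.no:2181,zk2.example.no:2181,zk3.example.no:2181/mesos

# /etc/mesos-master/quorum -- a majority of your masters, e.g. 2 of 3
2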
Pro Tip: When configuring the Mesos slave, ensure you isolate resources correctly to prevent a single container from starving the host OS. Add this to your /etc/mesos-slave/isolation file: cgroups/cpu,cgroups/mem
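With the Mesosphere packages the flag is just a one-line file on each slave; something like this (the restart command assumes Ubuntu 14.04 with upstart):

# Enable cgroup-based CPU and memory isolation for containers
echo 'cgroups/cpu,cgroups/mem' > /etc/mesos-slave/isolation
service mesos-slave restart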
2. Kubernetes: The Rising Star (Beta)
Google open-sourced this last year, and while it is technically still pre-v1.0, the "Pod" concept is brilliant. Instead of managing single containers, you manage logical groups of containers that share an IP and storage.
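To make the Pod idea concrete, here is a minimal manifest sketch. I am writing against the current v1beta3 API; the schema is still moving before 1.0, and the names and images below are just placeholders:

apiVersion: v1beta3
kind: Pod
metadata:
  name: web-cache
  labels:
    app: web-cache
spec:
  containers:
    # Both containers share the Pod's IP, so the sidecar can reach
    # nginx on localhost without any overlay hop.
    - name: nginx
      image: nginx:1.8
      ports:
        - containerPort: 80
    - name: health-poller
      image: busybox
      command: ["sh", "-c", "while true; do wget -q -O- http://localhost:80/ > /dev/null; sleep 10; done"]

Launch it with kubectl create -f web-cache.yaml and Kubernetes keeps both containers co-scheduled on the same node.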
The Reality: Networking is the pain point here. Weave and Flannel are improving, but overlay networks introduce latency. If your underlying VPS has slow I/O or suffers from heavy CPU steal, your Kubernetes cluster will crawl.
3. CoreOS Fleet: The Pragmatic Choice
For many of our clients at CoolVDS, Fleet is the sweet spot. It treats your cluster as a single init system. If you know systemd, you know Fleet.
Here is a unit file we use to deploy a highly available Nginx cache:
[Unit]
Description=Nginx Cache
After=docker.service
Requires=docker.service
[Service]
TimeoutStartSec=0
ExecStartPre=-/usr/bin/docker kill nginx-lb
ExecStartPre=-/usr/bin/docker rm nginx-lb
ExecStartPre=/usr/bin/docker pull coolvds/nginx-custom:1.8
ExecStart=/usr/bin/docker run --name nginx-lb -p 80:80 coolvds/nginx-custom:1.8
ExecStop=/usr/bin/docker stop nginx-lb
[X-Fleet]
Global=true
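Because of Global=true, Fleet runs this unit on every machine in the cluster instead of picking one. Assuming the file is saved as nginx-lb.service, rolling it out looks like this:

# Push the unit into the cluster and start it everywhere
fleetctl submit nginx-lb.service
fleetctl start nginx-lb.service

# Verify that every node reports the unit as active/running
fleetctl list-units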
The Hardware Bottleneck: Why Your Host Matters
Here is what the documentation won't tell you. Container orchestration creates significant I/O overhead. Between the Docker daemon writing logs, the overlay network encapsulating packets, and the orchestrator's state checks, a standard HDD-based VPS will choke.
We ran bonnie++ benchmarks on standard SATA VPS providers versus our KVM-based NVMe instances. The difference isn't just speed; it's stability. When an orchestrator like Mesos tries to reschedule a failed task, it needs instant disk access. If the node is stuck in iowait, the cluster marks it as dead and kicks off a rescheduling storm.
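Before you trust a node with an orchestrator, benchmark it yourself. Something along these lines is a reasonable sanity check (paths and sizes are examples; size the test file at roughly twice the RAM so the page cache can't hide a slow disk):

# Sequential throughput and seek test against the Docker data directory
bonnie++ -d /var/lib/docker -s 8G -n 0 -u root

# Watch disk latency and %iowait while the cluster is under load;
# sustained double-digit iowait is what turns into "node lost" alerts
iostat -x 5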
This is why we built CoolVDS on pure KVM with NVMe storage. We don't oversell resources because we know the kernel needs them. If you are serious about Docker, you cannot afford "noisy neighbors" stealing your CPU cycles.
Data Sovereignty in Norway
We also need to talk about where your bits actually live. With the US Safe Harbor framework under increasing scrutiny by European courts, relying on US-based cloud providers is a risk for Norwegian businesses. Datatilsynet (The Norwegian Data Protection Authority) is becoming stricter about how personal data under the Personal Data Act (Personopplysningsloven) is handled.
Hosting locally in Oslo isn't just about lower latency to the NIX (Norwegian Internet Exchange), though shaving 30ms off your request time is nice. It is about legal compliance. By keeping your cluster on CoolVDS hardware physically located in Norway, you simplify your compliance map significantly.
Verdict: What should you use?
| Feature | Mesos | Kubernetes | Fleet |
|---|---|---|---|
| Maturity | High (Enterprise) | Medium (Beta) | Medium |
| Setup Difficulty | Hard | Medium | Easy |
| Best For | Huge Clusters | Microservices | Systemd lovers |
If you are building a massive distributed system today, Mesos is the safe bet. If you want to future-proof yourself for where the industry is going, start experimenting with Kubernetes. If you just want to ship code now without the headache, use Fleet.
Whatever you choose, don't let slow hardware kill your orchestration. Deploy a high-performance KVM instance on CoolVDS in 55 seconds and give your containers the room they need to breathe.