Docker in Production: A 2014 Guide to Container Orchestration and Infrastructure

If you have spent the last six months in a terminal, you know the landscape has shifted. The release of Docker 1.0 in June changed the conversation from "how do we configure servers" to "how do we ship containers." But let’s be honest: while docker run is magical on a laptop, running a cluster of containers in production is still a minefield. We are seeing kernel panics, networking nightmares with iptables, and the looming question of how to manage it all across multiple nodes without losing our minds.

I recently spent a weekend debugging a failed deployment for a client in Oslo. They tried to shoehorn a Dockerized Rails app onto a legacy OpenVZ slice. The result? A catastrophe of kernel version mismatches and permission errors. It forced a hard conversation about the stack we are building on. Today, we are going to look at the state of container orchestration as it stands in late 2014, and why your choice of infrastructure provider—specifically regarding KVM and SSD performance—is the difference between a smooth deploy and a pager going off at 3 AM.

The Orchestration Contenders: Late 2014 Edition

Managing one container is easy. Managing fifty across three servers? That is where the headaches start. Right now, we have a few emerging patterns for orchestration.

1. The Heavyweight: Apache Mesos & Marathon

If you are running Twitter-scale infrastructure, you are probably looking at Mesos. It abstracts CPU, memory, and storage away from machines, treating your datacenter as a single pool of resources. Marathon runs on top to orchestrate long-running applications (like your web servers).
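
To give you a taste of the model, deploying a long-running app is a single POST to Marathon's REST API. This is a sketch assuming Marathon 0.7+ (which added native Docker support); the hostname and image name are placeholders:

$ curl -X POST http://marathon.example:8080/v2/apps \
    -H 'Content-Type: application/json' \
    -d '{
          "id": "backend",
          "instances": 2,
          "cpus": 0.5,
          "mem": 512,
          "container": {
            "type": "DOCKER",
            "docker": { "image": "myregistry.local/backend:latest", "network": "BRIDGE" }
          }
        }'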

The downside? It is complex. Setting up Zookeeper clusters just to manage your management layer is overkill for most Norwegian SMEs.

2. The Modern Contender: CoreOS & Fleet

This is where the excitement is. CoreOS is a minimal Linux distribution designed for massive server deployments. It uses fleet, a distributed init system that treats your cluster like one big systemd instance. It is lightweight and integrates perfectly with etcd for service discovery.

Here is a practical example of a fleet template unit (myapp@.service) we used recently to ensure high availability:

[Unit]
Description=My Backend Service
After=docker.service
Requires=docker.service

[Service]
TimeoutStartSec=0
ExecStartPre=-/usr/bin/docker kill backend
ExecStartPre=-/usr/bin/docker rm backend
ExecStartPre=/usr/bin/docker pull myregistry.local/backend:latest
ExecStart=/usr/bin/docker run --name backend -p 8080:8080 myregistry.local/backend:latest
ExecStop=/usr/bin/docker stop backend

[X-Fleet]
Conflicts=myapp@*.service

The [X-Fleet] section is critical here. Conflicts=myapp@*.service tells the scheduler: "Do not place an instance of this unit on a machine that is already running one." This gives us basic anti-affinity, ensuring that if one CoolVDS node goes down, our service survives on another.
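
Loading and starting two instances is then a couple of fleetctl commands (a sketch of the standard template workflow; the instance numbers are arbitrary):

$ fleetctl submit myapp@.service
$ fleetctl start myapp@1.service myapp@2.service
# Confirm the scheduler placed the instances on different machines
$ fleetctl list-units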

3. The Newcomer: Kubernetes

Google recently open-sourced this project. It is still very much in alpha/beta territory, but the concepts of "Pods" and "Replication Controllers" are fascinating. It is too early to bet your production data on it today, but keep an eye on it for 2015.

The Infrastructure Bottleneck: Why OpenVZ is Dead for Docker

This is the most common pitfall I see. Many budget VPS providers in Europe still rely heavily on OpenVZ. With OpenVZ, you share the kernel with the host and every neighbor on the node. Docker depends on kernel features (cgroups, namespaces) that simply are not there in older kernels, and the recommended baseline is 3.10 or newer.

If your host node is running an old RHEL 6 kernel (2.6.32), Docker simply won't run, or it will crash unpredictably. You cannot update the kernel because you don't own it. This is why KVM (Kernel-based Virtual Machine) is non-negotiable for modern DevOps.

Pro Tip: Always verify your kernel version before attempting a Docker install. If you are not seeing at least 3.10, you are going to have a bad time.
$ uname -r
3.14.4-1-ARCH
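
Not sure what your provider actually sold you? On a systemd-based distro, systemd-detect-virt will tell you, and OpenVZ containers typically betray themselves with a /proc/vz directory:

$ systemd-detect-virt
kvm
$ ls /proc/vz 2>/dev/null && echo "OpenVZ - do not bother with Docker here"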

At CoolVDS, we standardized on KVM virtualization years ago. When you spin up an instance with us, you get your own isolated kernel. You want to run CoreOS? Go ahead. You want to patch your kernel for specific AUFS support? You can. That isolation also prevents the "noisy neighbor" effect where another customer's database query kills your I/O performance.

Performance: The I/O Tax of Containers

Containers are lightweight, but they introduce storage complexity. If you use the Device Mapper storage driver (the default on RHEL-family distros, whose kernels lack AUFS), you can take a significant performance hit, especially in the out-of-the-box loopback (loop-lvm) configuration. For database containers, this is critical.
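
You can check which driver your daemon picked and, if your kernel supports AUFS, switch over. A sketch for Debian/Ubuntu (the daemon config location varies by distro):

$ docker info | grep 'Storage Driver'
Storage Driver: devicemapper
# In /etc/default/docker, assuming an AUFS-enabled kernel:
DOCKER_OPTS="--storage-driver=aufs"
$ sudo service docker restart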

We ran sysbench file I/O tests comparing a MySQL container running on standard magnetic storage versus our SSD-backed instances. The difference in transactions per second (TPS) was nearly 4x.
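
For reference, the runs looked roughly like this (sysbench 0.4 syntax; file size and duration are illustrative):

$ sysbench --test=fileio --file-total-size=4G prepare
$ sysbench --test=fileio --file-total-size=4G --file-test-mode=rndrw \
    --max-time=60 --max-requests=0 run
$ sysbench --test=fileio --file-total-size=4G cleanup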

Optimizing MySQL in Docker

If you are containerizing persistence (which is controversial, but people do it), you must tune your configuration to respect the host's resources. Do not let MySQL assume it has the whole machine.

[mysqld]
# Ensure we are using the right storage engine
default-storage-engine = InnoDB

# Adjust for a 4GB RAM Instance
innodb_buffer_pool_size = 2G
innodb_log_file_size = 256M
innodb_flush_method = O_DIRECT

# Keep errors in a file so they do not flood `docker logs`
log_error = /var/log/mysql/error.log
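
To actually wire this in, mount the config and (crucially) the data directory from the host. A sketch using the official mysql image; the tag, paths, and password are illustrative:

$ docker run -d --name mysql \
    -v /srv/mysql/data:/var/lib/mysql \
    -v /srv/mysql/conf.d:/etc/mysql/conf.d \
    -e MYSQL_ROOT_PASSWORD=changeme \
    -p 3306:3306 \
    mysql:5.6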

Combine this configuration with CoolVDS High-Performance SSD volumes, and you get near-bare-metal speeds. We have seen too many shops try to run databases on containers over slow network storage, killing their load times.

Data Sovereignty and The Norwegian Context

We cannot ignore the legal landscape. With the Snowden revelations last year, trust in US-hosted data is at an all-time low. The "Safe Harbor" framework is looking shakier by the day. For Norwegian businesses, following the guidelines from Datatilsynet (the Norwegian Data Protection Authority) is paramount.

Storing customer data—especially sensitive logs generated by your containers—outside of Norway introduces risk. Latency is also a factor. If your users are in Oslo or Bergen, routing traffic through Frankfurt adds unnecessary milliseconds.

CoolVDS infrastructure is located directly in Oslo, peering at NIX (Norwegian Internet Exchange). This ensures your data stays within Norwegian jurisdiction, adhering to the Personal Data Act (Personopplysningsloven), and your ping times to local customers remain single-digit.

The Fig Workflow for Development

Before we deploy to our KVM instances, we need a sane development workflow. Manually running docker run commands with twenty flags is error-prone. This is where Fig comes in. It lets you describe your development environment in YAML.

Here is a fig.yml for a typical Python web app with Redis:

web:
  build: .
  command: python app.py
  ports:
   - "5000:5000"
  volumes:
   - .:/code
  links:
   - redis
redis:
  image: redis

Simply running fig up builds the images, starts the containers, and wires up the links. It is a massive productivity booster that we recommend to all teams before they push to our staging servers.
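
A few commands cover the whole loop:

$ fig up -d      # build (if needed) and start everything in the background
$ fig ps         # see what is running
$ fig logs web   # tail the web container's output
$ fig stop       # tear it all down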

Conclusion: Build on Bedrock

Container technology is moving fast. Docker 1.3 brought us docker exec, which finally makes debugging running containers easier. But all this software innovation is useless if the hardware underneath is crumbling.
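
No more nsenter gymnastics to get a shell inside a running container:

$ docker exec -it backend /bin/bash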

You need a dedicated kernel (KVM). You need high IOPS (SSD). And you need to know where your data lives. Do not let a cheap OpenVZ slice bottleneck your architecture.

Ready to build a cluster that actually stays up? Deploy a CoolVDS KVM instance in Oslo today and experience the stability your code deserves.