Container Wars 2014: Docker vs. LXC vs. OpenVZ – Architecting for Stability

It is January 2014. If you spend time on Hacker News, you might believe that virtual machines are dead and everything should be running inside a Docker container by next Tuesday. As a Systems Architect who actually has to answer the phone when the servers catch fire at 3 AM, I am here to tell you: slow down.

The hype around containerization is justified—the efficiency gains are massive—but the tooling is still in its infancy. We are seeing developers push code to production that relies on experimental features, assuming that isolation in a Linux container is as robust as a hardware hypervisor's. It isn't.

In this analysis, we will cut through the marketing noise. We are going to look at the three main players in the virtualization/containerization space right now: OpenVZ, LXC (Linux Containers), and the newcomer Docker. We will discuss why, for a serious Norwegian business handling sensitive data, the underlying infrastructure matters more than the container format you choose.

The "Noisy Neighbor" Reality

Before we touch Docker, we must address the elephant in the room: OpenVZ. For years, cheap VPS hosting in Europe has been dominated by OpenVZ. It relies on a shared kernel architecture. While efficient, it creates a dangerous scenario for production environments known as the "noisy neighbor" effect.

Pro Tip: If you are hosting on an OpenVZ node, check your /proc/user_beancounters. If you see the failcnt column incrementing, you are hitting the resource limits your provider has set, and your application is paying the price in latency. Real isolation requires KVM.
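To make that check concrete, here is a sketch that sums the failcnt column. On a real OpenVZ guest you would point awk at /proc/user_beancounters directly (as root); the sample data below is trimmed output standing in for the live file so the parsing is visible:

```shell
# A real OpenVZ guest exposes /proc/user_beancounters; this heredoc-style
# variable stands in for it so the failcnt parsing is easy to follow.
beancounters='     uid  resource      held   maxheld   barrier     limit  failcnt
     101: kmemsize   2097152   4194304  14372700  14790164        0
          numproc         24        35       240       240        3'

# Sum the last column of every data row; anything above zero means the
# node has already refused you a resource at least once.
failures=$(echo "$beancounters" | awk '$NF ~ /^[0-9]+$/ { sum += $NF } END { print sum + 0 }')
echo "total failcnt: $failures"
```

On the live system, swap the echo for `awk '...' /proc/user_beancounters` and run it from cron if you want to catch intermittent throttling.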

I recently audited a Magento shop hosted on a legacy OpenVZ cluster. They were experiencing random 500-millisecond lockups. It wasn't their code; it was another tenant on the same physical node running a massive MySQL backup script. This is why at CoolVDS, we strictly enforce KVM (Kernel-based Virtual Machine) virtualization. We give you a dedicated kernel. What you do with it—including running containers inside it—is your business, and your neighbors can't touch you.

Docker (v0.7): The Bleeding Edge

Docker is currently at version 0.7. It has revolutionized how we package applications, utilizing AUFS (Another Union File System) to layer images. It turns the nightmare of dependency management into a simple Dockerfile.

However, orchestrating Docker across multiple hosts is currently a manual affair. There is no magic clustering tool yet. If you are deploying Docker in production today, you are likely wrapping it in Chef, Puppet, or perhaps Ansible 1.4 if you prefer Python.

The Persistence Problem

One major pain point in 0.7 is data persistence. Containers are ephemeral. If a container dies, the data inside it dies unless you explicitly map volumes. Here is how we structure a resilient deployment command for a web service, ensuring logs and data survive a restart:

# Docker 0.7 syntax - explicitly mapping volumes for safety

sudo docker run -d \
  -p 80:80 \
  -v /var/log/nginx:/var/log/nginx \
  -v /srv/www/html:/usr/share/nginx/html \
  -name production_web \
  coolvds/nginx-custom:latest

Note the explicit volume mapping. If you forget this, and the Docker daemon crashes (which, let's be honest, it might), your access logs are gone. For a business adhering to strict Norwegian auditing standards, losing logs is unacceptable.
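One cheap safeguard: create and verify every host-side volume directory before running the command above, so a typo in a -v flag does not leave the daemon silently bind-mounting (or auto-creating) the wrong path. A minimal sketch, using placeholder paths under /tmp/demo rather than the real /var/log and /srv directories:

```shell
# Pre-flight check before `docker run`: make sure each host-side volume
# path exists. Paths under /tmp/demo are placeholders; substitute the
# real host directories from your -v mappings.
for dir in /tmp/demo/var/log/nginx /tmp/demo/srv/www/html; do
    mkdir -p "$dir"
    if [ -d "$dir" ]; then
        echo "ok: $dir"
    else
        echo "MISSING: $dir" >&2
        exit 1
    fi
done
```

Wire this into your deploy script ahead of the docker run, and a bad path fails loudly at deploy time instead of being discovered during an audit.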

LXC: The Mature Alternative

If Docker feels too experimental for your banking or healthcare client, LXC is the answer. It provides the same lightweight isolation but behaves more like a traditional VM. You get a full init system, meaning you can run service nginx restart inside it just like you are used to.

Setting up LXC on a CoolVDS KVM instance gives you the best of both worlds: hardware-level isolation from us, and lightweight containerization for your internal microservices.

# Creating a persistent LXC container on Ubuntu 12.04/13.10

sudo lxc-create -t ubuntu -n database_slave
sudo lxc-start -n database_slave -d

# Capping the container's memory at 512 MB via cgroups
sudo lxc-cgroup -n database_slave memory.limit_in_bytes 512M

This approach allows you to "slice up" a powerful CoolVDS dedicated instance into multiple isolated environments without the overhead of running nested hypervisors.

Orchestration: The Missing Piece

Right now, there is no industry standard for managing a cluster of containers. We are seeing some interesting work from the CoreOS team with etcd and fleet, but these are alpha tools. For 2014, the reliable architecture pattern is:

  1. Infrastructure: Solid KVM VPS instances (Pure SSD) to handle high I/O.
  2. Configuration Management: Ansible or SaltStack to provision the Docker/LXC host.
  3. Load Balancing: HAProxy or Nginx in front of the containers, routing traffic and handling failover.

Configuring Nginx as a Container Gateway

To route traffic to multiple backend containers running on different ports (since we can't bind port 80 multiple times), we use an Nginx upstream configuration. This is standard, boring, and it works.

upstream backend_cluster {
    server 127.0.0.1:8081;
    server 127.0.0.1:8082;
    server 127.0.0.1:8083;
}

server {
    listen 80;
    server_name api.norway-client.no;

    location / {
        proxy_pass http://backend_cluster;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        # Essential for fast failover if a container dies
        proxy_next_upstream error timeout invalid_header http_500;
    }
}

Latency, Compliance, and Geography

Why does all this technical detail matter? Because in Norway, we have specific challenges. The Datatilsynet (Data Protection Authority) is rigorous. If you are storing customer data in a volatile container that writes to a temporary filesystem, you are risking compliance failures.

Furthermore, latency to the NIX (Norwegian Internet Exchange) in Oslo is critical for user experience. Running heavy container abstraction layers on top of slow mechanical hard drives is a recipe for disaster. Containers generate massive amounts of random I/O (Input/Output) operations.

This is why we built CoolVDS on Enterprise SSD arrays with RAID 10. We don't just cache writes; we persist them instantly. When you run docker commit, it hits the disk immediately. If you are comparing hosting providers, run a simple dd test or ioping. If you aren't seeing the throughput you paid for, your container strategy will fail regardless of whether you use Docker or LXC.
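For reference, here is the kind of dd check I mean, writing 64 MB with conv=fdatasync so the number reflects actual flushes to disk rather than the page cache. The /tmp scratch path is just an example; run it against whatever filesystem your containers actually live on:

```shell
# Write 64 MB and force it to disk before dd reports; the final line of
# output includes the effective throughput. Remove the scratch file after.
out=$(dd if=/dev/zero of=/tmp/coolvds-ddtest bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1)
rm -f /tmp/coolvds-ddtest
echo "$out"
```

On a healthy SSD-backed node you should see triple-digit MB/s; single digits mean contended spinning disks, and no container strategy will save you.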

The Verdict for 2014

Docker is the future—I have no doubt about that. But for your core database and mission-critical legacy apps, standard KVM virtualization remains the king of reliability. Use Docker for your stateless frontend workers, use LXC for your internal tooling, but place them all on a foundation that guarantees resources.

Don't let your infrastructure be the bottleneck for your innovation. Spin up a KVM-based, SSD-powered instance on CoolVDS today, and build your container cluster on solid ground.