Container Security in 2014: Locking Down LXC and Docker After the Heartbleed Wake-Up Call

It has been exactly 11 days since the Heartbleed bug (CVE-2014-0160) went public. If you are anything like me, you have spent the last week rotating SSL certificates, patching OpenSSL libraries, and explaining to management why ‘secure’ servers were suddenly leaking memory like a sieve. The dust is settling, but it leaves us with a lingering paranoia about isolation.

Right now, everyone in the DevOps scene from Oslo to Silicon Valley is talking about Docker (currently at version 0.10) and the underlying LXC (Linux Containers) technology. It is lightweight, it is fast, and it boots in milliseconds. But let’s be honest: in a shared kernel environment, how secure is your data really? If you are hosting sensitive data subject to the Norwegian Personal Data Act (Personopplysningsloven), you cannot afford to guess.

This guide cuts through the hype. We are going to look at how to harden LXC configurations, manage Linux Capabilities, and why running containers inside a KVM slice—like those we provide at CoolVDS—is the only sane choice for production environments in 2014.

The Shared Kernel Trap

The fundamental difference between a full virtual machine (like KVM or VMware) and a container (LXC/OpenVZ) is the kernel. In a container, you are sharing the host's kernel. If a process inside the container manages to trigger a kernel panic or exploit a kernel vulnerability, the entire host goes down. Worse, an attacker might break out of the container's namespace isolation altogether and land on the host itself.
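
You can see this shared kernel for yourself. On an LXC 1.0 host, attach to a running container (the name "web" below is just a placeholder) and compare kernel versions:

# On the host
uname -r
# e.g. 3.13.0-24-generic on a fresh Ubuntu 14.04 install

# Inside the container - same version, because there is only one kernel
lxc-attach -n web -- uname -r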

For development, this is fine. For production, especially if you are dealing with multi-tenant architectures, it is a risk. This is why at CoolVDS, we strictly use KVM (Kernel-based Virtual Machine) for our VPS instances. We give you a dedicated kernel. You can run Docker inside our KVM instances, giving you the best of both worlds: container agility with hardware-level isolation.
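
As a rough sketch (assuming a stock Ubuntu 14.04 guest and the docker.io package from the Ubuntu archive, which may lag a version or two behind upstream), getting Docker running inside a KVM instance looks like this:

# Inside the KVM guest (Ubuntu 14.04)
sudo apt-get update
sudo apt-get install -y docker.io
# The 14.04 package names the binary docker.io to avoid a clash with
# an older, unrelated "docker" package
sudo docker.io run -i -t ubuntu:14.04 /bin/bash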

Hardening LXC: Dropping Capabilities

If you are running LXC manually (or via early Docker versions), the default configuration is often too permissive. The Linux kernel divides root privileges into distinct units called capabilities. A web server container does not need to load kernel modules or manipulate system time.

You need to explicitly drop these capabilities in your LXC config file (usually found in /var/lib/lxc/container-name/config). Here is the configuration I used for a client’s Nginx frontend just yesterday:

# Network configuration
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = lxcbr0

# DROPPING CAPABILITIES
# Prevent loading kernel modules
lxc.cap.drop = sys_module
# Prevent raw I/O port operations
lxc.cap.drop = sys_rawio
# Prevent a broad range of admin operations (mounts and more) - the big one
lxc.cap.drop = sys_admin
# Prevent changing the system clock
lxc.cap.drop = sys_time
# Prevent overriding Mandatory Access Control (Smack) - not Ethernet MACs
lxc.cap.drop = mac_admin
# Prevent writing to the kernel audit log
lxc.cap.drop = audit_write

By stripping these, even if an attacker gets root inside the container, they cannot easily damage the host system. This is the principle of least privilege applied to containers.
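
To verify the drops actually took effect, inspect the capability bounding set from inside the running container. capsh comes from the libcap2-bin package and must be installed in the container's rootfs; "web" is again a placeholder name:

# From the host: print the container's capability bounding set
lxc-attach -n web -- capsh --print | grep Bounding
# cap_sys_admin, cap_sys_module and friends should be missing from the list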

Pro Tip: Ubuntu 14.04 LTS was released yesterday (April 17th). It includes significant improvements to the AppArmor profiles for LXC. If you are still on 12.04, it is time to schedule a release upgrade. The new profiles block writes to most of /proc and /sys by default.
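
On a 14.04 host you can confirm the profiles are loaded and pin a container to one explicitly (the config path below is the stock LXC layout):

# On the Ubuntu 14.04 host: are the LXC AppArmor profiles loaded?
sudo aa-status | grep lxc

# In /var/lib/lxc/container-name/config, pin the profile explicitly:
lxc.aa_profile = lxc-container-default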

Network Isolation with iptables

By default, containers can often talk to each other over the bridge interface. If one container is compromised, it can ARP spoof or sniff traffic from its neighbors. We need to lock this down on the host: iptables handles the IP layer (ARP itself is a job for ebtables, but IP filtering covers the common case).

Here is a script to isolate a specific container interface (e.g., veth1234) so it can only reach the gateway and the outside world, not other containers:

#!/bin/bash
# Bridge port (veth) of the container, the LXC bridge, and the container subnet
IFACE="veth1234"
BRIDGE="lxcbr0"
SUBNET="10.0.3.0/24"

# Container-to-container frames stay on the bridge; they only traverse the
# iptables FORWARD chain if bridge-netfilter is enabled on the host.
sysctl -w net.bridge.bridge-nf-call-iptables=1

# Block traffic from this container to its neighbours on the bridge subnet.
# For bridged traffic, -i matches the bridge device rather than the veth,
# so we match the physical port with the physdev module.
iptables -I FORWARD -m physdev --physdev-in "$IFACE" -d "$SUBNET" -j DROP

# Allow traffic between the container and the outside world via eth0.
# Inserted afterwards, so these rules land above the DROP.
iptables -I FORWARD -m physdev --physdev-in "$IFACE" -o eth0 -j ACCEPT
iptables -I FORWARD -i eth0 -o "$BRIDGE" -j ACCEPT

This ensures that if your WordPress container gets hit by a brute-force attack, your database container remains unreachable over the internal bridge unless you explicitly insert ACCEPT rules for the specific ports it needs.
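
A quick sanity check after applying the rules (the neighbour address 10.0.3.102 is just an example):

# On the host: list the FORWARD chain with packet counters
iptables -L FORWARD -v -n --line-numbers

# From inside the isolated container: this should now time out
ping -c 2 10.0.3.102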

The Docker 0.10 Reality

Docker is moving fast. They recently introduced libcontainer to reduce dependency on LXC. However, the daemon still runs as root. This is the scary part. A bug in the Docker daemon could hand over the keys to the kingdom.

Until user namespaces are fully mature in the upstream kernel and Docker implements them, do not run Docker on a bare-metal server shared with other customers. This is why "Container Hosting" providers that use shared kernels are playing with fire.
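
You can see the attack surface on any machine running Docker today: the daemon is a root process, and its control socket is root-owned, so anything that can talk to that socket effectively has root on the host (the group on the socket may differ depending on how Docker was packaged):

# The Docker daemon runs as root...
ps -ef | grep [d]ocker
# ...and so does its control socket
ls -l /var/run/docker.sock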

The CoolVDS Architecture Strategy

For our clients in Norway who need to comply with Datatilsynet regulations regarding data integrity, we recommend this stack:

  1. Physical Host: High-end hardware with NVMe storage (still expensive, but worth it for I/O).
  2. Hypervisor: KVM. Provides hard memory and CPU separation.
  3. Guest OS (CoolVDS): Ubuntu 14.04 LTS or CentOS 6.5.
  4. Application Layer: Docker containers running inside the Guest OS.

This setup ensures that if a container breakout occurs, the attacker is trapped inside your VPS, not on our physical node. You get the developer ease of `docker run` without the security nightmare.

Compliance and Data Sovereignty

With the EU discussing major data protection reforms (what some are calling the "General Data Protection Regulation" draft), the location of your data is paramount. Hosting on US clouds puts you under the Patriot Act. Hosting on CoolVDS keeps your data in Oslo/Europe, governed by Norwegian law.

Latency matters too. If your user base is in Scandinavia, routing traffic through Frankfurt or London adds unnecessary milliseconds. Our direct peering at NIX (Norwegian Internet Exchange) ensures your packets stay local.
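
Do not take my word for it; measure. mtr gives you per-hop latency in a single report (the hostnames below are placeholders for your own endpoints):

# From a client in Oslo: path and round-trip times to your VPS
mtr --report --report-cycles 10 vps.example.no
# Compare against an endpoint hosted on the continent
mtr --report --report-cycles 10 app-frankfurt.example.com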

Performance Check: The I/O Scheduler

When running multiple containers doing heavy database writes, the default Linux I/O scheduler (CFQ) can choke. Inside your KVM instance, switch to deadline or noop if you are on our SSD-backed storage tiers.

# Check current scheduler
cat /sys/block/vda/queue/scheduler
# [cfq] deadline noop

# Switch to deadline for better container throughput
echo deadline > /sys/block/vda/queue/scheduler
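
The echo above does not survive a reboot. On an Ubuntu guest you can make it permanent by setting the elevator on the kernel command line (stock GRUB file location assumed):

# In /etc/default/grub:
GRUB_CMDLINE_LINUX_DEFAULT="quiet elevator=deadline"

# Then regenerate the GRUB config and reboot
sudo update-grub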

Final Thoughts

Containerization is the future; there is no doubt about it. But in 2014, the security tools are still catching up to the deployment tools. Do not sacrifice security for convenience.

Build your infrastructure on a foundation that guarantees isolation. Whether you are patching OpenSSL or deploying the next big app, start with a solid, isolated KVM environment.

Ready to build a secure, container-ready cluster? Deploy a high-performance KVM instance on CoolVDS in under 55 seconds and lock down your infrastructure today.