Container Security in 2014: Why Docker 1.0 Isn't a Sandbox (And How to Fix It)
Docker hit version 1.0 last month. The hype train has left the station, and every developer from Oslo to Trondheim is clamoring to ship code in containers. I get it. The portability is seductive. But as a systems architect who has spent the last decade watching servers bleed data due to bad configurations, I need you to pause.
Containers are not Virtual Machines.
If you treat a Docker container or an LXC instance like a dedicated VPS, you are going to get burned. In traditional virtualization (like the KVM instances we provision at CoolVDS), the hypervisor simulates hardware. The guest kernel is isolated. In containers, you are sharing the host kernel. If a process breaks out of a container in 2014, it's root on your host node. Game over.
Here is how we lock this down. We aren't going to wait for the ecosystem to mature; we are going to secure it now using battle-tested Linux primitives.
1. The Myth of Root Isolation
By default, UID 0 inside the container is UID 0 on the host: the kernel sees one and the same root user. Run `top` on your host, and you can often see the container processes running as root. This is terrifying.
Until user namespaces mature (they are in the mainline kernel, but Docker's support for them is still experimental and rough), you must assume that root inside = root outside. Do not run services as root inside your containers.
The Fix: User Switching
In your Dockerfile, stop being lazy. Create a specific user.
# Don't do this
# CMD ["/bin/my-app"]
# Do this instead
RUN groupadd -r app && useradd -r -g app app
USER app
CMD ["/bin/my-app"]
If you are using LXC directly, ensure you are mapping UIDs correctly in your config:
lxc.id_map = u 0 100000 65536
lxc.id_map = g 0 100000 65536
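With that mapping in place, UID 0 inside the container corresponds to an unprivileged UID on the host: the host UID is simply the mapping offset plus the container UID, for the first 65536 IDs. A quick sanity check of the arithmetic:

```shell
# With "lxc.id_map = u 0 100000 65536", a container UID maps to
# host UID = offset (100000) + container UID, for container UIDs 0..65535.
offset=100000
for container_uid in 0 33 1000; do
  echo "container uid $container_uid -> host uid $((offset + container_uid))"
done
```

So even if an attacker becomes "root" inside the container, on the host they are UID 100000, a nobody with no special privileges.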
2. Mandatory Access Control (AppArmor & SELinux)
Since we share a kernel, we need a kernel-level babysitter. If an attacker exploits a vulnerability in OpenSSL (Heartbleed is still fresh in our minds) within your web container, we need to ensure they can't write to /proc or mount file systems.
On Ubuntu 14.04 LTS, AppArmor is your best friend. Docker ships with a default profile, but it is often too permissive for high-security Norwegian banking or healthcare apps.
Here is how you load a stricter profile. Do not disable this.
# Load the profile
mkdir -p /etc/apparmor.d/containers
cat > /etc/apparmor.d/containers/secure-profile <<'EOF'
profile secure-container flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/base>
network inet tcp,
network inet udp,
network inet icmp,
deny network raw,
deny mount,
/proc/** r,
/sys/** r,
/usr/bin/** ixr,
/var/log/** w,
}
EOF
apparmor_parser -r -W /etc/apparmor.d/containers/secure-profile
Then apply it when launching:
docker run --security-opt apparmor:secure-container -d nginx
Pro Tip: If you are on CentOS 6.5 or the brand new CentOS 7, SELinux is the standard. I've seen too many admins run `setenforce 0` because they don't understand contexts. Don't be that person. Learn `chcon`. It saves jobs.
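The label that `chcon` manipulates has four colon-separated fields: user, role, type, and sensitivity level. The type (third field) is what Docker's SELinux policy cares about; on RHEL-family hosts, `svirt_sandbox_file_t` is the type containers are allowed to touch. A quick way to internalize the format (the label below is a typical example, not output from your system):

```shell
# Dissect a sample SELinux context of the kind `ls -Z` prints.
# Format is user:role:type:level -- `chcon -t` changes the third field.
ctx="system_u:object_r:svirt_sandbox_file_t:s0"
echo "type: $(echo "$ctx" | cut -d: -f3)"
# To let a container read a bind-mounted host directory (requires root):
#   chcon -Rt svirt_sandbox_file_t /srv/app-data
```

Relabel the volume instead of reaching for `setenforce 0`, and SELinux keeps doing its job.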
3. Filesystem Limits and Quotas
A classic Denial of Service (DoS) attack against containerized infrastructure involves filling the disk. Since containers often share the host's filesystem (especially /var/lib/docker), one rogue log file can crash the entire server.
If you are using the Device Mapper storage backend (common in RHEL/CentOS), you can enforce strict limits. Ext4 makes this harder, which is why we at CoolVDS recommend placing your heavy I/O data directories on mounted volumes that map to separate block devices.
To verify and configure this by hand:
# Check your current storage driver
docker info | grep Storage
# If you are using devicemapper, ensure you set basesize at daemon start
# /etc/default/docker
DOCKER_OPTS="--storage-opt dm.basesize=10G"
4. Network Segmentation (The NIX Context)
Latency matters. Connecting to the Norwegian Internet Exchange (NIX) in Oslo, we want packets to fly. But the default Docker bridge (docker0) allows all containers to talk to each other. If your WordPress container gets hacked, it shouldn't be able to scan your internal Redis database container.
Disable inter-container communication by default in your daemon settings:
# Add this to your Docker daemon configuration
--icc=false
Then, explicitly link containers only when necessary. This whitelist approach is the only way to align with the strict data minimization principles found in the Personopplysningsloven (Personal Data Act) and the proposed EU General Data Protection Regulation (GDPR) currently being debated in Brussels.
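With ICC off, `--link` is how a container learns about the one peer it is allowed to reach. Docker 1.0 injects the link as environment variables into the consuming container; the snippet below fakes those variables (the address is a made-up example, not real daemon output) to show what your app should read instead of hard-coding IPs:

```shell
# Docker sets variables of this shape for `--link redis:redis`; the values
# here are illustrative stand-ins so the snippet runs without a daemon.
REDIS_PORT_6379_TCP_ADDR="172.17.0.2"
REDIS_PORT_6379_TCP_PORT="6379"
echo "connect to redis at ${REDIS_PORT_6379_TCP_ADDR}:${REDIS_PORT_6379_TCP_PORT}"
```

Your application reads the link variables at startup; nothing else on the bridge is reachable, which is exactly the point.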
5. The CoolVDS Architecture: KVM + Containers
This brings us to the architectural reality. Containers offer speed. Virtual Machines offer isolation.
At CoolVDS, we don't oversell you on "Bare Metal Containers" because the security models aren't there yet for multi-tenant environments. We use KVM (Kernel-based Virtual Machine).
| Feature | OpenVZ / LXC Hosting | CoolVDS (KVM) |
|---|---|---|
| Kernel | Shared with 100+ neighbors | Dedicated, private kernel |
| Security | One kernel panic kills everyone | Total isolation |
| Docker Support | Often impossible (modules missing) | Native, full support |
The smartest setup in 2014 is running Docker inside a CoolVDS KVM instance. You get the developer velocity of containers, but the hard security boundary of hardware virtualization. If your container kernel panics, your VPS reboots. Your neighbor's VPS doesn't even blink.
Furthermore, local compliance requires data residency. Our servers are physically located in Norway. When Datatilsynet asks where your encryption keys live, you can point to a server in Oslo, not some nebulous cloud region in Virginia.
Conclusion
Containerization is the future, but security is the present. Do not trade the integrity of your infrastructure for a faster deployment cycle. Lock down your user permissions, enforce AppArmor profiles, and isolate your container hosts using robust virtualization.
Ready to build a secure Docker fleet? Deploy a high-performance KVM instance on CoolVDS today. We offer pure NVMe storage and low-latency access to the Nordic backbone.