Container Security in 2014: Stop Handing Root Access to Your Host
Let’s be honest: 2014 has been the year of the container. If you have been to any meetup in Oslo or Bergen recently, someone is shouting about Docker. It is fast, it is portable, and it solves the "it works on my machine" dilemma. But as a sysadmin who has spent the last decade cleaning up after developers who chmod 777 everything, I am looking at this container craze with a very skeptical eye.
Here is the hard truth nobody in the hype cycle tells you: containers are not virtual machines. They are fancy process isolation. When you run a Docker container today (version 1.3.2 just dropped; update if you haven't), you are running an ordinary process on the host kernel. If that process runs as root inside the container, it is uid 0 on the host as well, and a single kernel vulnerability is all it takes for that to matter on your host server.
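A quick sanity check on any Linux box: user namespace remapping is not available in Docker 1.3, so uid 0 inside a container is literally the same uid 0 the host kernel enforces permissions with.

```shell
# Resolve the numeric uid of "root". Inside an unremapped container,
# the answer is the same 0 the host kernel uses for permission checks.
id -u root
```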
With the Shellshock bash vulnerability still fresh in our logs from September, relying on thin isolation layers is terrifying. If you are deploying containers in production without hardening them, you are not being "agile"; you are being negligent. Let's fix that.
The Shared-Kernel Problem
The fundamental difference between a container (LXC/Docker) and a Virtual Private Server (like the KVM instances we provision at CoolVDS) is the kernel. In a KVM environment, you have your own dedicated kernel. If your kernel panics, only your VM goes down. In a container, you share the host's kernel.
This matters for isolation. In the Linux 3.x kernels we are running on CentOS 7 and Ubuntu 14.04, cgroups and namespaces are robust, but not bulletproof. There is a reason Google uses containers but locks them down with tools we don't even have public access to yet.
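You can see the shared kernel for yourself. The first command below runs anywhere; the containerized variant (commented out, assuming you have a Docker daemon handy) prints the exact same string from inside any image, because there is only one kernel to report.

```shell
# The kernel release string is a property of the host, not the image.
uname -r
# From inside any container the output is identical, e.g.:
# docker run --rm busybox uname -r
```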
Pro Tip: Never assume a container protects you from a malicious neighbor. If you are handling sensitive customer data under the Norwegian Personal Data Act (Personopplysningsloven), shared kernel hosting (like OpenVZ or shared Docker hosts) is a compliance risk. For strict data separation, always wrap your containers inside a dedicated KVM instance.
Hardening Docker 1.3+
If you must run Docker (and let's face it, the workflow is too good to ignore), you need to strip it of its superpowers. By default, Docker grants a massive subset of capabilities to the container. We need to drop those.
1. Drop Capabilities
Most web applications do not need to forge raw packets, create device nodes, or change network settings, yet Docker's default capability set includes things like CAP_NET_RAW and CAP_MKNOD anyway. Use the --cap-drop flag to restrict the container's power. An Nginx frontend needs to bind a privileged port and switch to its worker user; it does not need NET_ADMIN, and it does not need the rest of the default set either.
# The wrong way (Default, insecure)
$ docker run -d -p 80:80 nginx
# The right way (Stripped capabilities)
$ docker run -d -p 80:80 --cap-drop=ALL --cap-add=NET_BIND_SERVICE --cap-add=SETUID --cap-add=SETGID nginx
This command drops ALL capabilities, then adds back only what Nginx needs: NET_BIND_SERVICE to bind port 80, and SETUID/SETGID so the master process can drop to the worker user. (Some images also want CHOWN for their log and cache directories; add it back only if startup fails without it.)
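To verify the drop actually took effect, read the capability bitmask straight from /proc (for a container, that is `grep CapEff /proc/1/status` via docker exec). Here is a minimal sketch that decodes a CapEff hex mask; the example value is the classic full default set Docker grants, in which CAP_NET_BIND_SERVICE is capability bit 10.

```shell
# Check a CapEff mask (as shown in /proc/<pid>/status) for
# CAP_NET_BIND_SERVICE, which is capability bit number 10.
CAPEFF=00000000a80425fb   # example: Docker's default capability set
if [ $(( (0x$CAPEFF >> 10) & 1 )) -eq 1 ]; then
  echo "NET_BIND_SERVICE granted"
else
  echo "NET_BIND_SERVICE dropped"
fi
```

Run the same check against a --cap-drop=ALL container and the mask shrinks accordingly.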
2. Do Not Run SSH Inside Containers
I see this constantly. Developers treat containers like mini-VPSs and install openssh-server inside them. Stop it. This bloats the image and increases the attack surface. In 2014, with the release of Docker 1.3, we finally got docker exec. Use it.
# Deprecated/Old School (nsenter)
$ PID=$(docker inspect --format '{{.State.Pid}}' my_container)
$ nsenter --target $PID --mount --uts --ipc --net --pid
# The Modern Way (Docker 1.3+)
$ docker exec -it my_container bash
This gives you a shell inside the existing namespace without running a background daemon that needs patching.
Network Isolation with Iptables
Docker modifies iptables rules automatically to forward ports, which can sometimes bypass your UFW or existing firewall rules if you aren't careful. It binds to 0.0.0.0 by default, exposing your internal Redis or MySQL container to the entire internet if you publish the port.
Always bind to a specific interface, preferably a private network interface if you are on a CoolVDS setup with private networking enabled.
# Exposes to the world (Dangerous for databases)
$ docker run -d -p 3306:3306 mysql
# Exposes only to localhost (Better)
$ docker run -d -p 127.0.0.1:3306:3306 mysql
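To double-check what actually got exposed, you do not even need netstat: the kernel lists every TCP listener in /proc/net/tcp, in hex. A local_address of 00000000 means 0.0.0.0 (wide open), 0100007F is 127.0.0.1 (little-endian), and port 3306 is 0CEA.

```shell
# Print every listening TCP socket (state 0A == LISTEN) straight from
# the kernel. A local_address of 00000000:0CEA would be 0.0.0.0:3306.
awk 'NR > 1 && $4 == "0A" { print $2 }' /proc/net/tcp
```

If you see 00000000 next to a database port after publishing with -p, you bound it to the world.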
For more advanced isolation, start the daemon with `--iptables=false` and manage the FORWARD chain rules yourself, but that is a topic for another day.
The Storage IO Challenge
Security is not just about hackers; it's about availability (the 'A' in the CIA triad). One of the biggest issues with containers on shared hosts is the "Noisy Neighbor" effect on disk I/O. If another container on the host decides to rebuild a massive log index, your database latency spikes.
This is where infrastructure choice dictates performance. At CoolVDS, we moved to pure SSD storage arrays this year. Spinning rust (HDD) just can't handle the random read/write patterns of fifty Docker containers fighting for IOPS.
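If you suspect a noisy neighbor, measure before you blame the application. A crude but honest sequential-write test with dd (conv=fdatasync forces the data to stable storage before dd reports a number, assuming GNU dd):

```shell
# Write 64 MB and force it to disk; GNU dd prints the effective
# throughput on the last line of its status output.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1
rm -f /tmp/ddtest
```

Run it a few times at different hours; large variance between runs is the noisy-neighbor signature.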
| Feature | Container (Shared Host) | CoolVDS KVM |
|---|---|---|
| Kernel Isolation | Shared (Risky) | Dedicated (Secure) |
| I/O Performance | Unpredictable | Guaranteed/Dedicated |
| SELinux/AppArmor | Host Dependent | Fully Customizable |
The Verdict: Containment Needs Strong Walls
Containers are fantastic for deployment velocity. They make moving code from a developer's laptop to production incredibly smooth. But they are not security boundaries. In the eyes of the Linux kernel, a container is just a process with a slightly different view of the world.
If you are building for the Norwegian market, where reliability and data integrity are paramount, you need a foundation that forgives no errors. The ideal architecture in late 2014 is a hybrid: Run your Docker containers inside a hardened KVM VPS.
This gives you the workflow benefits of Docker with the hardware-level isolation of KVM. If a container breaks out, it is trapped inside your VM, not roaming free on the bare metal host. At CoolVDS, we provide the high-performance SSD VPS instances that make this nested architecture viable without killing your I/O.
Ready to harden your stack? Don't gamble with shared kernels. Deploy a KVM instance on CoolVDS today and build your container fleet on solid ground.