Container Security Post-Shellshock: Hardening Docker and LXC in Production
It has been exactly one month since the Shellshock bash vulnerability (CVE-2014-6271) woke every sysadmin in Europe up at 3:00 AM. If that panic taught us anything, it is that our isolation layers are thinner than we think. While the hype around Docker 1.2 and the brand new 1.3 release is deafening, most DevOps teams are deploying it with the security hygiene of a Windows 98 machine connected directly to the internet.
I have spent the last week auditing a client's infrastructure in Oslo. They were running financial microservices in Docker containers. As root. On bare metal. If a single process breaks out of that namespace, they own the host. They own the hardware. In the context of Norwegian banking compliance, that is not just negligence; it is a shutdown notice from Datatilsynet waiting to happen.
Let's cut through the noise. Containers are not Virtual Machines. They are fancy chroot environments on steroids. Here is how you secure them before you put actual traffic on them.
1. The "Root" of All Evil
By default, the Docker daemon requires root privileges. Worse, the processes inside the container often run as root. If I compromise your web server running inside a container, and I find a kernel exploit (like the recent ptrace bugs), I am root on your host server.
The Fix: Never run services as root inside the container. Create a specific user in your Dockerfile.
# Dockerfile best practice: create a dedicated user and drop root before CMD
FROM ubuntu:14.04
RUN groupadd -r app && useradd -r -g app -m -d /home/app app
WORKDIR /home/app
COPY app.py /home/app/
USER app
CMD ["python", "app.py"]
If you are using LXC (which hit version 1.0 earlier this year and is currently more stable for full-system containers), you should be using Unprivileged Containers. This maps the container's root user (UID 0) to an unprivileged user on the host (e.g., UID 100000). If they break out, they are nobody.
Edit your /etc/subuid and /etc/subgid on the host:
root:100000:65536
coolvds_user:165536:65536
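The host-side ranges do nothing on their own; the container config has to reference them. A minimal sketch for containers started by coolvds_user (LXC 1.0 reads this from ~/.config/lxc/default.conf or the container's own config):
# map container UIDs/GIDs 0-65535 onto the host range starting at 165536
lxc.id_map = u 0 165536 65536
lxc.id_map = g 0 165536 65536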
2. Dropping Kernel Capabilities
Linux divides root privileges into distinct units called capabilities. A web server does not need to mount filesystems, load kernel modules, or change the system time. Yet, by default, we often give containers these powers.
With the latest Docker versions, we can—and must—drop everything we don't need. This is the concept of "Least Privilege" applied to the kernel.
# The paranoid mode: Drop everything, add back only what is needed
docker run --cap-drop=ALL --cap-add=NET_BIND_SERVICE --cap-add=SETUID --cap-add=SETGID nginx
This command strips the container of all powers, then adds back only the ability to bind to privileged ports below 1024 (like port 80) and to switch user and group IDs so nginx can drop its worker processes to an unprivileged account. If an attacker manages to inject shellcode, they will find themselves in a crippled environment with very little room to do damage.
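You can verify what survives the drop by reading the capability bounding set from /proc in a throwaway container (using the stock ubuntu:14.04 image as an example):
docker run --rm --cap-drop=ALL --cap-add=NET_BIND_SERVICE --cap-add=SETUID --cap-add=SETGID ubuntu:14.04 grep Cap /proc/self/status
The CapBnd line should show only a handful of bits set instead of the default full mask.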
3. Network Segmentation (Iptables is still King)
The default docker0 bridge allows all containers to talk to each other. If your WordPress container gets hacked, it shouldn't be able to scan your MySQL container or your internal Redis cache.
Until software-defined networking matures (projects like SocketPlane are interesting but experimental), we rely on good old iptables. On CoolVDS instances, we recommend disabling inter-container communication at the daemon level for public-facing hosts.
Edit /etc/default/docker on your Ubuntu 14.04 host:
DOCKER_OPTS="--icc=false --iptables=true"
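The new options only take effect after the daemon restarts. The upstart job is called docker if you installed from Docker's apt repository, or docker.io if you used the stock Ubuntu package:
sudo service docker restart    # or: sudo service docker.io restart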
Now, you must explicitly link containers if you want them to talk:
docker run -d --name db -e MYSQL_ROOT_PASSWORD=changeme mysql
docker run -d --name web --link db:db web_app
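The link shows up inside the web container as environment variables and an /etc/hosts entry. You can inspect both from a throwaway container while the db container from above is running:
docker run --rm --link db:db ubuntu:14.04 env | grep ^DB_
docker run --rm --link db:db ubuntu:14.04 cat /etc/hosts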
4. Defense in Depth: Why KVM Matters
This is where architecture decisions impact your sleep schedule. There are two ways to sell VPS hosting in 2014: OpenVZ and KVM.
OpenVZ uses a shared kernel. If you run Docker inside OpenVZ, you are running a container inside a container, sharing the kernel with 50 other customers on that physical node. It is a security nightmare. If a kernel panic happens in one container, the whole node can go down. We call this the "Noisy Neighbor" effect, but for security, it's the "Nosy Neighbor."
Pro Tip: Never run production containers on OpenVZ. The kernel modules required for advanced packet filtering or AppArmor profiles are usually missing or restricted.
At CoolVDS, we strictly use KVM (Kernel-based Virtual Machine). Each VPS gets its own dedicated kernel. This provides a hard, hardware-enforced virtualization boundary. Even if an attacker breaks out of your Docker container and exploits a kernel bug, they are trapped inside your KVM instance. They cannot touch the hypervisor or other clients. For data hosted in Norway, especially under strict NDAs or compliance rules, this layer is non-negotiable.
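If you are not sure what your current provider actually runs underneath you, the virt-what package in the Ubuntu repositories will tell you:
sudo apt-get install virt-what
sudo virt-what    # prints 'kvm' on a KVM guest, 'openvz' inside an OpenVZ container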
5. The Storage Factor: Read-Only is Safer
Immutability is a buzzword right now, but the concept is solid. If your application only reads from a path, mount that path read-only. This prevents attackers from dropping backdoors or tampering with configuration files. Docker's volume flag handles this with the :ro suffix:
docker run -d --name web -v /srv/web/content:/usr/share/nginx/html:ro -v /srv/web/logs:/var/log/nginx nginx
However, locking paths down breaks applications that expect to write logs or temp files in place. This is where high-performance I/O comes in. Offload logs to a dedicated writable volume, as the second -v flag above does, and map that volume to high-speed storage.
We recently benchmarked MySQL inside a container on standard SATA disks versus our CoolVDS NVMe arrays. The difference isn't just speed; it's stability. When InnoDB has to flush dirty pages out of the buffer pool, slow I/O stalls queries and the container stops responding, which orchestration tools might interpret as a crash, leading to restart loops. Fast storage keeps the system responsive and secure.
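If you want a rough picture of what your own storage can take before you blame the container, a quick fio run against the volume that will hold your data shows the random-write profile InnoDB cares about (illustrative parameters; adjust size and runtime for your disk):
fio --name=innodb-sim --rw=randwrite --bs=16k --size=1G --ioengine=libaio --direct=1 --runtime=60 --time_based --group_reporting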
Summary Checklist for Deployment
| Feature | Default Setting | Production Safe Setting |
|---|---|---|
| User | Root | Dedicated unprivileged user |
| Kernel Caps | All | Drop All, whitelist specific |
| Networking | --icc=true | --icc=false (Link explicitly) |
| Virtualization | OpenVZ / Bare Metal | KVM (CoolVDS) |
The container revolution is happening. But do not let the speed of deployment outpace your security policies. Use the tools Linux gives you—cgroups, namespaces, AppArmor, and SELinux—to build a fortress.
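Docker 1.3 also added the --security-opt flag, which lets you attach a stricter AppArmor profile than the default. A sketch, assuming you have written a profile called docker-hardened (the name is yours to choose) and loaded it on the host:
# load the custom profile on the host, then reference it at run time
sudo apparmor_parser -r -W /etc/apparmor.d/docker-hardened
docker run --security-opt apparmor:docker-hardened nginx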
And when you need a host that supports the custom kernel modules and isolation required for hardened Docker setups, don't gamble with shared kernels. Deploy a KVM instance on CoolVDS today. Our data center in Oslo ensures your latency is low and your data stays within Norwegian legal jurisdiction.