Hardening Docker 1.12: Lessons from the Trenches
If the Dirty COW (CVE-2016-5195) vulnerability disclosure last month didn't make you audit your infrastructure, you are asleep at the wheel. We saw privilege escalation attacks go from "theoretical" to "script kiddie accessible" in under 48 hours. For those of us managing infrastructure in Norway, where reliability is less of a feature and more of a religion, this was a wake-up call.
Containers are fantastic. We use them. You use them. But let's stop pretending `docker run` is enough to secure a production environment. A container is effectively just a process with a mask on. If that mask slips—and without proper configuration, it will—your host machine is compromised.
This guide covers the specific steps we are taking right now, in November 2016, to harden Docker deployments on Ubuntu 16.04 LTS, and why the underlying virtualization technology of your VPS provider matters more than you think.
1. The "Shared Kernel" Trap
Here is the uncomfortable truth: Most cheap VPS providers in Europe are still selling you OpenVZ containers and calling them "servers." In an OpenVZ environment, you are sharing the kernel with every other tenant on that physical node. If a vulnerability like Dirty COW hits the kernel, isolation evaporates.
This is why at CoolVDS, we exclusively use KVM (Kernel-based Virtual Machine). With KVM, your VPS has its own kernel. If you run Docker inside a CoolVDS instance and a container breaks out, it's trapped inside your VM, not roaming free on our physical hypervisor. This layer of hardware virtualization is your strongest defense against zero-day kernel exploits.
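Not sure which one you are actually on? You can usually tell from inside the guest. A minimal check on Ubuntu 16.04 (exact outputs vary by distribution and virtualization stack):
# Ask systemd which hypervisor it detects
systemd-detect-virt
# Prints "kvm" on a KVM guest, "openvz" inside an OpenVZ container
# OpenVZ containers also expose a /proc/vz directory
ls /proc/vz 2>/dev/null && echo "OpenVZ detected"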
2. Drop Capabilities Like They’re Hot
By default, Docker grants a surprising number of Linux capabilities to a container. Does your Nginx container really need to change system time or load kernel modules? Absolutely not.
We adopt a whitelist approach: drop everything, then add back only what is strictly necessary. Here is how we launch a web frontend:
docker run -d \
--name frontend-app \
--cap-drop=ALL \
--cap-add=NET_BIND_SERVICE \
--cap-add=SETUID \
--cap-add=SETGID \
--security-opt=no-new-privileges \
nginx:1.10-alpine
The `--security-opt=no-new-privileges` flag is crucial. It prevents processes inside the container from gaining new privileges during execution (e.g., via setuid binaries). It landed in Docker 1.11 and should be standard in your playbooks by now.
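It is worth verifying the result rather than trusting the flags. A quick sanity check against the running container (capsh comes from the libcap2-bin package on Ubuntu):
# Which capabilities were explicitly added and dropped?
docker inspect --format '{{.HostConfig.CapAdd}} {{.HostConfig.CapDrop}}' frontend-app
# Effective capability mask of PID 1 inside the container
docker exec frontend-app grep CapEff /proc/1/status
# Decode the hex mask into names: capsh --decode=<CapEff value from above>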
3. Immutable Infrastructure: Read-Only Filesystems
If an attacker compromises your application, their first move is often to download a payload or modify a config file. Make that impossible. Run your containers with a read-only root filesystem.
Obviously, your app needs to write somewhere (logs, temp files). Use tmpfs or explicit volume mounts for that. Here is a practical example for a stateless API service:
docker run -d \
--read-only \
--tmpfs /run \
--tmpfs /tmp \
-v /var/log/app_logs:/app/logs:rw \
my-api:v2
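Prove to yourself that the lock holds. Assuming the container above was started with --name api (a name we are inventing here for the example), writes outside the declared scratch areas should fail immediately:
# Fails with a "Read-only file system" error
docker exec api touch /payload.sh
# Writes to the tmpfs mounts still succeed
docker exec api touch /tmp/scratch && echo "tmpfs writable, as intended"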
Pro Tip: If you are using Docker Compose (version 2 syntax), you can define this in your YAML. It forces your developers to stop writing junk to the container's ephemeral storage layer, which bloats the storage driver and kills I/O performance.
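A rough Compose equivalent of the run command above, in version 2 syntax (the service name is a placeholder; double-check key support against your docker-compose version):
version: '2'
services:
  api:
    image: my-api:v2
    read_only: true
    tmpfs:
      - /run
      - /tmp
    volumes:
      - /var/log/app_logs:/app/logs:rw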
4. Network Segmentation on a Single Host
The default docker0 bridge is a free-for-all. Every container can talk to every other container. If you have a WordPress site and a separate internal analytics tool on the same host, and WordPress gets hacked, the attacker can port scan your internal tools.
Create dedicated user-defined networks. Docker's embedded DNS server (introduced back in 1.10) handles service discovery beautifully within these isolated scopes.
# Create isolated networks
docker network create --driver bridge frontend_net
docker network create --driver bridge backend_net
# Run the database only on the backend network
docker run -d --net=backend_net --name db mysql:5.7
# Run the app on both, acting as the bridge
docker run -d --net=frontend_net --name web app:latest
docker network connect backend_net web
With this setup, the database is completely unreachable from the public internet or any other container not explicitly attached to backend_net.
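You can verify the isolation with a throwaway container (busybox is just a convenient small image that ships ping; any equivalent works):
# From the frontend network, the name "db" does not even resolve
docker run --rm --net=frontend_net busybox ping -c 1 db
# From the backend network, it resolves and answers
docker run --rm --net=backend_net busybox ping -c 1 db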
5. The Norwegian Context: Data Sovereignty
While the EU is still figuring out the mess left by the invalidation of Safe Harbor, and the new "Privacy Shield" framework is being viewed with skepticism by privacy advocates, data location matters. The Norwegian Data Protection Authority (Datatilsynet) is clear: you are responsible for where your data lives.
Latency is another factor. If your customer base is in Oslo, Bergen, or Trondheim, routing traffic through a data center in Frankfurt or Amsterdam adds 20-30ms of unnecessary round-trip time. That sounds negligible, but it compounds with every SSL handshake and database query: a full TLS handshake alone costs two extra round trips before the first byte of the response arrives.
Benchmarking Latency: Oslo vs. The World
We ran a simple ping test from a fiber connection in Oslo Sentrum to various providers:
| Provider Location | Avg Latency (ms) | Jitter |
|---|---|---|
| US East (Virginia) | 98 | High |
| Europe (Frankfurt) | 32 | Low |
| CoolVDS (Oslo) | 2 | Negligible |
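The methodology is nothing fancy and easy to reproduce from your own connection (the hostname below is a placeholder for whichever endpoint you want to test):
# 20 probes per target; "avg" in the summary line is the latency, "mdev" approximates jitter
ping -c 20 target.example.com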
6. Keeping the Host Clean
The security of the container is irrelevant if the host OS is outdated. On our managed CoolVDS instances we automate security updates, but if you run an unmanaged VPS, staying on top of kernel patches is on you.
# Check your current kernel version
uname -sr
# Update Ubuntu 16.04 immediately if you haven't patched Dirty COW
sudo apt-get update && sudo apt-get dist-upgrade
# Check if a reboot is required
if [ -f /var/run/reboot-required ]; then
echo 'Reboot required!'
fi
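If manual apt runs are not realistic, Ubuntu's unattended-upgrades package keeps security patches flowing automatically; a reasonable baseline, though kernel updates still need a reboot to take effect:
# Enable automatic security updates on Ubuntu 16.04
sudo apt-get install -y unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades
# Confirm the periodic jobs are enabled
cat /etc/apt/apt.conf.d/20auto-upgrades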
Final Thoughts
Security in 2016 isn't about buying a "secure" product; it's about reducing the attack surface. By combining Docker's native security flags with the strict hardware isolation of KVM on CoolVDS, you create a defense-in-depth architecture that is compliant with Norwegian standards and tough enough for the modern web.
Don't wait for the next CVE to hit. Audit your docker run commands today.
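Docker's own Bench for Security script, which checks a host against the CIS Docker benchmark, is a reasonable way to start that audit. A minimal invocation (the project README lists additional recommended mounts for fuller coverage):
# Run the CIS benchmark checks against the local daemon and containers
docker run -it --net host --pid host \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/docker-bench-security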
Need a sandbox to test your hardened configurations? Deploy a high-performance NVMe KVM instance in Oslo on CoolVDS in under 55 seconds.