Docker Security in 2017: Why Your Container Strategy is a Ticking Time Bomb
It is December 2016. If you haven't patched CVE-2016-5195 (Dirty COW) yet, stop reading this and go update your kernel. I’ll wait.
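If you are on Ubuntu 16.04, the fix is a kernel package upgrade and a reboot. A minimal sketch, assuming the stock linux-image-generic kernel:

sudo apt-get update
sudo apt-get install --only-upgrade linux-image-generic
sudo reboot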
Back? Good. That vulnerability was a wake-up call for the entire industry. It proved exactly what we paranoid sysadmins have been screaming since Docker 0.9: containers are not virtualization. They are process isolation. If the kernel leaks, the game is over. In a shared hosting environment, that means a neighbor running a vulnerable WordPress container could theoretically break out and access your financial data.
I’ve spent the last month auditing infrastructure for a fintech client here in Oslo. What I found was terrifying: production environments running Docker daemons as root, images pulled blindly from Docker Hub, and zero network segregation. With the new EU data protection regulation looming on the horizon (yes, the GDPR text was adopted this April), this negligence isn't just dangerous; it's a liability.
Here is how to lock down your container infrastructure before 2017 kicks in, and why we at CoolVDS strictly wrap containers inside KVM slices.
1. The Root Cause: User Namespaces
By default, the root user inside a Docker container is the same root user as the host machine. If a process breaks out of the container (via a kernel exploit like Dirty COW), it has root access to your server. That is a disaster scenario.
Docker 1.10 introduced User Namespaces, but I still see people disabling them because "it breaks volume permissions." Fix your permissions, don't disable security.
You need to map the container root to a non-privileged user on the host. In Ubuntu 16.04 (Xenial), you configure the Docker daemon to use the `userns-remap` option.
Configuration: /etc/docker/daemon.json
{
  "userns-remap": "default",
  "ipv6": false,
  "icc": false,
  "no-new-privileges": true
}
This forces Docker to create a namespace mapping. Even if an attacker breaks out, they land on the host as an unprivileged, remapped UID (the default `dockremap` range) with zero privileges.
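To confirm the remap actually took effect, restart the daemon and compare UIDs inside and outside a container. The values below are illustrative; the subordinate range comes from your /etc/subuid:

sudo systemctl restart docker
grep dockremap /etc/subuid            # e.g. dockremap:165536:65536
docker run -d --name probe alpine sleep 300
docker exec probe id                  # uid=0 inside the container
ps -o user,pid,cmd -C sleep           # the same process shows up as a high UID on the host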
Pro Tip: Always mount your filesystems with `nosuid` if possible. It prevents set-user-identifier bits from taking effect, adding another layer of defense against privilege escalation. CoolVDS NVMe volumes support granular mount options exactly for this reason.
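As an example of what that looks like in practice, assuming /var/lib/docker sits on its own partition (the device name here is illustrative), you can remount it with `nosuid` and `nodev`:

sudo mount -o remount,nosuid,nodev /var/lib/docker

or make it permanent in /etc/fstab:

/dev/nvme0n1p1  /var/lib/docker  ext4  defaults,nosuid,nodev  0 2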
2. Immutable Infrastructure: Read-Only Containers
If your application gets hacked, the attacker will try to write a backdoor. Make that impossible. If your container doesn't need to write to disk, run it as read-only.
For a stateless Nginx frontend, your run command should look like this:
docker run -d -p 80:80 \
--read-only \
--tmpfs /run \
--tmpfs /tmp \
--cap-drop ALL \
--cap-add NET_BIND_SERVICE \
--name frontend \
nginx:alpine
Breakdown of flags:
- `--read-only`: The container root filesystem is immutable.
- `--tmpfs`: In-memory storage for temporary files (PID files, logs) so the app doesn't crash.
- `--cap-drop ALL`: Drops all Linux capabilities (audit control, kernel module loading, etc.).
- `--cap-add NET_BIND_SERVICE`: Adds back only the ability to bind to port 80.
This drastically reduces the attack surface. Even if an attacker finds an exploit in Nginx, they can't write a shell script to the disk.
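A quick way to prove the point to yourself, using the container name from the example above (the exact error text may vary by image):

docker exec frontend touch /usr/share/nginx/html/pwned
# touch: /usr/share/nginx/html/pwned: Read-only file system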
3. The "Noisy Neighbor" & I/O Starvation
Security isn't just about hackers; it's about availability. In a containerized environment, one heavy container can starve others of I/O. I've seen Elasticsearch containers bring an entire host to its knees during re-indexing.
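Docker's cgroup-backed blkio controls can blunt the worst of it (device path and limits below are illustrative, tune them for your host), but they only ration a slow disk, they don't make it faster:

docker run -d \
  --blkio-weight 300 \
  --device-read-bps /dev/sda:20mb \
  --device-write-bps /dev/sda:20mb \
  --name es-reindex \
  elasticsearch:2.4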
This is where hardware choice matters. Spinning rust (HDD) in 2016 is unacceptable for container workloads. The random I/O patterns of 50 containers will crush a SATA drive.
At CoolVDS, we use Pure NVMe Storage arrays. NVMe handles the high queue depths of parallel container operations without the latency spikes associated with SSDs over SATA. When you are serving an API to a client in Bergen, latency is the difference between a sale and a bounce.
4. Network Isolation & The Norwegian Context
Don't use the default `docker0` bridge. It allows ARP spoofing between containers. Create specific networks for specific services.
docker network create --driver bridge --subnet 172.18.0.0/16 backend_net
docker run --net backend_net ...
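For backend services that never need to reach the internet at all, the `--internal` flag goes one step further and drops outbound access entirely (image and names here are illustrative):

docker network create --driver bridge --internal db_net
docker run -d --net db_net --name db postgres:9.6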
Furthermore, consider where your data physically lives. With the invalidation of the Safe Harbor agreement last year and the uncertainty of the Privacy Shield, storing data outside the EEA is risky. The Datatilsynet (Norwegian Data Protection Authority) is clear about data controller responsibilities.
Hosting on CoolVDS guarantees your data stays in our Oslo data center, protected by Norwegian privacy laws, not subject to foreign subpoenas.
5. The Ultimate Defense: KVM + Docker
Containers are great for deployment, but they are weak for isolation. This is why the "Bare Metal Container" trend is dangerous for multi-tenant setups.
The architecture we recommend for high-security environments is Nested Isolation:
| Layer | Technology | Purpose |
|---|---|---|
| L1: Hardware | CoolVDS Dedicated Core | Physical resource guarantee. |
| L2: Virtualization | KVM (Kernel-based Virtual Machine) | Hard kernel isolation. Your kernel is yours alone. |
| L3: Application | Docker 1.12+ | Dependency management and deployment speed. |
By running Docker inside a KVM VPS, you gain the portability of containers with the security boundary of a hypervisor. If the kernel inside your VPS panics, only your VPS reboots, not the physical host. If a neighbor on the physical node gets DDoS'd, our hardware firewalls and KVM resource scheduler ensure your CPU cycles aren't stolen.
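From inside the guest you can sanity-check both layers. The output shown is what you would expect on such a setup, not a guarantee:

systemd-detect-virt                          # kvm
docker info | grep -i 'security options'     # Security Options: apparmor seccomp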
Conclusion
The tools to secure Docker exist in 2016, but they are not turned on by default. It requires effort. It requires configuring AppArmor profiles, remapping user namespaces, and choosing a hosting partner that understands the difference between "cheap" and "secure."
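On the AppArmor point: Docker ships a `docker-default` profile that you can pin explicitly per container; custom profiles would need to be loaded with apparmor_parser first. A minimal sketch:

docker run -d --security-opt apparmor=docker-default nginx:alpine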
Don't wait for the next Dirty COW to realize your architecture is fragile.
Ready to harden your stack? Deploy a KVM-based, NVMe-powered instance on CoolVDS today. Total root control, low latency to NIX, and the stability your DevOps team deserves.