The Container Security Minefield: Why Your Docker Setup is Probably Vulnerable
It is July 2015, and the noise around Docker has reached a fever pitch. I walked into a meeting in Oslo last week where a development team proudly showed me their new "microservices" architecture. They had 15 containers running on a single bare-metal box, orchestrated with a fragile web of shell scripts. When I asked about security, the lead dev shrugged and said, "It's isolated, right? It's in a container."
I nearly flipped the table. Containers are not Virtual Machines.
If you are treating Docker like a lightweight VM, you are playing Russian Roulette with your infrastructure. I have spent the last decade fighting fires in server rooms from Bergen to Berlin, and I can tell you: the way most teams are deploying Docker right now (pulling untrusted images and running them as root) is a disaster waiting to happen.
Here is the cold, hard reality of container security in 2015, and how you can fix it before Datatilsynet (The Norwegian Data Protection Authority) comes knocking.
1. The Daemon is God (And That's Bad)
Here is the dirty secret: The Docker daemon runs as root. If a malicious user (or a compromised web app inside a container) manages to talk to the Docker socket (/var/run/docker.sock), they own the host. They don't just own the container; they own the kernel, the filesystem, and every other customer on that metal if you aren't isolated properly.
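To make the risk concrete, here is a sketch of the anti-pattern; the image name some-ci-agent is made up for illustration. Anything that can write to that socket can ask the daemon to start new containers on its behalf, including fully privileged ones with the host filesystem mounted:

# Handing the socket to a container hands it control of the daemon, i.e. root on the host
docker run -d -v /var/run/docker.sock:/var/run/docker.sock some-ci-agent

# On the host, check who besides root can reach the socket at all
ls -l /var/run/docker.sock
getent group docker   # every member of this group is effectively root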
In a standard VPS environment based on OpenVZ (which many budget hosts still use), this is a nightmare. But this is why I architect exclusively on KVM (Kernel-based Virtual Machine), like the stack CoolVDS uses. With KVM, your containers run inside a true virtual machine with its own kernel. If a container panics the kernel, only your VM goes down, not the whole rack. It limits the "blast radius" effectively.
2. Drop Those Capabilities
By default, Docker grants a lot of Linux capabilities to a container that a web server simply doesn't need. Does your Nginx container need to modify network interfaces (NET_ADMIN)? No. Does it need to mount filesystems (SYS_ADMIN)? Absolutely not.
The most effective "pro move" you can make today is dropping all capabilities and adding back only what you need. Stop running containers with just docker run -d my-app. Do this instead:
docker run -d --cap-drop=ALL --cap-add=NET_BIND_SERVICE nginx
This strips the container of its superpowers. Even if an attacker exploits a vulnerability in the application code (like the nasty Shellshock bugs we saw last year), they will find themselves trapped in a box with no tools to break out.
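If you want proof that the drop actually took, read the capability masks of the container's PID 1. A quick sketch, assuming the container was started as above and named web:

docker run -d --name web --cap-drop=ALL --cap-add=NET_BIND_SERVICE nginx

# The Cap* lines are hex bitmasks of the capabilities left to the process
docker exec web grep Cap /proc/1/status

# Decode a mask on the host (requires the libcap tools); NET_BIND_SERVICE alone is 0x400
capsh --decode=0000000000000400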
3. The "Digital Supply Chain" is Poisoned
I see this in Dockerfile audits all the time:
FROM ubuntu:latest
Do you know what is in latest? Do you know who pushed it? The Docker Hub is the Wild West right now. Just because an image has 500 stars doesn't mean it hasn't been sitting unpatched for six months, riddled with OpenSSL vulnerabilities.
The Fix: Build your own base images. Start with a minimal OS like CoreOS or Alpine Linux (which is gaining traction fast). Scan them. If you must use public images, pin the image by its content digest, never by a mutable tag.
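Pinning by digest has been possible since Docker 1.6, assuming your registry speaks the v2 protocol. The digest below is a placeholder; use the value docker pull actually prints for the image you vetted:

# Pull once by tag and note the content digest Docker reports
docker pull ubuntu:14.04
# ... Digest: sha256:<digest-printed-by-docker>

# From then on, reference the immutable digest instead of a mutable tag
docker pull ubuntu@sha256:<digest-printed-by-docker>

# Same idea in your Dockerfile
FROM ubuntu@sha256:<digest-printed-by-docker>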
4. Network Isolation: The "--link" Legacy
We are all waiting for the networking overhauls promised in future Docker versions (and the upcoming Kubernetes 1.0 looks promising for orchestration), but right now, many of you are using the legacy --link flag to connect database and web containers. Be careful.
When you link containers, you are often exposing environment variables containing passwords directly to the linked container. If your frontend gets popped, the attacker can just read the environment variables to get your MySQL root password.
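Here is a hedged sketch of what that leak looks like; the container names, the my-frontend image, and the password are all made up:

docker run -d --name db -e MYSQL_ROOT_PASSWORD=supersecret mysql
docker run -d --name web --link db:db my-frontend

# Linking injects the db container's environment into web under the alias prefix
docker exec web env | grep DB_ENV
# DB_ENV_MYSQL_ROOT_PASSWORD=supersecret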
Pro Tip: Don't pass secrets in environment variables. Mount them as read-only volumes from the host. It's clunky, but in 2015, it's the safest way to keep credentials out of docker inspect.
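A minimal sketch of that approach, assuming your app can read its credentials from a file; the host path, mount point, and my-frontend image are illustrative:

# Keep the secret in a root-owned file on the host
echo 'supersecret' > /etc/myapp/db_password
chmod 0400 /etc/myapp/db_password

# Mount it read-only instead of passing it with -e; it won't show up in docker inspect
docker run -d --name web \
  -v /etc/myapp/db_password:/run/secrets/db_password:ro \
  my-frontend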
5. The Norwegian Context: Data Sovereignty
We are living in the post-Snowden era. European Safe Harbor agreements are looking shakier by the day (keep an eye on the Schrems case in the EU courts). If you are hosting data for Norwegian clients, especially health or financial data, you cannot rely on US-owned cloud giants where the Patriot Act applies.
This is where the infrastructure choice becomes a legal one. You need a provider with physical servers in Oslo, subject to Norwegian law. Using a CoolVDS instance in our Oslo data center ensures that your data doesn't accidentally traverse a transatlantic cable. Latency to the Norwegian Internet Exchange (NIX) is practically zero, which makes your API responses snappy, but more importantly, you stay compliant.
The Verdict: Hardening Checklist
Before you deploy that container to production this week, run through this list:
- Kernel Isolation: Are you running on bare metal or a shared kernel? Move to a KVM-based VPS (like CoolVDS) immediately to secure your kernel space.
- Read-Only Filesystems: Use the --read-only flag wherever possible. If an attacker can't write to disk, they can't download a rootkit (see the sketch after this list).
- Update the Host: A container is only as secure as the host kernel.
- Benchmark: Run the CIS Docker Benchmark (released back in May). It will hurt your feelings, but it will save your job.
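For the read-only item above, here is a sketch with Nginx. The root filesystem becomes immutable, so you hand the process writable volumes only for the few paths it genuinely needs; the exact paths depend on your image:

# Immutable root filesystem; anonymous volumes stay writable for nginx's cache and pid file
docker run -d --read-only \
  -v /var/cache/nginx \
  -v /var/run \
  nginx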
Containerization is the future, but don't let the hype blind you to the risks. Secure your foundation first. If you need a sandbox that can actually withstand a kernel panic, spin up a CoolVDS NVMe instance. It takes 55 seconds, and it might just save you a 3 AM wake-up call.