You Are Probably Deploying Vulnerabilities
Let’s be honest. The hype around Docker right now is deafening. Everyone from small startups in Grünerløkka to enterprise giants is rushing to "containerize" everything. And I get it. The ability to package dependencies and ship consistent environments from dev to prod is powerful.
But there is a dirty secret in our industry right now: Most Docker deployments are wildly insecure.
I recently audited a setup for a client in Oslo. They were running a customer-facing dashboard in a container. It looked fine on the surface. But a quick check of the Dockerfile revealed they were running the application as root. Even worse, they hadn't limited kernel capabilities. If an attacker managed to exploit a vulnerability in their application code, they wouldn't just be inside the container—they’d have a straight shot at the host kernel.
In this post, we are going to fix that. We are going to look at how to lock down Docker 1.8, why the underlying architecture of your VPS matters more than you think, and how to keep Datatilsynet happy.
The "Root" of the Problem
By default, the Docker daemon runs as root, and if you don't specify a user inside your container, the process inside runs as root too. Inside a container, "root" is the same user ID (0) as root on the host machine. Linux namespaces provide isolation, but they are not a perfect sandbox yet: a kernel panic triggered from inside a container takes down the host and every other container on it, because there is only one kernel.
1. Create a Dedicated User
This is the lowest-hanging fruit, yet 90% of the Dockerfiles I see ignore it. Stop running your Node.js or Python apps as root. It takes two lines to fix this.
FROM ubuntu:14.04

# Create an unprivileged system user and group for the app
RUN groupadd -r coolapp && useradd -r -g coolapp coolapp

# Everything from here on runs as coolapp, not root
USER coolapp
CMD ["./start.sh"]
By switching to a standard user, you massively reduce the blast radius if your application is compromised.
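A quick way to keep yourself honest is a lint pass over your Dockerfiles. Here is a hypothetical helper (`audit_user` is my name for it, not a standard tool) that flags any Dockerfile lacking a non-root USER instruction:

```shell
# audit_user: warn about Dockerfiles whose process will run as root.
# A rough sketch -- it only checks for a USER instruction, nothing more.
audit_user() {
  for f in "$@"; do
    # Pass if the file contains a USER line that is not "root" or UID 0
    if grep -Ei '^[[:space:]]*USER[[:space:]]' "$f" \
       | grep -Eviq '^[[:space:]]*USER[[:space:]]+(root|0)[[:space:]]*$'; then
      echo "OK:   $f switches to a non-root user"
    else
      echo "WARN: $f runs as root"
    fi
  done
}

# Example: audit_user Dockerfile services/*/Dockerfile
```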
Kernel Capabilities: The Principle of Least Privilege
Linux divides the privileges traditionally associated with the superuser into distinct units, known as capabilities. By default, Docker drops many of these, but it still leaves too many enabled for a web application.
Does your Nginx server really need SYS_CHROOT or MKNOD? Probably not.
The best practice right now is to drop all capabilities and only add back the specific ones you need. Here is how I start my containers:
docker run -d -p 80:80 \
--cap-drop=ALL \
--cap-add=NET_BIND_SERVICE \
--name web_server \
nginx:1.9
This command strips the container of all privileges, then adds back NET_BIND_SERVICE just so it can bind to port 80. If an attacker breaks in, they will find themselves in a straitjacket.
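You can verify what a process actually kept. The kernel exposes each process's effective capability mask in /proc/<pid>/status, and NET_BIND_SERVICE is capability bit 10 (per linux/capability.h). A sketch that decodes that bit for the current shell; inside a container, check PID 1 instead:

```shell
# Read the effective capability bitmask (hex) for this process.
capeff=$(awk '/^CapEff:/ {print $2}' /proc/self/status)

# CAP_NET_BIND_SERVICE is bit 10; shift and mask to test it.
if [ $(( (0x$capeff >> 10) & 1 )) -eq 1 ]; then
  echo "NET_BIND_SERVICE: enabled"
else
  echo "NET_BIND_SERVICE: dropped"
fi
```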
Infrastructure Matters: The Case for KVM
This is where your choice of hosting provider becomes a security decision.
Many budget VPS providers in Europe still use OpenVZ. In an OpenVZ environment, you are sharing the host kernel directly with other customers on the same physical server. You cannot load your own kernel modules, and you rely entirely on the host's kernel version. Docker on OpenVZ is often a hacky, unstable experience requiring old kernel versions (like 2.6.32) that lack modern cgroup features.
This is a security risk you cannot afford.
Pro Tip: Always ask your provider about their virtualization technology. If it's not hardware-virtualized (like KVM, Xen, or VMware), walk away.
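If you already have a shell on the box, you don't have to take the provider's word for it. A rough heuristic sketch (OpenVZ containers expose /proc/vz but not the host-only /proc/bc; most hardware hypervisors set the "hypervisor" CPU flag, though Xen PV guests notably don't):

```shell
# Distinguish an OpenVZ container from a hardware-virtualized guest.
if [ -d /proc/vz ] && [ ! -d /proc/bc ]; then
  echo "OpenVZ container (shared kernel -- avoid for Docker)"
elif grep -qi '^flags.*hypervisor' /proc/cpuinfo; then
  echo "Hardware-virtualized guest (KVM, Xen HVM, VMware, ...)"
else
  echo "Bare metal, or a hypervisor that hides the flag (e.g. Xen PV)"
fi
```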
At CoolVDS, we exclusively use KVM (Kernel-based Virtual Machine) for our VPS Norway instances. This means your VPS has its own isolated kernel. If you run Docker on CoolVDS, you have two layers of defense:
- Container isolation (namespaces/cgroups)
- Hypervisor isolation (KVM)
This architecture is critical for compliance with the Norwegian Personal Data Act (Personopplysningsloven). You need to be able to prove to auditors that your data is segregated.
Data Sovereignty and Latency
Speaking of compliance, let's talk about where your bits actually live. With the Safe Harbor framework looking increasingly shaky (lots of chatter in legal circles right now regarding the Schrems case), keeping data within the EEA is smart. Keeping it in Norway is even better.
Hosting your container cluster on servers physically located in Oslo offers two advantages:
- Legal Safety: Your data falls under Norwegian jurisdiction, which has some of the strictest privacy protections in the world.
- Performance: Latency matters. Pinging a server in Frankfurt from Oslo takes ~25-30ms. Pinging a CoolVDS server in Oslo takes ~2ms. For database-heavy applications, that round-trip time accumulates.
Immutable Infrastructure
One final technique we are using more in 2015 is the concept of Read-Only Containers. If your application is stateless (which it should be), why give it write access to the filesystem?
docker run -d --read-only -v /tmp --name stateless_app my_image
This forces the container's root filesystem to be read-only. An attacker cannot modify binaries, download script kiddie tools, or edit config files. It forces you to be disciplined with your data persistence, utilizing Volumes for the data that actually matters.
Conclusion
Containers are the future, but don't let the convenience blind you to the risks. In 2015, the tooling is still young, and security is often an afterthought in the documentation.
Secure your users, drop your capabilities, and ensure your foundation is solid. Running secure containers on a shared-kernel VPS is like putting a steel door on a tent. You need the hardware isolation that KVM provides.
Ready to lock down your infrastructure? Deploy a KVM-based instance on CoolVDS today and get single-digit latency to your Norwegian customers.