Docker in Production: Security Survival Guide for the Paranoid Sysadmin (2015 Edition)


Let’s be honest for a second. The ECJ ruling earlier this month invalidating the Safe Harbor agreement (Schrems I) has sent shockwaves through every IT department in Europe. If you are hosting customer data on US-controlled clouds right now, you aren't just taking a technical risk; you are taking a legal one. The Norwegian Data Protection Authority (Datatilsynet) is not known for its sense of humor regarding data sovereignty.

Combine this geopolitical mess with the explosive adoption of Docker in 2015, and we have a perfect storm. Developers love containers because they work on their laptops. As a Sysadmin, I look at Docker and I see a daemon running as root exposing a REST API that can own my entire host system. It’s terrifying.

I’ve spent the last six months migrating a high-traffic media streaming platform from bare metal to a containerized infrastructure. Here is the hard truth: containers do not contain. Not by default. If you simply run docker run -d my-app, you are one kernel exploit away from a total breach. Here is how we lock it down, keeping performance high and the auditors happy.

1. The "Root" of All Evil

By default, the process inside your container runs as root. If that process breaks out of the container (and there have been plenty of proof-of-concepts this year), the attacker is root on your host. Game over.

You must enforce a non-privileged user inside the Dockerfile. If you are building images, create a specific user.

# Inside your Dockerfile
RUN groupadd -r app && useradd -r -g app app
USER app

If you are running a third-party image that wasn't built well, override the user at runtime. Never trust the default.

docker run -u 1000:1000 -d nginx

Pro Tip: Do not map the Docker socket (/var/run/docker.sock) inside a container unless you absolutely know what you are doing (like running a CI agent). Giving a container access to the socket is the equivalent of giving it the root password to the host.
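It is easy to *intend* non-root and still ship a root container. Here is a minimal audit loop I use to catch offenders; it assumes the Docker CLI (1.8+) is on your PATH, and an empty `.Config.User` means the process is running as root:

```shell
#!/bin/sh
# Flag every running container whose config leaves User unset (i.e. root).
for c in $(docker ps -q); do
  user=$(docker inspect --format '{{.Config.User}}' "$c")
  name=$(docker inspect --format '{{.Name}}' "$c")
  if [ -z "$user" ]; then
    echo "WARNING: $name ($c) is running as root"
  fi
done
```

Drop it in cron and pipe the output to your alerting of choice.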

2. Kernel Capabilities: Drop 'Em Like It's Hot

The Linux kernel divides root privileges into distinct units called capabilities. A web server does not need to load kernel modules (CAP_SYS_MODULE) or change system time (CAP_SYS_TIME). Yet, Docker gives them these capabilities by default.

The most paranoid (and correct) approach is to drop all capabilities and add back only what is strictly necessary. This drastically reduces the attack surface.

docker run --cap-drop=ALL --cap-add=NET_BIND_SERVICE nginx

In our tests, this had zero impact on latency or throughput for our Nginx frontends, but it effectively neutralized several potential privilege escalation vectors.
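Don't take the flags on faith; verify them. The kernel exposes the effective capability mask as the hex `CapEff` field in /proc/self/status, so you can compare a default container against a stripped one. A quick sketch, assuming the stock nginx image (which ships grep):

```shell
# Default: CapEff shows the full default Docker capability set.
docker run --rm nginx grep CapEff /proc/self/status

# Stripped: only bit 10 (CAP_NET_BIND_SERVICE = 0x400) should remain set.
docker run --rm --cap-drop=ALL --cap-add=NET_BIND_SERVICE nginx \
  grep CapEff /proc/self/status
```

If the second mask isn't dramatically smaller than the first, your orchestration layer is silently dropping the flags.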

3. The Isolation Lie: OpenVZ vs. KVM

This is where your choice of hosting provider becomes a security decision. Many budget VPS providers in Norway are still pushing OpenVZ (container-based virtualization). This is fine for a cheap personal blog, but it is suicide for Docker.

In OpenVZ, all VPS instances share the same host kernel. You cannot run a different kernel version. If you run Docker inside OpenVZ, you are effectively nesting containers. If a kernel panic happens in one container, it can destabilize the whole node. Furthermore, you often run into missing kernel modules (like bridge or veth) required for Docker networking.
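Before you fight Docker networking errors for an afternoon, check whether the modules exist at all. Run this as root on the candidate VPS; on OpenVZ guests the modprobe calls will typically fail because you cannot touch the shared host kernel:

```shell
#!/bin/sh
# Verify the kernel modules Docker needs for bridged networking.
for m in bridge veth; do
  if modprobe "$m" 2>/dev/null; then
    echo "$m: OK"
  else
    echo "$m: MISSING (is this an OpenVZ guest?)"
  fi
done
lsmod | grep -E '^(bridge|veth)'
```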

You need a true Hypervisor.

We strictly deploy on KVM (Kernel-based Virtual Machine) infrastructures like CoolVDS. KVM provides hardware virtualization. Your VPS has its own kernel. If your container crashes your kernel, only your VM goes down, not the neighbors. More importantly, it provides a hard boundary between your data and other tenants. With the death of Safe Harbor, showing your clients that their data resides on an isolated KVM instance in Oslo—rather than a shared kernel soup—is a massive selling point.

Comparison: Docker Host Environments

| Feature              | OpenVZ / LXC VPS           | KVM VPS (CoolVDS)        |
|----------------------|----------------------------|--------------------------|
| Kernel Isolation     | Shared (Weak)              | Dedicated (Strong)       |
| Docker Compatibility | Poor (Old kernels)         | Native (Run any kernel)  |
| IOPS Performance     | Variable (Noisy neighbors) | Consistent (Dedicated)   |
| Security             | High Risk                  | Enterprise Grade         |

4. Network Defense and the "World" Problem

When you bind a port using -p 8080:80, Docker modifies iptables to forward traffic from anywhere (0.0.0.0). I’ve seen developers accidentally expose internal administrative dashboards to the public internet because they forgot this detail.
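You can see exactly what Docker has punched open by reading its NAT chain directly. Any DNAT rule matching destination 0.0.0.0/0 is reachable from the entire internet:

```shell
# List the NAT rules Docker manages for published ports (run as root).
iptables -t nat -L DOCKER -n
```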

If a service should only be accessed locally or by a reverse proxy on the same host, bind it to localhost specifically:

docker run -p 127.0.0.1:8080:80 -d my-internal-service

For cross-container communication without exposing ports, use Docker links (legacy but stable) or the newer Docker networking model. Docker 1.9 with its network commands is just around the corner, but --link is battle-tested in 1.8.
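Here is what that looks like in practice. A hypothetical sketch (my-app and the credentials are placeholders): the database publishes no port at all, the web tier reaches it through the link, and only the proxy port touches localhost:

```shell
# Database: no -p flag, so nothing is exposed outside the Docker bridge.
docker run -d --name db -e POSTGRES_PASSWORD=changeme postgres:9.4

# App: --link injects db's address; inside "web" the hostname "db" resolves
# to the database container. Only 127.0.0.1:8080 is bound on the host.
docker run -d --name web --link db:db -p 127.0.0.1:8080:80 my-app
```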

5. Immutable Infrastructure and Read-Only Filesystems

If your container gets hacked, the attacker will try to write a backdoor. Make their life miserable. Run your containers with a read-only filesystem whenever possible.

docker run --read-only -v /my/data:/data:rw my-app

This forces you to be disciplined about where you persist data (volumes only). It separates code from state, which is the holy grail of reliability.
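In practice almost every application still needs a couple of writable paths (scratch space, PID files). Anonymous volumes cover those without sacrificing the read-only root. A sketch, with example paths you would adjust for your own app:

```shell
# Read-only root, explicit writable data volume, plus an anonymous
# writable volume for /tmp so the app can still create scratch files.
docker run -d --read-only \
  -v /srv/app-data:/data:rw \
  -v /tmp \
  my-app
```

If the container crashes on startup, check its logs for "Read-only file system" errors; each one tells you exactly which path needs a volume.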

Conclusion: Performance Meets Paranoia

Security is often a trade-off with convenience, but it shouldn't be a trade-off with performance. By stripping capabilities and enforcing read-only systems, we actually make our applications more predictable.

However, software hardening is useless if the foundation is shaky. Hosting Docker on legacy container virtualization is a risk I'm not willing to take in 2015. You need the hardware isolation of KVM combined with the speed of Pure SSD storage to handle the I/O tax of container layers.

When we build compliant infrastructure for Norwegian clients, we use CoolVDS not because it's flashy, but because it gives us raw KVM instances and low latency to NIX. It’s the closest thing to bare metal control without the procurement headache.

Stop running root containers on shared kernels. Lock it down.

Deploy a hardened KVM Instance on CoolVDS in Oslo today.