Docker in Production: Security Hardening for the Paralyzed DevOps

It has been a rough month for anyone managing infrastructure in Europe. On October 6th, the European Court of Justice invalidated the Safe Harbor agreement. If you are blindly dumping customer data into US-owned clouds, you are now legally exposed. Combine this with the fact that every developer suddenly wants to "Dockerize" everything because it works on their laptop, and we have a recipe for a security nightmare.

I love Docker. It solves dependency hell. But let’s be honest: the default configuration of the Docker daemon is not designed for hostile multi-tenant environments. In 2015, speed is king, but if you prioritize deployment velocity over isolation, you are going to have a bad time.

Here is the reality: Containers are not Virtual Machines. They are essentially fancy chroot environments sharing a single kernel. If you are running Docker on a cheap OpenVZ VPS (which many budget hosts in Norway still push), a kernel panic in one container can take down the entire node. Worse, a kernel exploit means the attacker owns the host.

This guide covers how to lock down Docker 1.8+ specifically for high-compliance environments, and why the underlying hardware virtualization matters more than your Dockerfile.

1. The "Root" of All Evil

By default, the Docker daemon runs as root. If a user breaks out of the container, they are root on your host. While user namespaces are being discussed in the community for future releases, right now in late 2015, we have to assume that root inside the container equals root on the host.

The first line of defense is ensuring your containers run with the absolute minimum privileges necessary. Do not just run everything as root because it's easier.
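Even before touching capabilities, you can stop running the application process as root inside the container at all. A minimal sketch — the UID/GID and image name are placeholders, not a real deployment:

```shell
# Run the container process as an unprivileged UID instead of root.
# UID/GID 1000 and "my-app" are illustrative -- match them to your image.
docker run -d --user 1000:1000 my-app

# Better still: bake the user into the image (Debian-based example):
#   RUN groupadd -r app && useradd -r -g app app
#   USER app
```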

Drop Capabilities

The Linux kernel divides root privileges into distinct units called capabilities. A web server does not need to mess with the system clock (CAP_SYS_TIME) or load kernel modules (CAP_SYS_MODULE). Docker allows you to drop these.

In your run command, explicitly drop all capabilities and add back only what is needed:

docker run --cap-drop=ALL --cap-add=NET_BIND_SERVICE -d nginx

This command ensures that even if an attacker gains shell access inside the container, they cannot modify network settings or mount filesystems.
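You can sanity-check the result by reading the effective capability mask from /proc inside the container:

```shell
# With grep running as PID 1, /proc/1/status reports the capabilities the
# container itself was granted. With only NET_BIND_SERVICE (capability
# number 10) retained, CapEff should show just that bit: 0000000000000400.
docker run --rm --cap-drop=ALL --cap-add=NET_BIND_SERVICE nginx \
    grep CapEff /proc/1/status
```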

2. Immutable Infrastructure: Read-Only Filesystems

If your application is stateless (and it should be), there is no reason for the container's root filesystem to be writable. Attacks often rely on downloading a payload or modifying a configuration file. If the filesystem is read-only, wget exploits hit a brick wall.

Run your production containers with the --read-only flag:

docker run --read-only -v /my/data:/data:rw my-app

This forces you to be disciplined about where you store persistent data (hint: use a volume mount), keeping the application binary layer pristine.
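In practice most apps still need one or two writable paths (data, logs). A sketch with illustrative host paths:

```shell
# Read-only root filesystem, with explicit writable volumes for the few
# paths that legitimately need writes. Host paths here are examples only.
docker run -d --read-only \
    -v /srv/app/data:/data:rw \
    -v /srv/app/logs:/var/log/app:rw \
    my-app

# Sanity check: any other write should fail with "Read-only file system".
docker run --rm --read-only busybox touch /etc/probe
```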

3. The Hardware Layer: OpenVZ vs. KVM

This is where most "cloud" setups fail. Many providers oversell resources using OpenVZ. In OpenVZ, all VPS instances share the same host kernel. You cannot tune kernel parameters (sysctl) securely for your specific container needs without affecting neighbors.

If you are running Docker, you need a KVM (Kernel-based Virtual Machine) VPS. KVM provides full hardware virtualization. Your VPS has its own kernel.

Pro Tip: Check your virtualization type. Run virt-what or look at uname -a. If you see "stab" or restrictions on loading modules, you are likely on OpenVZ. Migrate immediately.
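A rough detection snippet — virt-what may not be installed, so it falls back to cheaper checks (an OpenVZ guest exposes /proc/vz, and its kernel release string typically contains "stab"):

```shell
# Rough virtualization check. /proc/vz exists inside OpenVZ guests;
# /proc/bc appears only on the OpenVZ host node itself.
if [ -d /proc/vz ] && [ ! -d /proc/bc ]; then
    echo "OpenVZ guest: shared kernel, no Docker-friendly isolation"
elif command -v virt-what >/dev/null 2>&1; then
    virt-what 2>/dev/null || uname -r   # prints "kvm" on a KVM instance
else
    uname -r   # a release string containing "stab" points to OpenVZ
fi
```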

At CoolVDS, we only deploy KVM instances. Why? Because when you are running a high-traffic Magento store or a Dockerized microservice architecture, you need to know that your memory is actually yours. No "burst" RAM marketing tricks. Plus, for Docker, KVM allows you to run the specific kernel version required by the latest Docker engine without waiting for the host provider to upgrade their node.

4. Network Isolation and the "Link" Legacy

Overlay networking is still maturing, so the classic --link flag remains the de facto standard for now. But linking has a nasty side effect: it injects the source container's environment variables — including any database passwords you passed with -e — into the linked container.

Instead of relying solely on links, use the ambassador pattern, or bind services to localhost and tunnel when remote access is strictly necessary. If you are on a single host, make sure your Docker bridge (docker0) is not exposing ports on the public interface unless you explicitly mapped them.

Check your iptables to ensure Docker hasn't punched a hole you didn't expect:

iptables -L -n | grep DOCKER
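If a container only needs to be reachable from the host itself (or over an SSH tunnel), publish the port on loopback rather than all interfaces. The image and port below are illustrative:

```shell
# Publish PostgreSQL on the loopback interface only; the public interface
# never sees the port, so no iptables hole is punched for 0.0.0.0.
docker run -d -p 127.0.0.1:5432:5432 postgres

# The DNAT rules Docker creates live in the nat table -- verify there too:
iptables -t nat -L DOCKER -n
```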

5. Data Sovereignty and Performance

With Safe Harbor gone, keeping data within the EEA (European Economic Area) is critical. Norway, while not in the EU, is part of the EEA, and its Data Protection Authority (Datatilsynet) enforces the Data Protection Directive strictly.

However, encryption and security add overhead. This is where I/O becomes the bottleneck. Containers are lightweight, but if you have fifty containers trying to write logs simultaneously to a standard SATA drive, your iowait will skyrocket.

Benchmark: SATA vs NVMe

Storage Type          Random Read IOPS   Avg. Container Boot Time
Standard VPS (SATA)   ~300-500           2.1 s
CoolVDS (NVMe)        ~10,000+           0.4 s

We are one of the few providers in 2015 rolling out NVMe storage. When you are restarting a fleet of containers during a deploy, that difference between 2 seconds and 0.4 seconds adds up significantly.
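You can reproduce this class of numbers on your own node with fio. The parameters below are a reasonable starting point, not our exact benchmark methodology:

```shell
# 4 KiB random reads with direct I/O (bypassing the page cache) against a
# 1 GB test file. Watch the "iops" figure in fio's summary output.
fio --name=randread --rw=randread --bs=4k --size=1G \
    --runtime=30 --time_based --direct=1 \
    --filename=/var/tmp/fio-test --group_reporting
rm -f /var/tmp/fio-test
```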

Conclusion

Docker is transforming how we deploy, but it requires a shift in security mindset. You cannot rely on the default settings.

  1. Use KVM-based hosting to ensure kernel isolation.
  2. Drop capabilities and use read-only filesystems.
  3. Keep your data in Norway or the EEA to avoid the post-Safe Harbor legal mess.

If you are tired of noisy neighbors and need a platform that respects the raw I/O demands of containerized workloads, it’s time to upgrade.

Don't let slow I/O kill your innovative stack. Deploy a KVM instance on CoolVDS in 55 seconds and see the difference.