
Docker in Production: Taming the Security Beast Before It Bites

Let’s be honest: the release of Docker 1.0 this past June changed everything. The ability to ship code in a consistent artifact from laptop to production is the holy grail we have been chasing since the early days of chroot jails. But as a Systems Architect watching the frenzy unfold, I am seeing some terrifying configurations out there. I recently audited a setup where a developer had mounted the host’s root filesystem into a container running a public-facing Nginx instance. One vulnerability, and that server is gone.

The reality is that containers in 2014 are not virtual machines. They are shared-kernel constructs using namespaces and cgroups. If the kernel panics, the ship goes down. If you are deploying containers on bare metal or shared hosting without a hypervisor layer, you are playing Russian Roulette with your data.

Here is how we lock down the new container ecosystem, specifically for those of us operating under the watchful eye of Norway’s Datatilsynet.

1. The "Shellshock" Wake-Up Call

September was a brutal month for sysadmins. The Bash vulnerability (CVE-2014-6271, aka Shellshock) proved exactly why container security is hard. We saw teams patch their host operating systems immediately, thinking they were safe. They forgot one critical detail: the containers contain their own userland.

If your Docker image is based on an old Ubuntu 12.04 or CentOS 6 build containing a vulnerable version of Bash, your patched host kernel won't save you. The attacker enters through the containerized application, exploits Bash inside the container, and if you haven't dropped capabilities, they might just break out.

The Fix: You must audit the packages inside your images. Do not rely on `latest`. Pin your versions and rebuild often.

# Check for Shellshock inside a running container
docker run -it ubuntu:14.04 bash -c "env x='() { :;}; echo vulnerable' bash -c 'echo test'"

# If it prints "vulnerable", rebuild your image immediately with:
RUN apt-get update && apt-get install -y --only-upgrade bash

2. Stop Running as Root (Seriously)

By default, the process inside a Docker container runs as root. Since containers share the kernel with the host, a root process inside the container effectively has root access to the kernel syscalls. While namespaces provide isolation, they are not bulletproof in 2014. There have been proof-of-concept exploits showing breakouts via `open_by_handle_at`.

Whenever possible, create a user inside your Dockerfile and switch to it.

# Dockerfile best practice
FROM debian:wheezy

# Create a non-root user
RUN groupadd -r myapp && useradd -r -g myapp myapp

# Set permissions
COPY . /src
RUN chown -R myapp:myapp /src

# Switch user
USER myapp

CMD ["python", "/src/app.py"]

If you absolutely must run a service that requires root (like some legacy daemons), drop the Linux capabilities you don't need. Docker 1.2+ gives us fine-grained control over this.

docker run --cap-drop=ALL --cap-add=NET_BIND_SERVICE -d nginx
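
To verify the result from inside the container, compare `grep CapEff /proc/1/status` against the mask you expect. A small helper to compute that mask; capability number 10 is `CAP_NET_BIND_SERVICE` per `linux/capability.h`:

```shell
# Expected CapEff value when every capability is dropped except one:
# a 64-bit mask with only that capability's bit set, printed the way
# /proc/<pid>/status shows it (16 hex digits, zero-padded).
cap_mask() {
  printf '%016x\n' $((1 << $1))
}

cap_mask 10   # CAP_NET_BIND_SERVICE -> 0000000000000400
```

If `CapEff` in the container reads anything other than that value, a capability you did not intend to grant is still live.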

3. The "CoolVDS" Factor: Why KVM Matters

This brings us to the infrastructure layer. If you are running Docker on a cheap OpenVZ VPS, you are essentially nesting containers. OpenVZ relies on a shared kernel. Docker relies on a shared kernel. This "Inception" style layering is a nightmare for performance and security isolation. The noise from a neighbor’s container can starve your I/O.
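
Before installing Docker on a rented VPS, it is worth checking for a shared OpenVZ kernel. This sketch only looks for OpenVZ markers, so treat a clean result as "probably KVM or bare metal" rather than proof:

```shell
# Detect OpenVZ from inside the guest: /proc/user_beancounters only exists
# under a shared OpenVZ kernel, where you cannot load Docker's own modules.
check_virt() {
  if [ -e /proc/user_beancounters ]; then
    echo "openvz"
  else
    echo "dedicated-kernel"
  fi
}
check_virt
```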

At CoolVDS, we exclusively use KVM (Kernel-based Virtual Machine) virtualization. When you provision a VPS with us, you get a dedicated kernel. This means you can enable Docker-specific kernel modules, tweak `sysctl` settings for networking, and most importantly, if your container runtime crashes the kernel, it only affects your VM, not the physical node.

Pro Tip: For high-performance databases on Linux 3.10+ (CentOS 7), disable transparent huge pages (THP) on the host to prevent latency spikes in MongoDB or Redis containers.

# Check THP status
cat /sys/kernel/mm/transparent_hugepage/enabled

# Disable it at runtime if needed (add to rc.local to persist)
echo never > /sys/kernel/mm/transparent_hugepage/enabled
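
The `enabled` file brackets the active choice (e.g. `[always] madvise never`). A small helper to pull that value out, handy in a monitoring script:

```shell
# Extract the bracketed (active) value from a sysfs "choice" file such as
# /sys/kernel/mm/transparent_hugepage/enabled
# "[always] madvise never" -> "always"
thp_active() {
  grep -o '\[[a-z]*\]' "$1" | tr -d '[]'
}

# Usage on a real host:
#   thp_active /sys/kernel/mm/transparent_hugepage/enabled
```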

4. Network Isolation and IPC

By default, Docker containers can talk to each other on the bridge interface. In a multi-tenant microservices architecture, this implies trust that shouldn't exist. If one frontend container is compromised, it shouldn't be able to scan your internal database container unless explicitly allowed.

Since Docker 1.3 (released just this October), we have better linking, but you should start the daemon with Inter-Container Communication (ICC) disabled by default to force explicit linking.

Edit your Docker config (e.g., `/etc/default/docker` on Ubuntu):

DOCKER_OPTS="--icc=false --iptables=true"

Now, containers cannot communicate unless you link them. This forces you to define your architecture strictly.
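
A sketch of the resulting pattern, with image and container names purely illustrative:

```shell
# Start the backend with a name, then link the frontend to it explicitly.
docker run -d --name db postgres
docker run -d --name web --link db:db my-frontend

# Inside "web", the link injects DB_PORT_* environment variables and a hosts
# entry for "db"; everything else on the bridge stays unreachable.
```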

5. Filesystem Security: Read-Only is King

Does your web application need to write to `/bin` or `/usr`? No. Then why is the filesystem writable? One of the most effective ways to neutralize an exploit is to mount the container's root filesystem as read-only.

You can then use data volumes for the specific directories that need write access (like logs or upload folders). This prevents an attacker from downloading a rootkit or modifying configuration files.

docker run --read-only \
  -v /var/log/myapp:/var/log/myapp \
  -v /tmp:/tmp \
  -d my-secure-app

Local Context: Latency and Jurisdiction

Here in Norway, we operate under strict privacy laws regarding personal data. While the EU Data Protection Directive (95/46/EC) sets the baseline, the Norwegian Personal Data Act is specific about how data is secured. Using a US-based cloud provider introduces legal gray areas regarding Safe Harbor.

Hosting on CoolVDS ensures your data stays within Norwegian borders, adhering to local jurisdiction. Furthermore, latency matters. If your users are in Oslo or Bergen, routing traffic through Frankfurt or London adds unnecessary milliseconds. Our local peering at NIX (Norwegian Internet Exchange) ensures that your Dockerized apps respond instantly.

Performance Tuning for 2014 Hardware

We are seeing a shift from spinning rust to SSDs. Docker image pulls can be I/O intensive, especially with the layered AUFS or DeviceMapper storage drivers. On standard SATA drives, doing a `docker build` can bring a server to its knees.

We configure our storage backend to handle the high IOPS required by container churn. If you are using DeviceMapper (common on CentOS 7), ensure you aren't using the loop-lvm mode in production, as it kills performance.

# Check your docker storage driver
docker info | grep "Storage Driver"

# If you see "devicemapper" with "Data loop file", 
# STOP. You need to configure direct-lvm for production.
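
On CentOS 7 the fix is to hand devicemapper real block devices. A minimal sketch of `/etc/sysconfig/docker-storage`, assuming you have carved out an LVM volume group for Docker (the `vg-docker` name and volume layout are illustrative):

```shell
# /etc/sysconfig/docker-storage -- point the thin pool at real LVM volumes
# instead of sparse loopback files.
DOCKER_STORAGE_OPTIONS="--storage-opt dm.datadev=/dev/vg-docker/data --storage-opt dm.metadatadev=/dev/vg-docker/metadata"
```

Restart the daemon after changing this; note that switching storage backends wipes existing images, so push anything you need to a registry first.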

Final Thoughts

The container revolution is here to stay, but the tools are still maturing. We are waiting on things like Kubernetes to stabilize, but for now, simple orchestration with tools like Fig (now Docker Compose) or configuration management via Ansible is the way to go.

Security is a process, not a product. Start with a solid foundation. Don't run containers on a shared-kernel VPS. Get a KVM slice, lock down your network, and patch your images.

Ready to deploy? Spin up a CoolVDS KVM instance in Oslo today. We offer pre-configured CentOS 7 and Ubuntu 14.04 templates ready for your Docker engine. Start your secure deployment now.