
Container Security: Hardening Docker on Linux in the Post-Safe Harbor Era

Let's be honest: docker run -d -p 80:80 nginx is fun. It's fast. It works on your laptop. But running that command on a production server in 2016 is negligent.

The hype cycle around containers is deafening right now. Every startup in Oslo is tearing down its monolith for microservices. But as someone who has spent the last decade cleaning up compromised servers, I see a pattern emerging. Developers are shipping containers with root privileges, mounting sensitive host directories, and pulling images from untrusted sources. With the recent invalidation of the Safe Harbor agreement by the ECJ in the Schrems ruling, the legal landscape for data hosting in Europe has shifted. If your container security is lax and you leak user data, you aren't just facing downtime; you are facing the wrath of Datatilsynet.

Containerization is not virtualization. It is process isolation. If you don't understand the difference, you are dangerous. Here is how to lock down your stack using tools available today.

1. The Root Problem (Literally)

By default, the process inside a Docker container runs as root. If an attacker exploits a vulnerability in your application (say, a buffer overflow in an old glibc), and breaks out of the container, they are root on your host machine. Game over.

We are seeing promise in Docker 1.10 (currently RC) with User Namespaces, which map the root user inside the container to an unprivileged user on the host. But until 1.10 is stable and widely deployed, you must stop being lazy with your Dockerfiles.
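Once 1.10 lands, user namespace remapping is a daemon-level switch, not a per-container flag. A minimal sketch, assuming the default `dockremap` mapping (in 1.10 the daemon is still started as `docker daemon`; `dockerd` comes later):

```shell
# Start the daemon with user namespace remapping enabled (Docker 1.10+).
# "default" tells Docker to create and use the dockremap user and group.
docker daemon --userns-remap=default

# Inspect the subordinate UID/GID ranges that container root is mapped into
grep dockremap /etc/subuid /etc/subgid
```

Note that remapping applies daemon-wide: every container on that host gets its root user shifted into the subordinate range.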

The Fix: Create a User

Stop letting your application's process run as UID 0. Create a dedicated user for it.

FROM ubuntu:14.04

# Install dependencies
RUN apt-get update && apt-get install -y nginx

# Create a system group and user
RUN groupadd -r appsec && useradd -r -g appsec appsec

# Move nginx to an unprivileged port; non-root users cannot bind ports below 1024
RUN sed -i 's/listen 80 /listen 8080 /' /etc/nginx/sites-available/default

# Fix permissions so the non-root user can write content, logs, and the pid file
RUN chown -R appsec:appsec /var/www/html /var/log/nginx /var/lib/nginx && \
    touch /run/nginx.pid && chown appsec:appsec /run/nginx.pid

# Switch context
USER appsec

CMD ["nginx", "-g", "daemon off;"]

When you build and run this, top on the host will show the nginx processes running as appsec, not root. This one step closes off the most common container-breakout scenarios.
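You can verify this from the host without trusting anything inside the image. A quick check, assuming you named the container secure-nginx:

```shell
# Show the container's processes as the host sees them, with their users
docker top secure-nginx

# Or ask the container directly which identity the entrypoint runs under
docker exec secure-nginx id
# The uid/gid reported should belong to appsec, never 0/root
```

If `docker top` still shows root, your USER directive is being overridden somewhere, most likely by a `--user` flag or an entrypoint script that escalates.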

2. Drop Kernel Capabilities

Linux divides root privileges into distinct units called capabilities. Does your Node.js web server need to change the system time? No. Does it need to load kernel modules? Absolutely not.

Docker grants a broad set of capabilities by default. You should strip them all and add back only what is necessary. This is the principle of least privilege applied to the kernel.

Use the --cap-drop and --cap-add flags. For a typical web worker, you can drop almost everything.

docker run -d -p 8080:8080 \
  --cap-drop=ALL \
  --cap-add=NET_BIND_SERVICE \
  --name secure-app \
  my-company/web-app:v1

If you don't know what capabilities your app needs, run it manually and check the audit logs. Ideally, in a high-security environment, you are also applying a custom AppArmor profile or SELinux context.
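Applying a mandatory access control profile on top of dropped capabilities looks like this. A sketch, assuming you have written a custom AppArmor profile named docker-webapp (the profile name and path are placeholders, not a shipped default):

```shell
# Load (or reload) the custom profile into the kernel
apparmor_parser -r -W /etc/apparmor.d/docker-webapp

# Attach the profile to the container alongside the capability restrictions
docker run -d -p 8080:8080 \
  --cap-drop=ALL \
  --cap-add=NET_BIND_SERVICE \
  --security-opt apparmor:docker-webapp \
  my-company/web-app:v1
```

On Red Hat family hosts you would use `--security-opt label:type:...` with an SELinux type instead; the principle is identical.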

3. Isolation Matters: The Case for KVM

This is where infrastructure choice becomes a security decision. Many "cheap" VPS providers still use OpenVZ or LXC to oversell resources. In those environments, you are sharing a kernel with every other customer on that physical node. If their container crashes the kernel, your service goes down.

If you are running containers, you need a hard boundary. That boundary is the Hypervisor.

Pro Tip: Never run production Docker workloads on OpenVZ. The kernel version is often ancient (2.6.32), and Docker requires features found in 3.10+ for proper cgroup management and OverlayFS performance.
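Before you pull a single image, confirm the kernel under you is actually modern enough. A quick sanity check:

```shell
# A 2.6.32 kernel here is a red flag for OpenVZ-era hosting; Docker wants 3.10+
uname -r

# Check whether the overlay filesystem is available to this kernel
grep -q overlay /proc/filesystems && echo "overlay available" \
  || echo "overlay module not loaded"
```

On a shared-kernel container host, `modprobe overlay` will simply fail: you cannot load modules into a kernel you do not own. On KVM, you can.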

At CoolVDS, we strictly use KVM (Kernel-based Virtual Machine). This gives you a dedicated kernel. You can enable specific kernel modules, tune sysctl parameters for high-load networking, and most importantly, if a neighbor has a meltdown, your instance remains untouched. For our Norwegian clients dealing with strict SLA requirements, this isolation is mandatory.

4. Secure Your Registry and Content Trust

In 2016, pulling FROM java:8 is like accepting candy from a stranger. Do you know who maintains that image? Has it been patched against the latest OpenSSL vulnerabilities?

Docker 1.8 introduced Docker Content Trust. It uses Notary to sign images. Enabling it ensures that you only run images that have been signed by a trusted publisher.

export DOCKER_CONTENT_TRUST=1
docker pull myregistry.com/my-image:signed

If the image signatures don't match, the pull fails. This prevents "Man in the Middle" attacks where a malicious actor injects compromised code between the Hub and your server.
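Content trust is enforced per shell via the environment variable, and exceptions should be explicit, not silent. A sketch (the image names here are illustrative):

```shell
export DOCKER_CONTENT_TRUST=1

# This pull now fails unless the tag carries valid Notary signatures
docker pull myregistry.com/my-image:unsigned

# Deliberate, visible escape hatch for a one-off unsigned pull
docker pull --disable-content-trust myregistry.com/my-image:unsigned
```

The `--disable-content-trust` flag leaves an audit trail in your shell history and scripts, which is far better than quietly unsetting the variable.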

5. Filesystem Performance and Security

Security is also about availability. A common DoS vector is I/O saturation. The default Device Mapper storage driver on CentOS 7 can be sluggish. We strongly recommend using OverlayFS (specifically Overlay2 as it matures) or Btrfs if you are on a kernel that supports it well.

However, these storage drivers are I/O intensive. On spinning rust (HDD), simple image builds can bring a server to its knees, causing timeouts that look like outages.
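You can check which driver you are running today, and switch via a daemon flag. A minimal sketch for the 1.9/1.10 era (note: changing drivers orphans your existing images and containers, so plan a re-pull):

```shell
# Inspect the current storage driver and its backing filesystem
docker info | grep -i 'storage'

# Start the daemon on overlay instead of the devicemapper default
docker daemon --storage-driver=overlay
```

The overlay2 driver mentioned above is the successor to this driver and requires a newer kernel; until it is available to you, plain overlay on a 3.18+ kernel is the pragmatic choice.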

| Feature                     | HDD (Standard VPS)    | NVMe (CoolVDS)   |
|-----------------------------|-----------------------|------------------|
| Docker Pull (Ubuntu image)  | ~15-20 seconds        | ~2-3 seconds     |
| Container Startup           | Variable latency      | Instant          |
| IOPS Consistency            | Low / noisy neighbors | High / dedicated |

We engineered CoolVDS with local NVMe storage specifically to handle the high IOPS demands of modern container lifecycles. When you are re-deploying 50 microservices during a CI/CD pipeline run, that speed difference isn't a luxury; it's the difference between a 2-minute deploy and a 20-minute deploy.

Final Thoughts: Data Sovereignty

The tech is only half the battle. Since the invalidation of Safe Harbor last year, many Norwegian companies are panicked about where their data physically sits. US-owned clouds are under scrutiny.

By hosting on CoolVDS, your data resides in Oslo. You benefit from low latency to the NIX (Norwegian Internet Exchange) and compliance with EU Data Protection Directive 95/46/EC. You get the raw power of NVMe and KVM, without the legal headache of cross-Atlantic data transfers.

Don't let a default config ruin your uptime. Lock down your user, drop your capabilities, and put your containers on a hypervisor that respects your need for isolation.

Ready to harden your infrastructure? Deploy a KVM instance on CoolVDS today and experience the stability of true isolation.