Container Breakouts Are Real: Hardening Docker and Kubernetes for the Paranoid

Stop Trusting Default Configurations

It is July 2022. The Log4Shell nightmares from last winter are still fresh, and yet I still see senior engineers deploying containers that run as root. It is frankly negligent. We treat containers like lightweight virtual machines, but they are not. They are processes with a fancy worldview: peel back the namespaces and cgroups, and that process is running directly on the host kernel.

If you are hosting mission-critical applications—especially here in Norway where the Datatilsynet is (rightfully) watching your GDPR compliance like a hawk—reliance on default Docker settings is a liability. I have spent the last decade cleaning up compromised clusters, and the pattern is always the same: lazy configuration leads to privilege escalation.

1. The Root Problem (Literally)

By default, a process inside a Docker container runs as PID 1 with root privileges. If an attacker exploits a vulnerability in your application (say, a Node.js remote code execution), they are now root inside the container. If they manage a container breakout—via a kernel exploit like Dirty Pipe (CVE-2022-0847) which surfaced earlier this year—they are root on your host node.

The Fix: Create a specific user. Never run as root.

# The WRONG way
FROM node:16-alpine
WORKDIR /app
COPY . .
CMD ["node", "index.js"]

# The RIGHT way
FROM node:16-alpine
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
WORKDIR /app
COPY . .
# Change ownership
RUN chown -R appuser:appgroup /app
USER appuser
CMD ["node", "index.js"]
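Once the image is built, verify the configuration actually stuck. Here is a quick sanity check you might run locally or in CI; it is a sketch, and the image name my-secure-image is a placeholder for your own tag:

```shell
# Read the USER configured in the image metadata (requires Docker;
# "my-secure-image" is a hypothetical placeholder tag).
user=$(docker image inspect --format '{{.Config.User}}' my-secure-image 2>/dev/null \
  || echo "docker-unavailable")

case "$user" in
  ""|root|0)          echo "WARNING: image is configured to run as root" ;;
  docker-unavailable) echo "skipped: docker not available here" ;;
  *)                  echo "OK: image runs as user '$user'" ;;
esac
```

An empty `.Config.User` means Docker falls back to root, which is why the empty string is treated as a failure above.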

2. Immutability is Your Friend

If an attacker gets in, their first move is usually to download a payload (crypto miner, reverse shell script) or modify a configuration file. Make their life miserable by making the container filesystem Read-Only.

This forces you to be disciplined about where you write data. Logs go to STDOUT/STDERR. Temporary files go to /tmp (mounted as a tmpfs). Persistent data goes to a volume. Nothing gets written to the container layer.

Here is how you enforce this at runtime:

docker run --read-only \
  --tmpfs /tmp \
  --tmpfs /run \
  -v my-data:/var/lib/app/data \
  my-secure-image

Pro Tip: When moving to Kubernetes, enforce the same constraints through the securityContext. Pod Security Policy has been deprecated since 1.21 and is slated for removal in 1.25, so you should be looking at Pod Security Standards or OPA Gatekeeper for cluster-wide policy. But for a quick win in your deployment YAML:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: secure-app
spec:
  template:
    spec:
      containers:
      - name: app
        image: my-registry/app:v1.2
        securityContext:
          readOnlyRootFilesystem: true
          allowPrivilegeEscalation: false
          runAsNonRoot: true
          runAsUser: 1000
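Whichever route you take, verify it from inside a running container. A minimal smoke test, assuming /tmp is mounted writable as in the examples above:

```shell
# Attempt to write to the container layer; on a properly hardened
# container this must fail, while /tmp (a tmpfs) stays writable.
if touch /readonly-probe 2>/dev/null; then
  echo "root filesystem is WRITABLE; hardening is not in effect"
  rm -f /readonly-probe
else
  echo "root filesystem is read-only"
fi

touch /tmp/scratch-probe && echo "/tmp is writable as expected"
rm -f /tmp/scratch-probe
```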

3. Capabilities: Drop 'Em All

Linux capabilities break down the power of root into distinct units. Does your Nginx web server need CAP_SYS_ADMIN (essentially root)? No. Does it need CAP_NET_BIND_SERVICE to bind to port 80? Yes. But if you bind to port 8080, you might not even need that.

The most secure stance is to drop ALL capabilities and add back only what is strictly necessary. This is a "deny by default" strategy.

docker run --cap-drop=ALL --cap-add=NET_BIND_SERVICE my-web-server
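You can see the effect from inside the container: Linux exposes the capability sets of every process in /proc. With --cap-drop=ALL, the effective set collapses to all zeroes:

```shell
# Print the permitted, effective, and bounding capability sets of the
# current process. Inside a --cap-drop=ALL container, CapEff is
# 0000000000000000; adding NET_BIND_SERVICE back sets bit 10 again.
grep -E '^Cap(Prm|Eff|Bnd):' /proc/self/status
```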

4. The Infrastructure Layer: Why KVM Matters

You can harden your Dockerfile all day, but if the underlying virtualization layer is weak, you are building a castle on a swamp. This is where the distinction between a cheap VPS and professional infrastructure becomes stark.

Many budget providers use container-based virtualization (like OpenVZ or LXC) to oversell resources. In those environments, your "server" is just a container sharing a kernel with 50 other noisy neighbors. If one of them triggers a kernel panic, you go down. If one of them hits a kernel exploit, your data is exposed.

This is why at CoolVDS, we exclusively use KVM (Kernel-based Virtual Machine). With KVM, your environment has its own isolated kernel. It is hardware virtualization. Even if you are running a vulnerable Docker container inside your CoolVDS instance, the blast radius is contained to your VM. You are isolated from other tenants on the physical hypervisor.
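Not sure what your current provider actually runs? From a shell on the VPS, systemd-detect-virt (shipped with systemd) reports the virtualization technology, and hardware-virtualized guests usually expose a hypervisor flag in /proc/cpuinfo. A rough sketch:

```shell
# Prints "kvm" on a KVM guest, "lxc"/"openvz" on container-based
# hosting, "none" on bare metal (non-zero exit for "none").
systemd-detect-virt || true

# Fallback check that works without systemd:
grep -q '^flags.*hypervisor' /proc/cpuinfo \
  && echo "hypervisor CPU flag present (hardware virtualization)" \
  || echo "no hypervisor flag (bare metal or container-based virt)"
```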

Feature            | Container VPS (LXC/OpenVZ) | CoolVDS (KVM)
-------------------|----------------------------|---------------------
Kernel isolation   | Shared (weak)              | Dedicated (strong)
Docker support     | Often limited/hacky        | Native, full control
Neighbor risk      | High                       | Near zero

5. Supply Chain Scanning

In 2022, you cannot assume the image on Docker Hub is clean. It might contain vulnerabilities from 2019. Before you deploy anything to your CoolVDS production node, scan it. I prefer Trivy by Aqua Security because it is fast, simple, and integrates easily into CI/CD.

Don't just scan the OS packages; scan the language dependencies (npm, pip, maven) too.

# Install Trivy (Debian/Ubuntu)
sudo apt-get install wget apt-transport-https gnupg lsb-release
wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | sudo apt-key add -
# Plain tee (not tee -a) so re-running the install does not duplicate the entry
echo "deb https://aquasecurity.github.io/trivy-repo/deb $(lsb_release -sc) main" | sudo tee /etc/apt/sources.list.d/trivy.list
sudo apt-get update
sudo apt-get install trivy

# Scan an image
trivy image python:3.4-alpine

If you see High or Critical vulnerabilities, do not deploy. It is that simple. Upgrading a base image takes 5 minutes; recovering from a breach takes months.
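To make the "do not deploy" rule automatic rather than aspirational, you can wire Trivy's --exit-code and --severity flags into your pipeline so the build fails on serious findings. A sketch, with a placeholder image name:

```shell
# Exit code 1 from trivy means findings at or above the requested
# severity. --ignore-unfixed skips issues with no upstream patch yet.
# "my-registry/app:v1.2" is a hypothetical placeholder.
scan_status=0
trivy image --exit-code 1 --severity HIGH,CRITICAL --ignore-unfixed \
  my-registry/app:v1.2 || scan_status=$?

if [ "$scan_status" -eq 0 ]; then
  echo "scan clean: deploy may proceed"
else
  echo "blocking deploy (findings found, or scanner unavailable)"
fi
```

In a CI system you would simply let the non-zero exit code fail the stage instead of branching on it.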

6. Local Compliance: The Norwegian Context

Since the Schrems II ruling, transferring personal data to US-controlled clouds has been a legal minefield. If your containers are processing personal data (PII) of Norwegian citizens, latency isn't your only concern—sovereignty is.

Hosting on CoolVDS ensures your data resides physically in Europe, on infrastructure governed by European law. We see many DevOps teams migrating workloads from AWS/GCP to our NVMe instances specifically to simplify their GDPR compliance posture. Plus, the latency from Oslo to our data centers is negligible—often under 10ms.

Final Thoughts

Security is not a product; it is a process. Start with a secure foundation (KVM on CoolVDS), strip privileges from your containers, and scan everything before it touches production. The threats are evolving—just look at the rise of supply chain attacks this year—but the principles of least privilege and isolation remain constant.

Need a sandbox to test your hardened configurations? Deploy a CoolVDS NVMe instance today. It’s fast, it’s isolated, and it won’t wake you up at 3 AM.