Container Security in a Post-Schrems II World: Hardening Docker for Production
Let's be honest: your Dockerfile is probably a liability. I've audited enough clusters across Oslo and Bergen to see the same pattern repeated ad nauseam. Developers pull a heavy base image, run everything as root, mount the Docker socket, and then wonder why their infrastructure gets compromised by a script kiddie running a crypto-miner.
The reality of August 2020 is harsh. Microservices have solved our scaling headaches but introduced a massive surface area for attacks. And with the CJEU (Court of Justice of the European Union) recently striking down the Privacy Shield in the Schrems II ruling, where you host your data is just as critical as how you secure it.
I'm not here to talk about theory. I'm here to show you how to lock down your containers before Datatilsynet comes knocking or your database gets leaked.
1. The "Root" of All Evil
By default, Docker containers run as root. This is convenient for development but catastrophic for production. If an attacker compromises a process running as root inside a container, and then manages a container breakout (via a kernel vulnerability), they have root access to your host. Game over.
You need to enforce the principle of least privilege immediately. Create a specific user in your Dockerfile.
# The Wrong Way
FROM node:12
WORKDIR /app
COPY . .
CMD ["node", "index.js"]
# The Hardened Way
FROM node:14.8.0-alpine3.12
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
WORKDIR /app
COPY . .
# Change ownership to the new user
RUN chown -R appuser:appgroup /app
USER appuser
CMD ["node", "index.js"]
Pro Tip: Never use the `latest` tag in production. `node:latest` changes under your feet. Pin your versions (e.g., `node:14.8.0-alpine3.12`) to ensure your builds are reproducible and you aren't silently inheriting new vulnerabilities.
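Once the image is built, it is worth verifying that the container actually runs as the unprivileged user rather than trusting the Dockerfile. A quick sketch, assuming the image above is tagged `my-hardened-app` (a placeholder name):

```bash
# Build the hardened image (tag name is illustrative)
docker build -t my-hardened-app .

# Print the UID the container's main process runs as.
# A non-root user reports a non-zero UID; root would print 0.
docker run --rm my-hardened-app id -u
```

Wiring this check into your CI pipeline catches regressions where someone later removes the `USER` directive.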
2. Capabilities: Drop 'Em Like It's Hot
Even if you aren't root, the Linux kernel grants capabilities to processes. Docker gives your container a default set that is far too generous for a simple web server. Does your Nginx instance need NET_RAW to craft raw packets? Does it need SYS_CHROOT? No.
The most secure approach is to drop ALL capabilities and add back only what is strictly necessary. This significantly limits what an attacker can do even if they gain code execution.
docker run -d \
--cap-drop=ALL \
--cap-add=NET_BIND_SERVICE \
--cap-add=SETUID \
--read-only \
--tmpfs /tmp \
my-hardened-app
In this example, we also use the --read-only flag. This forces the container's root filesystem to be read-only. If an attacker tries to download a malicious script or modify a binary, the operation fails immediately. We mount a tmpfs at /tmp because many applications (and the OS) still need a scratchpad.
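If you deploy with Docker Compose rather than raw `docker run`, the same restrictions can be expressed declaratively so they survive redeployments. A sketch with placeholder service and image names:

```yaml
version: "3.7"
services:
  app:
    image: my-hardened-app   # placeholder image name
    read_only: true          # root filesystem becomes read-only
    cap_drop:
      - ALL                  # start from zero capabilities
    cap_add:
      - NET_BIND_SERVICE     # allow binding ports below 1024
      - SETUID
    tmpfs:
      - /tmp                 # writable scratch space only
```

Keeping these flags in version-controlled config means the hardened settings are reviewed like any other code change, instead of living in someone's shell history.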
3. The Kernel Isolation Problem (And Why KVM Matters)
Here is the uncomfortable truth about containerization: Containers share the host kernel.
If there is a vulnerability in the Linux kernel versions 4.x or 5.x used by your host, every single container is at risk. This is where the underlying infrastructure becomes your first line of defense. In shared hosting environments or budget VPS providers using OpenVZ or LXC, you are often sharing a kernel with noisy, potentially compromised neighbors.
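You can see this sharing for yourself: a container reports the host's kernel version, because there is no separate "container kernel" underneath it.

```bash
# Kernel version on the host
uname -r

# Kernel version inside a fresh Alpine container:
# it prints the exact same string, because the kernel is shared
docker run --rm alpine:3.12 uname -r
```

This is precisely why a host kernel exploit is a cluster-wide problem rather than a single-container problem.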
This is why serious DevOps teams in Europe prefer full hardware virtualization. At CoolVDS, we exclusively use KVM (Kernel-based Virtual Machine). Each VPS gets its own isolated kernel.
| Feature | Container/LXC Hosting | CoolVDS (KVM) |
|---|---|---|
| Kernel Isolation | Shared (High Risk) | Dedicated (High Security) |
| Resource Allocation | Often Oversold | Guaranteed RAM/CPU |
| Custom Modules | Restricted | Full Control (Load any module) |
If you are running a Kubernetes cluster, running the worker nodes on KVM-backed instances ensures that a kernel panic in one node doesn't take down the entire physical hypervisor.
4. Network Segmentation and the Loopback Trap
Stop publishing ports to 0.0.0.0 unless you absolutely have to. When you run -p 8080:8080, Docker modifies iptables to allow traffic from the outside world directly to that container, often bypassing your UFW (Uncomplicated Firewall) rules if you aren't careful.
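Because Docker writes its rules into iptables directly, the supported place to restrict access to published ports is the `DOCKER-USER` chain, which Docker evaluates before its own forwarding rules. A minimal sketch; the interface name `eth0` and the trusted subnet are assumptions you must adapt to your own network:

```bash
# Drop traffic to containers arriving on the public interface
# unless it comes from the trusted subnet (values are examples)
iptables -I DOCKER-USER -i eth0 ! -s 10.0.0.0/24 -j DROP
```

Unlike rules added to INPUT (which published container ports bypass), rules in `DOCKER-USER` persist across Docker restarts within the session and are honored for all container traffic.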
If you are using a reverse proxy (like Nginx or Traefik) on the same host, bind your application services to localhost only:
# Only accessible from the host itself
docker run -p 127.0.0.1:8080:8080 my-app
For multi-host setups, overlay networks are great, but ensure encryption is enabled. If you are managing your own cluster on CoolVDS, utilizing the private networking interface (standard on our NVMe plans) keeps your inter-service traffic off the public internet, reducing latency and exposure.
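Note that overlay encryption is opt-in, not the default. When creating the network on a swarm manager, enable IPsec encryption of the data plane (the network name below is a placeholder):

```bash
# Run on a swarm manager; --opt encrypted enables IPsec (ESP)
# for container-to-container traffic on this overlay network
docker network create \
  --driver overlay \
  --opt encrypted \
  backend-net
```

There is a modest throughput cost, but for traffic carrying personal data it is rarely optional.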
5. The Schrems II & GDPR Reality Check
The July 2020 Schrems II ruling has thrown the industry into chaos. The EU-US Privacy Shield is dead. Transferring personal data to US-controlled cloud providers is now legally risky, as US surveillance laws (FISA 702) conflict with GDPR rights.
Technical security means nothing if your legal footing is crumbling. Hosting your containers on Norwegian soil, protected by Norwegian privacy laws and the EEA agreement, is no longer just a "nice to have": it's a compliance necessity for many businesses handling EU citizen data.
Hardening Kubernetes with PodSecurityPolicy
If you are orchestrating with Kubernetes (1.18 is the current stable choice), you must implement PodSecurityPolicy (PSP). Don't let developers deploy privileged pods.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  privileged: false
  # Drop all capabilities by default
  requiredDropCapabilities:
    - ALL
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
    - 'configMap'
    - 'emptyDir'
    - 'secret'
    - 'persistentVolumeClaim'
Applying this policy ensures that even if a developer tries to deploy a container requesting privileged: true, the API server will reject it. It enforces discipline.
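One gotcha: a PodSecurityPolicy does nothing until something is authorized to use it. That authorization comes from RBAC, via a ClusterRole holding the `use` verb on the policy, bound to the service accounts that create pods. A minimal sketch; the role name and the broad `system:serviceaccounts` binding are illustrative and should be narrowed in practice:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-restricted            # illustrative name
rules:
  - apiGroups: ['policy']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['restricted'] # the PSP defined above
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: psp-restricted-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp-restricted
subjects:
  # Binds all service accounts cluster-wide; scope this down
  # to specific namespaces for real deployments
  - kind: Group
    apiGroup: rbac.authorization.k8s.io
    name: system:serviceaccounts
```

If no policy is usable by a pod's service account, the pod is rejected outright, so test the binding in a staging cluster before enabling the PSP admission controller in production.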
Final Thoughts
Security isn't a product; it's a process of layer management. You harden the code, you harden the container image, you harden the runtime, and crucially, you harden the infrastructure it lives on.
Don't let a shared kernel be your single point of failure. Whether you are running a single Docker host or a complex K8s cluster, starting with a secure, isolated foundation is critical.
Need a compliant, high-performance environment for your container workload? Deploy a KVM-based NVMe instance in Oslo with CoolVDS today. Full root access, total kernel isolation, and zero noisy neighbors.