Container Security is a Lie (Unless You Do This): Hardening Docker for GDPR in 2021

Stop Trusting Default Docker Settings: A Survival Guide for 2021

Let’s be brutally honest: putting your application in a container does not make it secure. It just packages the insecurity into a convenient, portable tarball. I’ve spent the better part of 2020 cleaning up compromised clusters where developers assumed that docker run was a magic shield. It isn't. Containers are just processes sharing a kernel. If that kernel is exposed, or if your container runs as root (which, by default, it does), you are one kernel exploit away from a total breach.

With the recent ruling on Schrems II invalidating the Privacy Shield, hosting data outside of Europe has become a legal minefield. For those of us operating in Norway, relying on US-based cloud hyperscalers is becoming a compliance headache. Data sovereignty isn't just a buzzword anymore; it's a requirement from Datatilsynet.

Here is how we harden container infrastructure at the metal level, using techniques that saved my sanity during a recent high-traffic deployment for a major Oslo e-commerce client.

1. The Root Problem (Literally)

The single most common mistake I see is running processes as root inside the container. Without user-namespace remapping, the container's root user is the host's root user (UID 0). If an attacker breaks out of the container runtime—remember CVE-2019-5736?—they own your server. They own your data. They own you.

Stop doing this. Enforce unprivileged users in your Dockerfiles.

The Vulnerable Way:

FROM node:14
WORKDIR /app
COPY . .
CMD ["node", "index.js"]

The Hardened Way:

FROM node:14-alpine

# Create a specific group and user
RUN addgroup -S coolvds_user && adduser -S coolvds_user -G coolvds_user

WORKDIR /app

# Chown only what is strictly necessary
COPY --chown=coolvds_user:coolvds_user . .

# Switch context
USER coolvds_user

CMD ["node", "index.js"]
Pro Tip: Even better, enable user-namespace remapping at the Docker daemon level. This maps the container's root to a high-numbered, unprivileged user on the host OS. Add {"userns-remap": "default"} to your /etc/docker/daemon.json and restart the daemon.
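Here is a minimal sketch of that daemon-level change, assuming a systemd-managed Docker host and an empty daemon.json (if you already have one, merge the key instead of overwriting):

```shell
# Sketch: enable user-namespace remapping for all containers.
# Back up any existing /etc/docker/daemon.json before overwriting it.
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "userns-remap": "default"
}
EOF

sudo systemctl restart docker

# Docker creates a "dockremap" user; container root now maps into its
# subordinate UID range. Confirm the range exists:
grep dockremap /etc/subuid
```

One caveat from experience: remapping changes file ownership semantics on bind mounts, so test your volumes before rolling this into production.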

2. Read-Only Filesystems Are Non-Negotiable

In a production environment, your containers should be immutable. An attacker cannot install a crypto-miner or a backdoor if they cannot write to the disk. I force my teams to run containers with a read-only root filesystem. If the application needs to write logs or temp files, mount a specific tmpfs or a volume. Do not give write access to the whole OS.

Here is how you execute this policy:

docker run --read-only \
  --tmpfs /run \
  --tmpfs /tmp \
  -v /var/log/app:/app/logs:rw \
  my-secure-image

If your app crashes because it can't write to /etc/, good. It shouldn't be writing there anyway. This forces discipline in your application architecture.
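Verify the policy actually bites before you ship it. A quick smoke test, assuming a local Docker daemon and the stock alpine image:

```shell
# Attempt a write inside a read-only container; it should be refused.
docker run --rm --read-only --tmpfs /tmp alpine \
  sh -c 'touch /etc/pwned 2>/dev/null && echo "WRITABLE - policy broken" || echo "blocked: rootfs is read-only"'

# Writes to the declared tmpfs mount still succeed, as intended:
docker run --rm --read-only --tmpfs /tmp alpine \
  sh -c 'touch /tmp/scratch && echo "tmpfs OK"'
```

If the first command prints "WRITABLE", your flags are not being applied; check your orchestrator isn't overriding them.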

3. Resource Limits: The "Noisy Neighbor" Defense

In 2020, we saw a massive spike in DDoS attacks targeting application logic rather than just bandwidth. If a container gets hit and spirals out of control, it can starve the host OS of CPU cycles or memory until the dreaded OOM killer starts terminating processes, taking everything else on that node down with it.

Never deploy without limits. It's reckless.

version: '3.8'
services:
  web_app:
    image: nginx:alpine
    deploy:
      resources:
        limits:
          cpus: '0.50'
          memory: 512M
        reservations:
          cpus: '0.25'
          memory: 256M
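One gotcha: in a v3 Compose file, deploy.resources is enforced by Swarm (docker stack deploy); classic docker-compose up ignores it unless you pass --compatibility. For one-off containers, the same policy can be expressed directly on docker run. A sketch, reusing the hypothetical my-secure-image from earlier:

```shell
# Hard caps enforced via cgroups: half a core, 512 MiB of RAM,
# no swap headroom beyond the memory limit, and a fork-bomb guard.
docker run -d \
  --cpus="0.50" \
  --memory="512m" \
  --memory-swap="512m" \
  --pids-limit=256 \
  my-secure-image
```

Setting --memory-swap equal to --memory disables swap for the container, which makes memory-pressure failures loud and early instead of slow and mysterious.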

But software limits only go so far. Containers share the host kernel, and if that kernel panics, the game is over. This is why for mission-critical workloads—especially databases or payment gateways—I don't trust shared-kernel containerization alone.

I use CoolVDS for these layers. Why? Because CoolVDS is built on KVM (Kernel-based Virtual Machine). Each instance has its own isolated kernel. Even if a container inside your VM goes rogue, it hits the hard wall of hardware virtualization. It cannot leak into my other workloads. Plus, their NVMe storage arrays in Oslo ensure that the I/O overhead of virtualization is negligible. You get the security of a dedicated server with the flexibility of a VPS.

4. Network Isolation and the "Oslo Ping"

If you have a database container, why is it allowed to talk to the public internet? It shouldn't. Use internal Docker networks or Kubernetes NetworkPolicies to whitelist traffic.
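The Docker-native way to cut a database off from the internet is an internal bridge network. A sketch, with hypothetical container names db and app:

```shell
# An --internal network gets no route to the outside world.
docker network create --internal backend

# The database lives only on the internal network.
docker run -d --name db --network backend postgres:13-alpine

# The app gets two legs: one public-facing, one internal.
docker network create frontend
docker run -d --name app --network frontend my-secure-image
docker network connect backend app
```

With this layout, db can answer queries from app but cannot phone home, pull packages, or exfiltrate anything to the public internet.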

But let's talk about the physical network. Latency is a security feature. If your data center is in Frankfurt but your customers are in Bergen, every TLS handshake pays extra round trips. Longer handshakes mean connections stay open longer, which raises your exposure to Slowloris-style attacks that hold connections hostage.

Location            | Latency to Oslo (avg) | Data Jurisdiction
US East (Virginia)  | ~95 ms                | USA (CLOUD Act)
Frankfurt           | ~25 ms                | Germany (GDPR)
CoolVDS (Oslo)      | ~2 ms                 | Norway (GDPR + local)

Hosting locally in Norway on CoolVDS isn't just about speed; it's about keeping traffic within the NIX (Norwegian Internet Exchange) as much as possible, reducing the hops where traffic could be intercepted or analyzed.
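You can put numbers on the handshake argument yourself: curl exposes per-phase connection timings. A sketch, assuming curl built with TLS support and any endpoint you want to measure (example.com here is a placeholder):

```shell
# time_connect    = TCP three-way handshake complete
# time_appconnect = TLS handshake complete
# Every extra round trip is paid several times before the first byte arrives.
curl -s -o /dev/null \
  -w 'TCP: %{time_connect}s  TLS: %{time_appconnect}s  total: %{time_total}s\n' \
  https://example.com/
```

Run it from the same region as your users; the gap between TCP and TLS times is the handshake tax you pay on every fresh connection.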

5. Supply Chain Security (The Image Trap)

You pull FROM python:3. Do you know what's in that base image? Do you know who maintains it? Docker Hub is full of unverified images. That's why we run tools like Trivy or Clair against every image before it ever touches production.

Here is a simple CI pipeline step you should be running:

# Scan your image for High/Critical severities
trivy image --severity HIGH,CRITICAL --exit-code 1 my-app:latest

If this command fails, the build stops. No questions asked.
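Wired into CI, that scan becomes a hard gate. A minimal sketch of the pipeline step, assuming Trivy is installed on the runner and the image tag arrives in $IMAGE:

```shell
#!/bin/sh
set -eu

IMAGE="${IMAGE:-my-app:latest}"

# Fail the build on HIGH/CRITICAL findings. --ignore-unfixed skips
# vulnerabilities with no released patch, so the gate stays actionable.
trivy image \
  --severity HIGH,CRITICAL \
  --ignore-unfixed \
  --exit-code 1 \
  --no-progress \
  "$IMAGE"

echo "Scan passed: $IMAGE is clear of known HIGH/CRITICAL issues"
```

Because trivy exits non-zero on findings and the script runs under set -e, the pipeline dies before a vulnerable image can be pushed anywhere.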

Conclusion: Paranoia is a Virtue

Security is not a product; it is a process. It is the sum of small, annoying decisions like setting read-only flags, scanning images, and choosing the right hosting partner. Schrems II has made it clear: you are responsible for where your data lives.

Don't gamble your reputation on a default configuration. Harden your containers, restrict your networks, and place your infrastructure on a platform that respects isolation.

Ready to lock down your stack? Deploy a KVM-isolated NVMe instance on CoolVDS today and get single-digit latency to your Norwegian users.