Container Security in the Post-Schrems II Era: Hardening Docker on Norwegian Infrastructure

Stop Trusting Default Configurations: A Survival Guide for Container Security

If you represent the "it works on my machine" crowd, you can stop reading now. This is for the engineers who wake up at 3 AM because a crypto-miner hijacked their Kubernetes cluster. I’ve spent the last decade cleaning up after "move fast and break things" deployments, and frankly, the state of container security in 2020 is alarming. We treat containers like lightweight VMs. They aren't. They are processes lying to themselves about how much access they have.

With the Schrems II ruling handed down by the CJEU just two months ago, the stakes have changed. The EU-US Privacy Shield is dead. If you are piping customer data through a US-owned cloud provider's container service, you are walking a legal tightrope. This isn't just about `iptables` anymore; it's about data sovereignty. This is why we are seeing a massive migration of workloads back to strict jurisdictions like Norway.

1. The Root Cause (Literally)

The most common vulnerability I see in audits is processes running as root inside the container. By default, Docker runs container processes as root. If an attacker escapes the container runtime—a scenario that happens more often than we'd like to admit (the CVE-2019-5736 runc exploit wasn't that long ago)—they land as root on your host node. Game over.

Pro Tip: Never rely on the default user. Create a dedicated user with a fixed, known UID/GID to ensure consistency across your fleet. This makes mapping permissions on persistent volumes significantly less painful.

Here is the absolute minimum standard for a production Dockerfile. If your build pipeline doesn't look like this, fix it today.

FROM alpine:3.12

# Create a group and user with a fixed UID/GID so volume
# permissions map consistently across your fleet
RUN addgroup -S -g 10001 appgroup && adduser -S -u 10001 -G appgroup appuser

# Tell Docker to run all subsequent instructions, and the container itself, as appuser
USER appuser

WORKDIR /home/appuser

# Copy your binary
COPY --chown=appuser:appgroup my-binary .

CMD ["./my-binary"]
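It takes ten seconds to verify this actually worked. Assuming you tag the image `my-app` (a placeholder name for illustration), run `id` inside it:

```shell
# Build, then confirm the container does NOT run as uid=0
docker build -t my-app .
docker run --rm my-app id
# Should report appuser with the fixed UID (10001 if you pinned it);
# if you ever see uid=0(root), your pipeline is broken.
```

Make this check a CI step, not a manual ritual.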

2. Drop Capabilities like they’re Hot

The Linux kernel divides the privileges traditionally associated with the superuser into distinct units known as capabilities. Most containers don't need to modify the kernel networking stack or change the system time. Yet by default, Docker grants a wide array of these capabilities.

We operate on a principle of least privilege. When you spin up a container, drop everything, then add back only what is strictly necessary. This significantly reduces the attack surface.

# The wrong way
docker run -d nginx

# The battle-hardened way
docker run -d \
  --cap-drop=ALL \
  --cap-add=NET_BIND_SERVICE \
  --read-only \
  --tmpfs /var/cache/nginx \
  --tmpfs /var/run \
  nginx:1.19-alpine

Notice the --read-only flag? This mounts the container's root filesystem as read-only. If an attacker manages to execute a shell, they can't write a backdoor, they can't download a rootkit, and they can't modify config files. They are stuck in a frozen environment.
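You can see the effect for yourself. Assuming a local Docker daemon, any write attempt inside a `--read-only` container fails immediately:

```shell
# No tmpfs mounts here, so the entire filesystem is frozen
docker run --rm --read-only alpine:3.12 sh -c 'touch /tmp/backdoor'
# Fails with "Read-only file system" — exactly what you want
# an attacker's shell to see.
```

The tmpfs mounts in the nginx example above exist precisely because some software insists on a few writable scratch directories; grant those explicitly and nothing else.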

3. The Isolation Myth and the CoolVDS Advantage

Containers share the host kernel. This is great for efficiency but terrifying for isolation. A kernel bug triggered from inside one container can panic the host and take down every workload on it. This is the "Noisy Neighbor" effect on steroids.

This is where your choice of infrastructure provider becomes a security decision. At CoolVDS, we don't oversell resources, but more importantly, we strictly use KVM (Kernel-based Virtual Machine) hardware virtualization. Unlike OpenVZ or LXC-based VPS providers where you might share a kernel with a stranger, a KVM instance gives you your own dedicated kernel.

If you run Docker inside a CoolVDS NVMe instance, you have two layers of defense:

  1. The Container Runtime Isolation (Namespaces/Cgroups)
  2. The Hypervisor Isolation (KVM)

If an attacker escapes your container, they are still trapped inside your VPS, not roaming our bare metal infrastructure.
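If you want to confirm which isolation layer you are actually sitting on, `systemd-detect-virt` (shipped with any systemd-based distro) will tell you:

```shell
systemd-detect-virt
# "kvm" means you have your own kernel behind a hypervisor;
# "lxc" or "openvz" means you are sharing a kernel with strangers.
```

If that command prints a container technology on a box you bought as a "VPS", ask your provider some hard questions.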

4. Network Policies: Don't Talk to Strangers

In a microservices architecture, why should your `frontend` container be able to talk to your `billing` database? It shouldn't. Yet on Docker's default bridge network, every container can reach every other container unless you explicitly isolate them.
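On plain Docker (no orchestrator), the cheapest fix is user-defined networks: containers can only reach peers on a network they share. A sketch, with service and image names invented for illustration:

```shell
# One network per tier
docker network create frontend-net
docker network create billing-net

# Frontend and API share a network...
docker run -d --name frontend --network frontend-net my-frontend
docker run -d --name api --network frontend-net my-api

# ...and only the API is additionally attached to the billing tier
docker network connect billing-net api
docker run -d --name billing-db --network billing-net postgres:12

# "frontend" now has no route to "billing-db" at all.
```

It's not a firewall, but it removes the default flat network where one popped container can scan everything.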

If you are orchestrating with Kubernetes (and let's be honest, by late 2020, who isn't?), you need to define NetworkPolicies. Deny all traffic by default, then whitelist specific paths.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress

Once you apply this, silence. Nothing moves. Then you explicitly allow traffic on port 443 or to your database backend. This mitigation strategy is essential for stopping lateral movement during a breach.
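As an illustration of re-opening one path (the `frontend` and `billing` labels here are placeholders for your own), a follow-up policy might look like:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-billing
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: billing
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 5432
```

Note that NetworkPolicies are only enforced if your CNI plugin supports them (Calico, Cilium, and friends do; the basic bridge CNI does not).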

5. Supply Chain Security

You aren't writing all your code. You're pulling `node:14` or `postgres:12`. Do you know what's in those layers? Vulnerability scanning is mandatory. Tools like Trivy (which has matured significantly this year) should be part of your CI/CD pipeline.

# Scan your image before it ever hits production
trivy image --severity HIGH,CRITICAL coolvds/my-app:v1.0.0

If the scan returns critical CVEs, the build fails. Simple as that. We cannot deploy code with known exploits just because "management wants the feature live."
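Wiring that "build fails" rule into a pipeline is a one-liner thanks to Trivy's `--exit-code` flag; a minimal CI step could look like this:

```shell
#!/bin/sh
set -e
# --exit-code 1 makes trivy return non-zero when HIGH/CRITICAL
# findings exist, which any CI runner treats as a failed step.
trivy image --exit-code 1 --severity HIGH,CRITICAL coolvds/my-app:v1.0.0
echo "Image clean - safe to push to the registry"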

The Infrastructure Reality Check

All these configurations—immutable filesystems, capability drops, network policies—consume CPU cycles and I/O. When you enable aggressive logging for auditing (another requirement for GDPR/Datatilsynet compliance), your I/O wait times will spike on standard HDD or cheap SSD storage.
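Concretely, the CIS Docker Benchmark recommends auditd watches on the daemon and its data directories, and every one of these rules adds write-path overhead on the host:

```shell
# CIS-style audit rules for the Docker daemon and its state
auditctl -w /usr/bin/dockerd -k docker
auditctl -w /var/lib/docker -k docker
auditctl -w /etc/docker -k docker
# Persist them under /etc/audit/rules.d/ so they survive a reboot
```

On a busy node, those watches generate a steady stream of audit records, and slow storage turns that stream into I/O wait.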

Security requires performance overhead. This is why we built CoolVDS on pure NVMe storage arrays. When your security agent is scanning files and your kernel is enforcing seccomp profiles, you need the I/O throughput to handle it without lagging your application.

| Feature | Standard VPS | CoolVDS NVMe Instance |
| --- | --- | --- |
| Virtualization | Often shared kernel (container-based) | Full KVM hardware isolation |
| Storage latency | 5-10 ms (SATA SSD) | <0.5 ms (NVMe) |
| Data location | Cloud/unknown | Norway (GDPR compliant) |

Final Thoughts

Security is not a product you buy; it's a state of mind. But where you host that state of mind matters. With the Schrems II ruling shaking up the legal landscape, and attackers becoming more sophisticated with container escapes, you need a foundation that is legally safe and technically robust.

Don't let a slow disk be the reason you disable security logging. Don't let a shared kernel be the reason a competitor sees your data.

Secure your stack today. Deploy a hardened KVM instance on CoolVDS in Oslo. You get the raw NVMe performance you need to run security tools without the lag. Spin up your secure instance now.