Surviving the Container Wild West: Hardening Strategies for Post-Schrems II Infrastructure

I watched a staging cluster turn into a Monero mining rig last Tuesday. It wasn't a zero-day exploit or a sophisticated state-sponsored attack. It was a default Docker configuration, an exposed API port, and a developer who thought privileged: true was a valid debugging strategy. The server melted. The CPU steal went through the roof. The client was furious.

In the wake of the EU Court of Justice's Schrems II ruling this July, the stakes have shifted. Data sovereignty isn't just a compliance checkbox anymore; it's a liability minefield. If you are hosting containers that process Norwegian citizen data, and those containers are porous, you aren't just risking downtime. You are inviting Datatilsynet (The Norwegian Data Protection Authority) to your boardroom.

Containers are not virtual machines. They are processes with a fancy view of the kernel. If you treat them like VMs, you will get burned. Here is how we lock them down on CoolVDS infrastructure, ensuring that high-performance NVMe storage doesn't become a high-speed lane for exfiltration.

1. The Root Problem (Literally)

By default, the process inside the container runs as root. If an attacker breaks out of the container (the runc vulnerability, CVE-2019-5736, showed exactly how; that's a story for another day), they are root on your host. Game over.

Stop writing Dockerfiles that end with a CMD running as UID 0. Create a dedicated user instead. It adds three lines to your build and saves you years of therapy.

Code Example: The "User 1001" Standard

This is the bare minimum for any service running on our VPS Norway instances:

FROM alpine:3.12
RUN addgroup -S appgroup && adduser -S -G appgroup -u 1001 appuser
USER appuser
WORKDIR /home/appuser
COPY --chown=appuser:appgroup . .
ENTRYPOINT ["./my-binary"]

2. Linux Capabilities: Slice the Kernel Privileges

Linux splits root privileges into distinct units called capabilities. A web server needs to bind to a port (maybe) and write to a log file. It does not need to load kernel modules, manipulate the system clock, or trace processes. Yet, Docker gives it almost everything by default.
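To see what you are actually granting, read a container's effective capability mask from /proc and decode it bit by bit. A minimal sketch; the mask below is a commonly observed Docker default, so treat it as an assumption and read the real value from your own container:

```shell
# Read the effective capability mask of a container's PID 1:
#   docker exec <container> grep CapEff /proc/1/status
# Decode a single bit of that mask. CAP_NET_BIND_SERVICE is capability
# number 10 in the kernel's table; the mask here is a typical Docker default.
mask=00000000a80425fb
bit=10
if [ $(( (0x$mask >> bit) & 1 )) -eq 1 ]; then
  echo "CAP_NET_BIND_SERVICE is present"
else
  echo "CAP_NET_BIND_SERVICE is absent"
fi
```

The same arithmetic with bit 21 (CAP_SYS_ADMIN) returns 0 for this mask, which is exactly what you want to see.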

The battle-hardened approach is to drop everything and add back only what is strictly necessary: a whitelist, not a blacklist.

Command: Draconian Capability Dropping

docker run -d --read-only \
  --cap-drop=ALL \
  --cap-add=NET_BIND_SERVICE --cap-add=SETGID --cap-add=SETUID \
  --tmpfs /var/run --tmpfs /var/cache/nginx \
  nginx:alpine

(SETGID and SETUID stay in so the master process can drop its workers to the nginx user, and the two tmpfs mounts give nginx the only writable paths it needs.) If you run this, the container cannot change file ownerships, it cannot kill processes outside its scope, and it cannot modify the network stack. It can only serve traffic.

Pro Tip: On CoolVDS instances, we recommend using the --read-only flag combined with tmpfs mounts for /tmp and /run. This prevents attackers from downloading exploits and making them executable on the disk. NVMe I/O is fast, but memory is faster.

3. Kubernetes SecurityContext: The YAML Wall

For those of you orchestrating on Kubernetes 1.18 or 1.19, the logic moves from the CLI to the deployment YAML. Managing SecurityContext is tedious, but necessary.

In a recent project migrating a financial services client to our Oslo datacenter, we enforced a strict policy: no container starts if it requires root. This broke half their Helm charts. We fixed them. Security is inconvenient by design.

Here is a production-ready securityContext block that passes our internal audits:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: secure-backend
  namespace: production
spec:
  selector:
    matchLabels:
      app: secure-backend
  template:
    metadata:
      labels:
        app: secure-backend
    spec:
      securityContext:
        runAsUser: 1000
        runAsGroup: 3000
        fsGroup: 2000
      containers:
      - name: backend-api
        image: private-registry.coolvds.com/api:v2.4
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          capabilities:
            drop:
              - ALL
            add:
              - NET_BIND_SERVICE
        volumeMounts:
        - name: tmp-volume
          mountPath: /tmp
      volumes:
      - name: tmp-volume
        emptyDir: {}

Note the allowPrivilegeEscalation: false. This prevents the child process from gaining more privileges than its parent, neutralizing setuid binaries.
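You can verify the flag actually landed: allowPrivilegeEscalation: false maps to the kernel's no_new_privs bit, which every Linux process exposes in /proc. A quick sketch; the kubectl target in the comment is illustrative:

```shell
# Inside a running pod (deployment name is illustrative):
#   kubectl exec deploy/secure-backend -- grep NoNewPrivs /proc/1/status
# should report "NoNewPrivs: 1".
# The field exists for every Linux process, so the same check works locally:
grep NoNewPrivs /proc/self/status
```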

4. Network Isolation and the "Noisy Neighbor" Myth

One of the persistent myths in hosting is that containers handle isolation perfectly. They don't. They share a kernel. If a neighbor container saturates the connection tracking table (conntrack), your packets get dropped.
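Before blaming the network, check how close the host is sitting to the conntrack ceiling. A quick sketch; the /proc paths only exist once the conntrack module is loaded, so a fallback is included:

```shell
# Current vs. maximum tracked connections on the host.
count=$(cat /proc/sys/net/netfilter/nf_conntrack_count 2>/dev/null || echo "n/a")
max=$(cat /proc/sys/net/netfilter/nf_conntrack_max 2>/dev/null || echo "n/a")
echo "conntrack usage: $count / $max"
```

If the count regularly approaches the max, raise net.netfilter.nf_conntrack_max via sysctl before your packets start disappearing.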

This is why the underlying infrastructure matters. At CoolVDS, we use KVM (Kernel-based Virtual Machine) for our VPS instances. This provides a hardware virtualization gap. You run your Docker/K8s cluster inside a KVM slice. Your "noisy neighbor" is walled off by the hypervisor, not just a cgroup namespace.

Comparison: Isolation Layers

Feature                 | Shared Hosting / OpenVZ            | CoolVDS (KVM + NVMe)
Kernel Access           | Shared with hundreds of users      | Dedicated kernel per instance
Docker Security         | Risky (kernel exploits hit host)   | High (exploits contained to the VM)
Schrems II Compliance   | Vague (data mixing risks)          | Clear (strict logical separation)

5. Auditing and Scanning

You cannot secure what you cannot see. In 2020, static analysis of container images is mandatory. We have started integrating Trivy into our CI/CD pipelines. It’s faster than Clair and easier to set up.

Code: Scanning an Image before Deploy

trivy image --exit-code 1 --severity HIGH,CRITICAL coolvds/internal-tool:latest

If this command returns a non-zero exit code, the pipeline fails. No exceptions. We recently caught a critical vulnerability in a base Node.js image that would have allowed arbitrary code execution and leaked environment variables.
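To make the gating explicit, the check can be wrapped in a small helper in the pipeline script. This is a sketch; the function name and image tag are ours, not standard tooling:

```shell
# Pipeline gate sketch. trivy's --exit-code 1 makes the scan itself return
# non-zero when findings match the severity filter, so the if-branch below
# is all the gating logic a CI job needs.
gate() {
  if trivy image --exit-code 1 --severity HIGH,CRITICAL "$1"; then
    echo "scan clean: $1"
  else
    echo "blocked: $1 has HIGH/CRITICAL findings" >&2
    return 1
  fi
}
# In CI:  gate coolvds/internal-tool:latest || exit 1
```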

6. The AppArmor Safety Net

For high-risk environments, standard DAC (Discretionary Access Control) permissions aren't enough. You need MAC (Mandatory Access Control). AppArmor profiles restrict which resources a container can touch, even if an attacker bypasses the user-level permission checks.

Here is a profile we use for Nginx containers that serve static assets. It explicitly denies write access to everything except the log directory.

#include <tunables/global>

profile docker-nginx flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/base>

  network inet tcp,
  network inet udp,
  network inet icmp,

  deny network raw,
  deny network packet,

  # Read-only system files
  /** r,
  /etc/nginx/** r,
  /usr/share/nginx/html/** r,

  # Write access logs only
  /var/log/nginx/* w,
  
  deny /bin/** x,
  deny /sbin/** x,
}

Loading this profile on the host (apparmor_parser -r -W /path/to/profile) and referencing it in your Docker run command (--security-opt apparmor=docker-nginx) adds a layer of defense that automated scripts usually cannot bypass.

7. Local Latency and Legal Reality

Why go through all this trouble? Because latency and law are the two constants of our job. Hosting in Norway (on CoolVDS) gives you millisecond access to NIX (Norwegian Internet Exchange) and compliance with strict Norwegian privacy laws.

However, physical security (our datacenters) and network security (our DDoS protection) mean nothing if your host exposes port 2375 (the unencrypted Docker Engine API) to the internet. We provide the fortress walls; you have to lock the internal doors.
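A thirty-second self-audit for that particular door, sketched as a small helper; the function name is ours, not a standard tool:

```shell
# Is the Docker Engine API reachable over TCP at all? With the default
# unix-socket setup this should find nothing (2375 is plaintext, 2376 is TLS).
check_docker_tcp() {
  if ss -ltn 2>/dev/null | grep -qE ':(2375|2376)[^0-9]'; then
    echo "WARNING: Docker API is listening on TCP"
  else
    echo "no Docker API TCP listener found"
  fi
}
check_docker_tcp
```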

Final Thoughts

Security is a trade-off between convenience and paranoia. In a post-Schrems II world, paranoia is just good business sense. By dropping capabilities, enforcing read-only filesystems, and isolating workloads on KVM-based CoolVDS instances, you build infrastructure that doesn't just survive the audit—it survives the internet.

Don't wait for a breach to learn these flags. Spin up a secure KVM instance on CoolVDS today, apply these configs, and sleep slightly better tonight.