Your Containers Are Leaking: A Field Guide to Hardening Docker & K8s in 2024

Let’s be honest: docker run is a vulnerability waiting to happen. I’ve watched seasoned engineers deploy containers with default settings, assuming the isolation magic of namespaces and cgroups will save them. It won't. If you are running a container as root, you are essentially handing the keys to your host kernel to anyone who can break out of that runtime. And in 2024, container breakouts aren't just theoretical research papers; they are automated scripts running on botnets looking for exposed APIs.

I remember a specific incident last winter involving a logistics firm in Drammen. They were running a standard ELK stack on a shared hosting provider (not us). A vulnerability in Logstash allowed remote code execution. Because the container process was running as root and the filesystem was writable, the attacker pulled a crypto-miner binary, masked it as a system process, and saturated the CPU. The hosting provider suspended them. The business lost 14 hours of data processing.

If they had applied three lines of configuration changes, the attack would have failed. Here is how you lock down your infrastructure before you become the subject of a Datatilsynet inquiry.

1. The Root Problem (Literally)

By default, the main process inside a Docker container runs as PID 1 with UID 0 (root) unless the image says otherwise. This is convenient while building the image but catastrophic for security. If an attacker compromises your application, they have root privileges inside the container. If they then find a kernel exploit, they have root on your node.

The Fix: Create a specific user in your Dockerfile. Never let the runtime default to root.

FROM alpine:3.20

# Create an unprivileged group and user
RUN addgroup -S appgroup && adduser -S appuser -G appgroup

# Install dependencies while still root
RUN apk --no-cache add curl

# Copy the application binary and hand ownership to the non-root user
WORKDIR /app
COPY --chown=appuser:appgroup my-app .

# Switch to the non-root user
USER appuser

# Everything from here on runs as appuser
ENTRYPOINT ["./my-app"]

When you deploy this on CoolVDS, our underlying KVM virtualization adds a hardware-assisted layer of isolation on top, but you should still practice defense-in-depth. Don't rely solely on the hypervisor.

2. Immutable Infrastructure: Read-Only Filesystems

Why does your web application need write access to /usr/bin? It doesn't. Mutability is where persistence lives. If an attacker cannot write a backdoor to the disk, they have to run it entirely in memory, which is harder to sustain and easier to wipe.

In Kubernetes, force this via the securityContext. This is non-negotiable for high-security environments, especially if you are handling sensitive Norwegian consumer data under GDPR.

apiVersion: v1
kind: Pod
metadata:
  name: secure-app
spec:
  containers:
  - name: my-app
    image: my-registry/app:v1.2
    securityContext:
      readOnlyRootFilesystem: true
      runAsNonRoot: true
      runAsUser: 1001
    volumeMounts:
    - mountPath: /tmp
      name: tmp-volume
  volumes:
  - name: tmp-volume
    emptyDir: {}

Pro Tip: Most apps will crash on a read-only filesystem because they try to write logs or temp files. Mount an emptyDir volume at /tmp or /var/log (as shown above) to give them a scratchpad that disappears when the pod dies.
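
If you are on plain Docker rather than Kubernetes, the same pattern exists at the CLI. A rough equivalent of the manifest above (the image tag is illustrative):

# Read-only root filesystem, non-root UID, and a throwaway tmpfs scratchpad
docker run --read-only --tmpfs /tmp \
  --user 1001 \
  my-registry/app:v1.2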

3. Drop Linux Capabilities

Linux "root" is actually a collection of capabilities (CAP_CHOWN, CAP_NET_ADMIN, etc.). Most web servers only need to bind to a port and write to a socket. They do not need to load kernel modules or change the system time.

The smartest move is to drop ALL capabilities and then add back only what is strictly necessary. This follows the Principle of Least Privilege.

docker run --cap-drop=ALL \
  --cap-add=NET_BIND_SERVICE \
  --cap-add=CHOWN --cap-add=SETGID --cap-add=SETUID \
  nginx:alpine

(The stock nginx image runs its master process as root and then switches to a worker user, so it needs CHOWN, SETGID, and SETUID on top of NET_BIND_SERVICE. A binary built to run unprivileged from the start can often get by with an empty capability set.)

Or in your Kubernetes manifest:

securityContext:
  capabilities:
    drop:
      - ALL
    add:
      - NET_BIND_SERVICE
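
To verify what actually survives inside the container, read the effective capability bitmask of its init process. The decode step assumes capsh (from the libcap package) is available on the host:

# Show the effective capability set of PID 1 in a locked-down container
docker run --rm --cap-drop=ALL --cap-add=NET_BIND_SERVICE \
  alpine:3.20 grep CapEff /proc/1/status

# Decode the bitmask on the host; 0x400 is cap_net_bind_service
capsh --decode=0000000000000400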

4. Supply Chain Security: Trust Nothing

After the XZ Utils backdoor earlier this year, trusting upstream repositories blindly is negligence. You need to scan images before they ever hit your production cluster. We use Trivy in our CI/CD pipelines.

Here is a quick check you can run on your local machine right now:

# Install Trivy (if you haven't already)
curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | sh -s -- -b /usr/local/bin v0.50.0

# Scan an image
trivy image python:3.9-slim

You will likely see a list of "High" and "Critical" vulnerabilities. If the base OS itself is carrying unpatched CVEs, switch to a distroless image or a minimal Alpine base (after checking that your binaries are compatible with musl libc).
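
In a pipeline, the scan should be a gate, not a report. A minimal sketch, assuming Trivy is already installed on the build agent and the image tag matches the one used in section 2:

# Fail the build when fixable High/Critical vulnerabilities are found
trivy image --exit-code 1 \
  --severity HIGH,CRITICAL \
  --ignore-unfixed \
  my-registry/app:v1.2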

5. The Infrastructure Layer: Why "VPS Norway" Matters

You can configure the tightest SELinux policies and the most restrictive NetworkPolicies, but if the underlying server suffers from "noisy neighbor" syndrome or weak hypervisor isolation, both your stability and your isolation guarantees are compromised.

This is where the choice of hosting becomes architectural, not just financial. In the Norwegian market, latency to the NIX (Norwegian Internet Exchange) is crucial for real-time applications.

Comparison: Standard Container Hosting vs. CoolVDS

Feature         | Generic VPS / Shared Container Host        | CoolVDS KVM Instances
Isolation       | Shared kernel (soft isolation via cgroups) | Hardware virtualization (KVM)
Disk I/O        | Often SATA SSD (shared heavily)            | Dedicated NVMe
Data Residency  | Often routed through Frankfurt/Amsterdam   | Local Norway data centers
DDoS Protection | Basic L3/L4                                | Advanced L7 mitigation

When we built the CoolVDS platform, we chose KVM specifically because it treats your container host as a completely separate machine. Your kernel is your kernel. This prevents container escape vulnerabilities from affecting other tenants, a massive compliance win for GDPR and Schrems II requirements.
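
If you want to sanity-check the latency argument yourself, a short ping or mtr run from your current host toward an Oslo endpoint gives a usable baseline (the hostname below is a placeholder, not a real CoolVDS address):

# Round-trip time and per-hop loss toward Oslo
ping -c 20 oslo-test.example.net
mtr --report --report-cycles 50 oslo-test.example.net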

6. Network Policies: Zero Trust Inside the Cluster

By default, all pods in a Kubernetes cluster can talk to all other pods. If an attacker breaches your frontend, they can port-scan your database directly. Shut that down.

Start with a "Deny All" policy and whitelist traffic explicitly.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress

This will break everything immediately—which is good. It forces you to understand exactly what traffic flows your application actually requires. Enable flows one by one.
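
From there, open only the flows you can name. A sketch of one such allowance, assuming a frontend tier labeled app: frontend that must reach the app pods on TCP 8080 (both the labels and the port are illustrative):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-app
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: my-app
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080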

Final Thoughts

Container security in 2024 requires a shift in mindset. You cannot rely on the perimeter firewall to save you. The threat is often inside the supply chain or the runtime configuration.

Hardening your containers reduces your attack surface, but running them on robust, high-performance infrastructure ensures that when you do face load or an attack, your systems stay standing. Whether you are running a high-traffic Magento store or a critical microservices backend, the foundation matters.

Ready to test your hardened stack on low-latency, NVMe-powered infrastructure? Deploy a KVM instance on CoolVDS in Oslo today and see the difference dedicated resources make.