You Are Probably Running Containers as Root, and It Terrifies Me
Let’s be honest. You pulled an image from Docker Hub, ran it, and it worked. You shipped it. Now it’s production. If I had a krone for every time I’ve audited a Kubernetes cluster or a standalone Docker host in Oslo and found processes running as UID 0, I could retire to a cabin in Geilo tomorrow. Container adoption in 2019 has outpaced security understanding. We are building castles on sand.
Earlier this year, we saw CVE-2019-5736. This was a runC vulnerability that allowed a malicious container to overwrite the host runC binary and gain root execution on the host machine. If that didn't wake you up, nothing will. Containers are not Virtual Machines. They are isolated processes. The isolation is thin. If you don't thicken that wall, you are exposing your entire infrastructure.
This guide isn't about theoretical compliance. It is about how to configure your systems right now, in December 2019, so you don't end up explaining a data breach to Datatilsynet next year.
1. The "Root" of All Evil
By default, processes inside a Docker container run as root. If an attacker compromises your application (say, via a vulnerable Struts component), they are root inside the container. If they break out—like with the runC exploit mentioned above—they are root on your server.
Fixing this is trivial, yet often ignored. Create a user. Use it.
# The Wrong Way
FROM node:10
WORKDIR /app
COPY . .
CMD ["node", "index.js"]
# The Right Way
FROM node:10-alpine
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
WORKDIR /app
COPY . .
# Change ownership
RUN chown -R appuser:appgroup /app
# Switch context
USER appuser
CMD ["node", "index.js"]
This simple change in your Dockerfile mitigates a massive class of privilege escalation attacks. Do not wait for Kubernetes PodSecurityPolicies (PSP) to force your hand. Do it at the image build level.
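If you do run on Kubernetes, the same rule can be enforced at the pod level rather than waiting for a cluster-wide PSP. Here is a minimal sketch of a pod spec (the names `my-secure-app` and the UID 1000 are illustrative placeholders) that refuses to start a container resolving to root:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-secure-app
spec:
  containers:
    - name: app
      image: my-secure-app:latest
      securityContext:
        runAsNonRoot: true             # kubelet rejects the container if it would run as UID 0
        runAsUser: 1000                # optionally pin a specific non-root UID
        allowPrivilegeEscalation: false
```

Note that `runAsNonRoot` works best when the image declares a numeric UID in its `USER` instruction, since the kubelet cannot always resolve a username to a UID.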
2. Drop Linux Capabilities
The Linux kernel divides root privileges into distinct units called capabilities. Does your web server need to change the system time? No. Does it need to load kernel modules? Absolutely not. Yet, by default, Docker grants a container roughly a dozen capabilities, including CAP_CHOWN, CAP_NET_RAW, and CAP_SETUID.
Adhere to the principle of least privilege. Drop everything, then add back only what is strictly necessary. For a typical Nginx or Node app, you barely need anything.
docker run -d \
--cap-drop=ALL \
--cap-add=NET_BIND_SERVICE \
--read-only \
--tmpfs /tmp \
my-secure-app:latest
Note the --read-only flag. This makes the container's root filesystem immutable. If an attacker gets a shell, they can't install a crypto miner or a backdoor script because they cannot write to the disk. We map a tmpfs for temporary files that vanish on restart.
Pro Tip: Use --security-opt=no-new-privileges. This kernel feature prevents processes from gaining more privileges (e.g., via setuid binaries) during execution. It is a low-overhead switch that breaks many exploit chains.
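If you deploy with docker-compose rather than raw `docker run`, the same hardening translates directly into the service definition. This is a sketch under the 3.7 file format (the service and image names are illustrative); note that `cap_add`/`cap_drop` apply when running via `docker-compose up`, but are ignored by `docker stack deploy` in Swarm mode:

```yaml
version: '3.7'
services:
  app:
    image: my-secure-app:latest
    cap_drop:
      - ALL                       # start from zero privileges
    cap_add:
      - NET_BIND_SERVICE          # allow binding ports below 1024 as non-root
    read_only: true               # immutable root filesystem
    tmpfs:
      - /tmp                      # writable scratch space, wiped on restart
    security_opt:
      - no-new-privileges:true    # block setuid-based privilege escalation
```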
3. The Infrastructure Layer: Why KVM is Non-Negotiable
This is where many developers get burned by "cheap" VPS hosting. There are two main types of virtualization you'll find in the Nordic market: Container-based (like OpenVZ or LXC) and Hypervisor-based (like KVM or Xen).
If you run Docker inside an OpenVZ VPS, you are essentially running containers inside a container. You share the kernel not just with your own processes, but with other customers on the same physical node. If a kernel panic occurs, or a kernel exploit is triggered, the blast radius is the entire node.
This is why CoolVDS exclusively uses KVM (Kernel-based Virtual Machine).
With KVM, your VPS has its own dedicated kernel. It is hardware-assisted virtualization. Even if an attacker escapes your Docker container, they are trapped inside your KVM instance. They cannot touch the physical host or other customers. For businesses in Norway dealing with sensitive data, relying on shared-kernel hosting is a compliance risk you cannot afford.
4. Network Segmentation and Local Latency
Don't bind ports to 0.0.0.0 unless you mean it. If you have a database container that only your web container needs to talk to, use a Docker user-defined bridge network. Do not expose port 3306 to the public internet.
version: '3.7'
services:
  db:
    image: postgres:12-alpine
    networks:
      - backend
    # No ports section here. It is not exposed to the host.
    environment:
      POSTGRES_PASSWORD: ${DB_PASS}
  web:
    image: my-app:v1
    ports:
      - "127.0.0.1:8080:8080" # Bind to localhost only if using a reverse proxy on host
    networks:
      - backend
networks:
  backend:
    driver: bridge
Furthermore, consider where your images are stored and where your workloads run. Latency affects security operations too. When you are pushing gigabytes of image layers or streaming logs to an ELK stack, network throughput matters.
Our CoolVDS infrastructure in Oslo is peered directly with major Nordic ISPs (Telenor, Telia). Low latency means your CI/CD pipelines finish faster, and your security scanners (like Clair or Anchore) don't time out pulling layers. We use NVMe storage which significantly reduces I/O wait times during heavy image builds.
5. The GDPR Angle
We are still navigating the post-GDPR landscape. While the data processing agreement (DPA) is legal paperwork, the technical implementation is on you. If your container logs contain PII (Personally Identifiable Information) and you are shipping those logs to a third-party logging service hosted outside the EU/EEA, you have a problem.
Hosting on a Norwegian VPS gives you data sovereignty. You keep the logs local. You keep the database volumes local. Datatilsynet (The Norwegian Data Protection Authority) requires that you have control over your data flow. Running secure containers on sovereign infrastructure is the easiest way to demonstrate that control.
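One practical first step is keeping container logs on your own disk instead of streaming them straight to a third-party SaaS. Docker's built-in `json-file` log driver with rotation, configured in `/etc/docker/daemon.json`, does exactly that; the size and file-count values below are illustrative and should be tuned to your retention policy:

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "50m",
    "max-file": "5"
  }
}
```

Restart the Docker daemon after editing this file. Logs then stay under `/var/lib/docker/containers/` on your Norwegian VPS until you explicitly ship them somewhere else.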
Summary Checklist for 2019
| Security Layer | Action Item | Tool/Flag |
|---|---|---|
| Image | Don't use root user | USER <uid> |
| Runtime | Limit capabilities | --cap-drop=ALL |
| Filesystem | Prevent writing to root | --read-only |
| Infrastructure | Hardware Isolation | CoolVDS KVM |
Security is not a product you buy; it is a process you follow. But having the right foundation makes the process possible. You can't secure a container effectively if the underlying server is sluggish or insecure by design.
Ready to lock down your infrastructure? Deploy a hardened KVM instance on CoolVDS today. Our NVMe-backed storage is ready for your heavy container workloads, and our Oslo datacenter ensures your data stays exactly where it belongs.