Container Security in 2021: Stop Deploying Root Vulnerabilities to Production
Let’s be honest: docker run is the new chmod 777. It works, it's fast, and in a production environment, it is absolutely terrifying. After the SolarWinds supply chain attack late last year and the recent Codecov bash uploader breach in April, the illusion of the "isolated container" has been shattered for anyone paying attention. If you are deploying containers in 2021 with default settings, you aren't doing DevOps; you're just hosting a playground for privilege escalation.
I've spent the last month auditing Kubernetes clusters for a fintech client in Oslo. The number of pods running as root with full capability sets was enough to keep me awake longer than the midnight sun. Efficiency is great, but if your container runtime is the weak link, your fancy NVMe storage and low latency won't save you from a crypto-mining hijacker.
This guide isn't about theoretical compliance. It’s about the raw technical configurations you need to apply today to survive the hostile internet. We will cover image hardening, runtime defense, and why the underlying hardware virtualization—specifically KVM—is your last line of defense.
1. The Root Addiction: Just Say No
By default, processes inside a Docker container run as root. If an attacker exploits a vulnerability in your application (say, a Node.js prototype pollution bug) and then breaks out of the container to the host, they land on the host as root, unless you have enabled user namespace remapping, which almost nobody does. Game over.
You must enforce non-root users at the image level. It’s a three-line fix that 90% of developers skip.
The Fix: explicit UID/GID
Do not rely on the base image's user. Create a specific user with a known UID.
```dockerfile
FROM alpine:3.14

# Create a group and user with a fixed, known UID/GID
RUN addgroup -S -g 10001 appgroup && \
    adduser -S -u 10001 -G appgroup appuser

# Tell Docker to switch to this user
USER appuser
WORKDIR /home/appuser
COPY --chown=appuser:appgroup . .

CMD ["./my-secure-app"]
```
Pro Tip: In Kubernetes, enforce this at the cluster level using PodSecurityPolicies (PSP) or, since PSPs are being deprecated in version 1.21, start looking at OPA Gatekeeper. If a pod requests to run as root (UID 0), the admission controller should reject it immediately.
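Whichever admission controller you end up with, enforce the same rule at the workload level too, so a single misconfigured policy doesn't leave you exposed. A minimal sketch (pod name, image tag, and UID are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secure-app
spec:
  securityContext:
    runAsNonRoot: true      # kubelet refuses to start any container as UID 0
    runAsUser: 10001        # matches the UID baked into the image
    runAsGroup: 10001
  containers:
    - name: app
      image: myapp:v1.0.0
      securityContext:
        allowPrivilegeEscalation: false
```

With `runAsNonRoot: true`, a pod that tries to start as root fails at the kubelet even if it slipped past admission control.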
2. Minimal Base Images: Less Code, Fewer CVEs
I still see people using FROM ubuntu:20.04 for a simple Go binary. You are shipping a full OS distro with package managers, shells, and libraries you don't need. Every extra binary is a gadget for an attacker to use during lateral movement.
In 2021, the standard is Distroless (by Google) or strictly version-pinned Alpine. Distroless images contain only your application and its runtime dependencies. No shell, no package manager.
Comparison: Attack Surface
| Base Image | Size (approx) | Shell Available? | Risk Profile |
|---|---|---|---|
| Ubuntu 20.04 | 70 MB+ | Yes (bash/sh) | High (Full toolset) |
| Alpine 3.14 | 5 MB | Yes (ash) | Medium |
| gcr.io/distroless/static | 2 MB | No | Lowest |
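For that simple Go binary, moving to distroless is a two-stage build. A sketch (the module path and binary name are placeholders for your own project):

```dockerfile
# Build stage: full toolchain, never shipped to production
FROM golang:1.16 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app ./cmd/server

# Runtime stage: just the binary. No shell, no package manager.
FROM gcr.io/distroless/static
COPY --from=build /app /app
USER nonroot:nonroot
CMD ["/app"]
```

The final image contains your binary and nothing else; an attacker who lands an RCE has no `sh` to spawn and no `apt` or `curl` to pull tooling with.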
3. Supply Chain Security: Trust Nothing
Pulling node:latest is a gamble. You don't know when the image changed or what's inside it. You need to scan images for CVEs inside your CI/CD pipeline before they ever touch your staging environment.
We use Trivy because it's fast, integrates easily with GitHub Actions/GitLab CI, and maintains an up-to-date vulnerability database.
```shell
# Install Trivy (v0.18.3 - current stable)
apt-get install -y wget apt-transport-https gnupg lsb-release
wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | apt-key add -
echo "deb https://aquasecurity.github.io/trivy-repo/deb $(lsb_release -sc) main" | tee -a /etc/apt/sources.list.d/trivy.list
apt-get update
apt-get install -y trivy

# Scan your image, failing on critical severity
trivy image --exit-code 1 --severity CRITICAL myapp:v1.0.0
```
If this command returns exit code 1, the build fails. No manual overrides. This is how you prevent a "patch Tuesday" nightmare from becoming a Friday night incident.
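In GitHub Actions, you don't even need the manual install: the official aquasecurity/trivy-action wraps the same scan. A sketch of the pipeline step (input names taken from the action's documented interface; check its README for your version):

```yaml
# One step inside an existing build job; assumes the image
# was built and tagged earlier in the same job
- name: Scan image for critical CVEs
  uses: aquasecurity/trivy-action@master
  with:
    image-ref: myapp:v1.0.0
    exit-code: '1'
    severity: CRITICAL
```

A non-zero exit code fails the workflow, which is exactly the "no manual overrides" gate described above.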
4. Runtime Defense: Read-Only Filesystems
Once a container is running, it should be immutable. If an attacker manages to exploit a Remote Code Execution (RCE) vulnerability, their first step is usually to download a payload or modify a configuration file. Make that impossible.
Mount the root filesystem as read-only. If your app needs to write temp files (logs, cache), mount a specific tmpfs volume for that path only.
Docker Compose Example:
```yaml
version: '3.8'
services:
  web:
    image: nginx:1.21-alpine
    read_only: true
    tmpfs:
      - /var/cache/nginx
      - /var/run
    volumes:
      # Mount config as read-only
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    security_opt:
      - no-new-privileges:true
```
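If you deploy with plain `docker run` rather than Compose, the same hardening maps directly onto CLI flags (a sketch, mirroring the service above):

```shell
docker run -d \
  --read-only \
  --tmpfs /var/cache/nginx \
  --tmpfs /var/run \
  --security-opt no-new-privileges:true \
  -v "$(pwd)/nginx.conf:/etc/nginx/nginx.conf:ro" \
  nginx:1.21-alpine
```

nginx still gets the two writable tmpfs paths it needs for its cache and pid file; everything else on the filesystem is immutable at runtime.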
5. The Infrastructure Layer: Why Virtualization Matters
Here is the uncomfortable truth about container security: Containers share the host kernel.
If you are running containers on a cheap, oversold VPS provider using OpenVZ or LXC, you are not just sharing the CPU; you are sharing the kernel with every other customer on that physical node. A kernel panic triggered by a neighbor brings you down. A kernel exploit (like Dirty COW, CVE-2016-5195, from a few years back) could allow a neighbor to inspect your memory.
This is where CoolVDS takes a hard stance. We exclusively use KVM (Kernel-based Virtual Machine) virtualization.
With KVM, your VPS has its own isolated kernel. Even if you run a container that gets compromised and breaks out to the host OS, the attacker is trapped inside your Virtual Machine. They cannot touch the hypervisor or other clients. For any production workload processing sensitive data—especially under GDPR mandates—hardware-level virtualization isn't a luxury; it's a requirement.
6. The Norwegian Context: Data Sovereignty & Latency
Since the Schrems II ruling in July 2020, the legality of transferring personal data to US-owned cloud providers has been shaky at best. The Norwegian Data Protection Authority (Datatilsynet) has made it clear that reliance on standard contractual clauses isn't a "get out of jail free" card anymore.
Hosting on CoolVDS ensures your data stays physically in Norway (or within the EEA) on infrastructure owned by a European entity. Beyond compliance, there is the physics of performance. If your user base is in Scandinavia, why route packets through Frankfurt? Our latency to NIX (Norwegian Internet Exchange) is negligible. We combine high-performance NVMe storage with local peering to ensure that your secured, hardened containers serve traffic faster than your competitors.
Final Thoughts
Security is a process, not a product. Start by dropping root privileges today. Integrate Trivy into your CI pipeline tomorrow. And when you are ready to deploy, ensure your foundation is solid.
Don't let a shared kernel be your single point of failure. Deploy your hardened containers on a CoolVDS KVM instance and get the isolation your architecture demands.