Stop Treating Containers Like Light VMs: A 2020 Survival Guide
It is December 2020. If you are reading this, you survived the transition to remote work, but your infrastructure might not have. The recent SolarWinds supply chain attack has sent a clear message to every sysadmin in Oslo and beyond: your perimeter is gone. We are seeing a massive uptick in attacks targeting the container runtime itself, exploiting the fact that too many developers treat Docker containers as "lightweight Virtual Machines." They are not.
I’ve spent the last six months cleaning up messes where a "secure" cluster was compromised because someone mounted /var/run/docker.sock into a CI runner without thinking. In Norway, where the Datatilsynet (Data Protection Authority) is rightfully aggressive about GDPR compliance following the Schrems II ruling this July, a breach isn't just an outage; it's a legal catastrophe.
Here is how we lock down container workloads on Linux, moving beyond the basics and into kernel-level defense.
1. The Root Problem (Literally)
By default, the process inside a Docker container runs as PID 1 with UID 0 (root). Namespaces provide isolation, but Docker does not enable user namespace remapping out of the box, so root inside the container is root on the host the moment someone breaks out of that namespace, which happens more often than the marketing brochures admit. If you are running Node.js or Python apps as root, stop.
The Fix: Create a dedicated user in your Dockerfile. This is non-negotiable for production.
# The Wrong Way
FROM node:14-alpine
WORKDIR /app
COPY . .
CMD ["node", "index.js"]
# The Right Way (2020 Standard)
FROM node:14-alpine
RUN addgroup -g 1001 -S nodejs && \
adduser -u 1001 -S nodejs -G nodejs
WORKDIR /app
COPY --chown=nodejs:nodejs . .
USER nodejs
CMD ["node", "index.js"]
When you deploy this, even if an attacker achieves Remote Code Execution (RCE) via a vulnerable dependency, they land in a shell with restricted permissions. They cannot install tooling with apk or apt-get, they cannot modify system files, and they certainly cannot mount host filesystems.
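A thirty-second sanity check (the image tag here is purely illustrative) is to build the hardened image and ask the container who it thinks it is:
# Build the hardened image (tag is an example)
docker build -t my-app:hardened .
# Should print uid=1001(nodejs), not uid=0(root)
docker run --rm my-app:hardened id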
2. Dropping Linux Capabilities
The Linux kernel divides root privileges into distinct units called capabilities. By default, Docker grants a container roughly 14 capabilities, including CHOWN, NET_RAW, and SETUID. Most web applications need exactly zero of these.
If your Nginx ingress gets popped, does it need the ability to manipulate network packets (NET_RAW)? No. That is how ARP spoofing attacks happen inside clusters.
Pro Tip: Start by dropping ALL capabilities and adding back only what is strictly necessary. This follows the Principle of Least Privilege.
Here is how you run a hardened container via CLI:
# The official nginx image's master process still needs SETUID, SETGID and
# CHOWN to switch to its unprivileged worker user and set up its temp dirs
docker run -d -p 80:80 \
  --cap-drop=ALL \
  --cap-add=NET_BIND_SERVICE \
  --cap-add=SETUID --cap-add=SETGID --cap-add=CHOWN \
  --read-only \
  --tmpfs /var/cache/nginx --tmpfs /var/run --tmpfs /tmp \
  nginx:1.19-alpine
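If you want proof that the capability set actually shrank, check the effective capability mask of PID 1 inside the running container. A quick spot check (the container ID is a placeholder; capsh comes from the libcap utilities on the host):
# CapEff is the effective capability bitmask of the container's PID 1
docker exec <container-id> grep CapEff /proc/1/status
# Decode the bitmask into human-readable capability names on the host
capsh --decode=<hex-value-from-previous-command>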
And for the Kubernetes users panicking about the dockershim deprecation announced in v1.20: don't worry, your Docker-built images still run fine on containerd. But you should start enforcing security contexts in your Pod specifications immediately.
apiVersion: v1
kind: Pod
metadata:
  name: secure-app
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1001
  containers:
    - name: main
      image: my-app:1.0.4
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop:
            - ALL
        readOnlyRootFilesystem: true
3. The Dirty Secret of Shared Kernels
Here is the reality check that hurts: containers share the host kernel. A kernel panic triggered from inside a container takes down the whole host, and a kernel vulnerability like Dirty COW (CVE-2016-5195; old, but the class of bug keeps reappearing) puts every container on that machine at risk.
This is where infrastructure choice becomes a security decision. Many "Cloud" providers pack thousands of containers onto a single bare-metal OS. This is the "Noisy Neighbor" problem on steroids—it's a "Dangerous Neighbor" problem.
Why Virtualization Still Matters
For mission-critical workloads, especially those handling Norwegian citizen data (Fødselsnummer, the national identity number), relying solely on namespaces and cgroups for isolation is risky. You want a hypervisor between you and the metal.
At CoolVDS, we don't oversell our nodes. We use KVM (Kernel-based Virtual Machine) to ensure that your slice of the server runs its own kernel. If a neighbor on the physical host crashes their TCP stack, your VDS keeps humming. This hardware-level isolation also makes life easier under stricter interpretations of GDPR and Schrems II, because tenant separation is enforced by the hypervisor rather than by kernel namespaces alone.
| Feature | Standard VPS/Container | CoolVDS (KVM) |
|---|---|---|
| Kernel Isolation | Shared (Risk of escape) | Dedicated (Hard boundary) |
| IO Performance | Often throttled/noisy | Dedicated NVMe Lanes |
| Compliance | Vague Data Residency | Oslo-based, strict Norwegian Law |
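One practical upside of this setup is that it is verifiable from inside the guest. On a genuine KVM instance, standard tools report the hypervisor directly; a quick check (systemd-detect-virt ships with systemd, lscpu with util-linux):
# Prints "kvm" on a KVM guest; container platforms report lxc, docker, etc.
systemd-detect-virt
# Should show "Hypervisor vendor: KVM"
lscpu | grep -i hypervisor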
4. Supply Chain: Scan It Before You Ship It
We used to trust docker pull implicitly. We can't do that anymore. Vulnerabilities in base images are rampant. Before any image hits your production environment, it needs to be scanned against the CVE database.
As of late 2020, Trivy from Aqua Security has emerged as one of the fastest and easiest scanners to integrate into CI/CD pipelines. It is simpler to set up than Clair and catches OS-level issues in seconds.
# Install Trivy (v0.15.0)
apt-get install wget apt-transport-https gnupg lsb-release
wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | apt-key add -
echo deb https://aquasecurity.github.io/trivy-repo/deb $(lsb_release -sc) main | tee -a /etc/apt/sources.list.d/trivy.list
apt-get update
apt-get install trivy
# Scan your image
trivy image --severity HIGH,CRITICAL coolvds/internal-api:latest
If you see a Critical CVE, the build fails. Simple. Do not deploy known vulnerabilities.
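To make "the build fails" literal in your pipeline, lean on Trivy's exit code. A minimal CI step (the image name is the same example as above; --exit-code and --severity are standard Trivy flags):
# Exits non-zero if any HIGH or CRITICAL CVE is found, aborting the CI job
trivy image --exit-code 1 --severity HIGH,CRITICAL \
  coolvds/internal-api:latest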
5. Network Policies: The Firewall Inside
By default, Kubernetes allows all pods to talk to all other pods. This is great for development and terrible for security. If an attacker breaches your frontend, they can scan your database directly.
You must implement NetworkPolicies, which restrict traffic flow at the IP and port level within the cluster. Note that your CNI plugin has to support policy enforcement (Calico and Cilium do; a bare flannel setup will silently ignore the policies).
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-all
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
Apply a "Deny All" policy first, then whitelist specific routes. It increases setup time, but it stops lateral movement dead in its tracks.
Final Thoughts: Latency and Sovereignty
Security is not just about hackers; it is about reliability and law. Hosting your containers on a generic US cloud provider might seem easy, but the latency to the Norwegian Internet Exchange (NIX) can kill your application's responsiveness. Furthermore, the legal landscape in Europe changed fundamentally this year.
You need low latency and high compliance. CoolVDS offers NVMe-backed storage that makes container I/O operations—like building images or database transactions—blazing fast, all while keeping your data physically located in Norway.
Don't wait for the next zero-day to harden your stack. Audit your Dockerfiles today.
Ready to run your secure workloads on iron you can trust? Spin up a high-performance KVM instance on CoolVDS in under 55 seconds.