You wouldn't leave your front door open in Oslo, so why run Docker as root?
It has been almost a year since Log4Shell set half the internet on fire, and yet in audits I conducted last month I still found production containers running with full root privileges. In fast-paced DevOps environments, "it works" often trumps "it's secure." But here in the Nordics, where Datatilsynet (the Norwegian Data Protection Authority) is rightfully aggressive about GDPR enforcement, a container breakout isn't just a technical failure; it's a massive legal liability.
If you are deploying microservices on a generic VPS provider without considering the layers beneath, you are building on sand. This guide cuts through the marketing fluff and focuses on the raw configuration changes you need to make today to secure your infrastructure.
1. The "Root" of All Evil
By default, a process inside a Docker container runs as root. If an attacker compromises that process (via an RCE like the aforementioned Log4j), they are root inside the container. If they manage a container breakout exploit (like CVE-2022-0492, which made headlines earlier this year), they could potentially gain root access to the host node.
The fix is boring but mandatory: Drop privileges.
# WRONG
FROM node:16
WORKDIR /app
COPY . .
CMD ["node", "index.js"]
# RIGHT
FROM node:16-alpine
WORKDIR /app
# Create an unprivileged group and user first
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
# Copy with correct ownership up front
# (a separate RUN chown -R would duplicate every file into a new layer)
COPY --chown=appuser:appgroup . .
# Switch user
USER appuser
CMD ["node", "index.js"]
Pro Tip: In Kubernetes 1.24+ (which you should be upgrading to if you haven't yet), PodSecurityPolicy is deprecated, and it is removed entirely in 1.25. Start using Pod Security Standards enforced via the built-in admission controller, and set runAsNonRoot: true in your securityContext immediately.
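The same guardrail expressed as a pod spec looks like this. A minimal sketch — the pod name, image, and UID 10001 are placeholders, not conventions you must follow:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app              # placeholder name
spec:
  securityContext:
    runAsNonRoot: true      # kubelet refuses to start the container as root
    runAsUser: 10001        # example non-root UID
  containers:
  - name: app
    image: eu.gcr.io/my-org/my-app:v1.0.2
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]       # drop every capability; add back only what you need
```

With runAsNonRoot set, the kubelet rejects the pod at admission time if the image tries to start as UID 0, instead of failing silently at runtime.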
2. Shift Left: Scanning Before You Push
I've seen pipelines where CI/CD builds an image, pushes it to the registry, and deploys it to production in 4 minutes. Fast? Yes. Suicidal? Also yes. You cannot rely on upstream images being clean.
In 2022, Trivy by Aqua Security has become the de facto standard for this because it is fast and easy to wire into a pipeline. Unlike older, heavier scanners, Trivy ships as a single static binary.
Here is how we integrate it into a standard bash pipeline script:
#!/bin/bash
set -euo pipefail
IMAGE_NAME="eu.gcr.io/my-org/my-app:v1.0.2"
# Build (set -e aborts the script if the build itself fails)
docker build -t "$IMAGE_NAME" .
# Scan - fail the build if CRITICAL vulns are found
# (the old --light flag is deprecated in recent Trivy releases)
if trivy image --exit-code 1 --severity CRITICAL "$IMAGE_NAME"; then
    docker push "$IMAGE_NAME"
else
    echo "CRITICAL VULNERABILITY DETECTED. BUILD ABORTED."
    exit 1
fi
Running this requires low-latency disk I/O because Trivy needs to download vulnerability DBs and unzip image layers rapidly. On CoolVDS, we utilize strictly local NVMe storage rather than network-attached storage (Ceph/GlusterFS) for our standard instances. This difference cuts scan times from ~45 seconds to ~8 seconds in our benchmarks.
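If you want a first-order number for your own instance before blaming Trivy, a crude dd write test will expose slow network-attached storage. This is a rough sketch, not a proper benchmark (use fio for that); the 256 MiB size is an arbitrary example:

```shell
#!/bin/sh
# Write 256 MiB of zeros and force them to disk before dd reports its timing.
# conv=fdatasync makes dd fsync at the end, so the throughput figure
# reflects the actual storage, not the page cache.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=256 conv=fdatasync
rm -f /tmp/ddtest
```

On local NVMe you should see throughput in the GB/s range; if you see double-digit MB/s, your "SSD" VPS is likely sitting on congested network storage.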
3. Network Segmentation: The Zero Trust Model
By default, all pods in a Kubernetes cluster can talk to each other. Your frontend web server can ping your Redis cache, which is fine, but it can also probe your payment processor service. That is not fine.
If you aren't using NetworkPolicies, you rely on the attacker not knowing the internal IP addressing. Security through obscurity is dead. Lock it down using a CNI that supports policies (like Calico or Cilium).
Example: Deny All Ingress (Default Deny)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress
Once this is applied, nothing talks to anything unless you explicitly allow it.
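From there, you whitelist flows one by one. A sketch that lets only pods labeled app: frontend reach Redis on port 6379 — the label values here are examples, so match them to your own deployments:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-redis
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: redis            # the policy applies to the Redis pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend     # only the frontend may connect
    ports:
    - protocol: TCP
      port: 6379            # and only on the Redis port
```

Every other pod in the namespace still hits the default-deny wall.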
4. The Host Layer: Isolation Matters
This is where the "Pragmatic CTO" mindset kicks in. You can have the most secure container config in the world, but if your neighbor on the physical server is noisy or malicious, you have a problem. Containers share the host kernel. If the kernel crashes, everyone goes down.
In the context of Schrems II and GDPR, knowing exactly where your data lives and who controls the hardware is vital. Many budget providers oversell resources using OpenVZ or LXC, meaning "your" RAM is actually burstable shared RAM. This introduces side-channel attack vectors.
| Feature | Shared Container (LXC/OpenVZ) | CoolVDS (KVM) |
|---|---|---|
| Kernel Isolation | Shared (Weak) | Dedicated (Strong) |
| Resource Guarantee | Burstable/Oversold | Reserved RAM/CPU |
| Custom Kernel Modules | No | Yes (WireGuard, etc.) |
At CoolVDS, we use KVM (Kernel-based Virtual Machine) exclusively. Even if you are running Docker inside our VPS, that VPS is hardware-isolated. This extra layer is critical for compliance with Norwegian financial and health data regulations.
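You can verify the isolation layer from inside the guest. On any systemd-based distro, systemd-detect-virt reports the hypervisor type; this sketch falls back to "unknown" where the tool is absent:

```shell
#!/bin/sh
# Report the virtualization technology visible from inside this machine.
# A KVM guest prints "kvm"; OpenVZ/LXC containers print their container tech,
# which tells you that you share a kernel with your neighbors.
virt=$(systemd-detect-virt 2>/dev/null || true)
echo "Virtualization: ${virt:-unknown}"
```

If that prints a container technology rather than a hypervisor, the isolation table above applies to you in the worst way.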
5. Runtime Security with Falco
Static analysis (Trivy) catches bad code. Runtime security catches bad behavior. Falco is the security camera for your cluster. It listens to the Linux kernel syscalls.
If a container suddenly spawns a shell or tries to read /etc/shadow, Falco screams. Installing it on a Debian/Ubuntu-based system is straightforward in 2022:
curl -s https://falco.org/repo/falcosecurity-3672BA8F.asc | apt-key add -
echo "deb https://download.falco.org/packages/deb stable main" | tee /etc/apt/sources.list.d/falcosecurity.list
apt-get update -y
apt-get install -y linux-headers-$(uname -r) falco
Once running, a rule like this detects an interactive shell opening in a container:
- rule: Terminal shell in container
  desc: A shell was used as the entrypoint for a container.
  condition: >
    spawned_process and container
    and shell_procs and proc.tty != 0
    and container_entrypoint
  output: "Shell spawned in a container (user=%user.name container_id=%container.id image=%container.image.repository)"
  priority: WARNING
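The /etc/shadow case mentioned above can be expressed the same way. A sketch of a custom rule — open_read and container are macros shipped in Falco's default ruleset, so verify the names against the rules file your installed version provides:

```yaml
- rule: Read sensitive file in container
  desc: A process inside a container opened /etc/shadow for reading.
  condition: >
    open_read and container
    and fd.name = /etc/shadow
  output: "Sensitive file read in container (user=%user.name file=%fd.name container_id=%container.id)"
  priority: WARNING
```

Drop custom rules into /etc/falco/falco_rules.local.yaml and restart the service; they load alongside the defaults.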
Conclusion: Architecture is Security
Security isn't a patch you apply on Friday afternoon. It's architectural. It requires immutable infrastructure, strict network policies, and a hosting partner that understands the difference between "cheap" and "value."
When you host with CoolVDS, you aren't just getting an IP address. You get a KVM-isolated environment sitting on enterprise NVMe, hosted right here in compliant data centers. We handle the hardware hardening so you can focus on your Dockerfile.
Ready to harden your stack? Deploy a secure KVM instance in Oslo today—latency under 10ms for local traffic.