Stop Trusting Defaults: A Survival Guide for Container Security in 2022
I still see it in production logs every week. USER root. It sits there in Dockerfiles like unexploded ordnance, waiting for a zero-day in a shared library to trigger it. If you are deploying containers in Norway, you aren't just fighting hackers; you are fighting the strict regulatory hammer of Datatilsynet. After the Schrems II ruling, relying on US-based cloud defaults isn't just lazy; it's a liability.
Let's cut the marketing noise. Containers are processes. They share a kernel. If that kernel is compromised, your isolation is gone. I've spent the last decade debugging distributed systems, and I can tell you: the illusion of security is more dangerous than no security at all. Here is how we lock down infrastructure effectively, using tools available right now in 2022.
1. The Base Image is Your First Vulnerability
Stop using FROM ubuntu:latest. It is bloated, it has a massive attack surface, and it carries binaries you don't need. Every shell utility included in your image is a tool an attacker can use for lateral movement if they breach your application.
The Fix: Use Distroless or Alpine, but verify the checksums. Distroless images contain only your application and its runtime dependencies. No shell. No package manager. If an attacker gets in, they can't even run ls.
Code Example: Multi-stage Build for Minimalism
# Build Stage
FROM golang:1.17-alpine AS builder
WORKDIR /app
COPY . .
# Disable CGO so the binary is fully static (distroless/static ships no libc)
RUN CGO_ENABLED=0 go build -o main .
# Production Stage
FROM gcr.io/distroless/static:nonroot
COPY --from=builder /app/main /
USER 65532:65532
ENTRYPOINT ["/main"]
By switching to a non-root user (UID 65532 here), we mitigate a huge class of privilege escalation attacks.
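To take "verify the checksums" all the way, pin the production base image by its digest instead of a tag, so a re-pushed or compromised tag can never slip into your build. A sketch of the production stage follows; the sha256 value is a placeholder, substitute the digest that `docker pull gcr.io/distroless/static:nonroot` reports on your machine.

```dockerfile
# Production Stage, pinned by digest rather than mutable tag.
# The digest below is a placeholder -- replace it with the value
# printed by `docker pull gcr.io/distroless/static:nonroot`.
FROM gcr.io/distroless/static@sha256:<paste-verified-digest-here>
COPY --from=builder /app/main /
USER 65532:65532
ENTRYPOINT ["/main"]
```

A tag can be overwritten on the registry at any time; a digest cannot, which is exactly the property you want for reproducible, auditable builds.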
2. Runtime Security: Drop Those Capabilities
By default, Docker grants every container a set of Linux capabilities, including CHOWN, NET_RAW, and SETUID. Most web applications need exactly zero of these. If your Node.js API is trying to modify network interfaces, you have been pwned.
The Fix: Drop all capabilities and add back only what is strictly necessary. We also enforce a read-only root filesystem to prevent attackers from writing malicious scripts to disk.
In a standard docker run command, it looks like this:
docker run --cap-drop=ALL --cap-add=NET_BIND_SERVICE --read-only --tmpfs /tmp my-secure-app
But since we are orchestrating, here is how you define this in a Kubernetes v1.23 Pod manifest. This is the difference between a minor incident and a full breach.
Code Example: Kubernetes Security Context
apiVersion: v1
kind: Pod
metadata:
  name: secured-nginx
spec:
  containers:
    - name: nginx
      image: nginx:1.21-alpine
      securityContext:
        allowPrivilegeEscalation: false
        runAsUser: 1000
        runAsGroup: 3000
        readOnlyRootFilesystem: true
        capabilities:
          drop:
            - ALL
          add:
            - NET_BIND_SERVICE
      volumeMounts:
        - mountPath: /var/cache/nginx
          name: cache-volume
        - mountPath: /var/run
          name: run-volume
  volumes:
    - name: cache-volume
      emptyDir: {}
    - name: run-volume
      emptyDir: {}
Pro Tip: When using read-only filesystems, Nginx and other daemons will crash because they can't write PID files or logs. You must mount emptyDir volumes at /var/run and /var/cache/nginx, as shown above. It's annoying, but it's secure.
3. Network Segmentation (The "Zero Trust" Reality)
If you have a frontend container that can talk to your database, and a Redis cache that can talk to the database, and a logging sidecar that can talk to... everything? You have a flat network. If one container falls, they all fall.
In Norway, data minimization is a legal requirement. Network policies restrict traffic flow at the IP level within the cluster. Deny everything by default.
Code Example: Default Deny Policy
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: default
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
This policy kills all traffic to and from every pod in the namespace. From there, you explicitly allow only the flows you need: for example, letting the frontend reach the backend on port 8080.
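That frontend-to-backend rule could look like the sketch below. The pod labels (app: frontend, app: backend) are assumptions for illustration; match them to whatever labels your Deployments actually carry.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: default
spec:
  # Applies to the backend pods; labels here are illustrative
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```

With the default-deny policy in place, this is the only ingress the backend accepts; even a compromised logging sidecar in the same namespace gets a connection refused.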
4. The Infrastructure Layer: Where CoolVDS Matters
This is the part many developers overlook. You can harden your Dockerfile all day, but if the host kernel is shared in a weak virtualization environment (like standard OpenVZ or LXC), a kernel panic in a neighbor's container can take you down with it. Or worse, a Dirty COW-style kernel exploit could theoretically breach the host.
At CoolVDS, we don't play games with "container-native" hosting that is just a shared kernel with fancy marketing. We provide KVM (Kernel-based Virtual Machine) instances. Each VPS has its own dedicated kernel. This provides a hard hardware-level isolation boundary.
Why does this matter for a Norwegian CTO?
- GDPR Compliance: With KVM hardware virtualization, data leakage between tenants would require a full hypervisor escape, a far higher bar than a container breakout.
- Noisy Neighbors: Your IOPS are yours. With our NVMe storage, you get consistent throughput, essential for databases that can't tolerate latency spikes.
- Local Peering: Our infrastructure peers directly at NIX (Norwegian Internet Exchange). Your traffic stays local, lowering latency to Oslo to sub-millisecond levels.
5. Supply Chain Security
In 2022, scanning your images is mandatory. Tools like Trivy or Clair should be blocking your CI/CD pipeline if they find high-severity CVEs.
Here is a snippet for a GitLab CI pipeline (which many of our customers use). It assumes an earlier build stage has already built and pushed $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA; Trivy then pulls the image straight from the registry, so no Docker daemon is needed in this job. The build fails if a Critical vulnerability is found.
Code Example: Trivy Scan in CI
container_scanning:
  stage: test
  image:
    name: aquasec/trivy:0.24.0
    entrypoint: [""]
  variables:
    # Authenticate against the GitLab registry with the job token
    TRIVY_USERNAME: gitlab-ci-token
    TRIVY_PASSWORD: $CI_JOB_TOKEN
  script:
    # Fail the job on Critical vulnerabilities
    - trivy image --exit-code 1 --severity CRITICAL --no-progress $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
    # Report High vulnerabilities without failing the build
    - trivy image --exit-code 0 --severity HIGH --no-progress $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
Conclusion: Performance vs. Paranoia
Security is always a trade-off. Running a container as a non-root user on a read-only filesystem is harder. It breaks things. It requires debugging.
But consider the alternative. A breach in 2022 involves ransomware, data exfiltration, and a very public apology. With the current geopolitical instability in Europe, the attacks are getting automated and aggressive.
Start with a solid foundation. Use hardware-isolated KVM instances from a provider that understands the local landscape.
Ready to lock it down? Deploy a hardened KVM instance on CoolVDS today. NVMe-powered, Norway-located, and ready for production.