Hardening Containers in 2022: A Battle-Hardened Guide to Kubernetes & Docker Security
If you are running privileged: true in your production manifests, you aren't deploying applications; you are handing out root access to your host infrastructure with extra steps. We have all been there, scrambling at 3:00 AM because a default configuration left a Redis port exposed or an unpatched Alpine image let a crypto-miner eat the CPU cycles we promised our clients. In the wake of the Log4j vulnerability that shook the industry late last year, the "move fast and break things" mantra has officially been retired in favor of "move fast and verify everything," especially here in Europe, where the regulatory gaze of the GDPR and agencies like Datatilsynet in Norway is sharper than ever. Container isolation is not magic; it is merely a set of Linux namespaces and cgroups, and if you treat them like impenetrable force fields without additional hardening, you are building a castle on sand. Real security requires a defense-in-depth strategy that starts at the base image, tightens the runtime constraints, and, crucially, relies on the underlying virtualization technology to provide a final backstop against kernel-level exploits.
1. The Supply Chain: Trust Nothing, Verify Everything
The security of your cluster is decided before a single pod is scheduled. Most developers pull node:latest or python:3.9 and call it a day, oblivious to the hundreds of system vulnerabilities lurking in those bloated base images. In 2022, sticking to full-fat OS images for production workloads is negligence. You should be using minimal images like Alpine Linux or, better yet, Google's Distroless images, which strip away everything not strictly required for the application to run—including shells and package managers—making it exponentially harder for an attacker to move laterally if they do gain entry. Furthermore, with the rise of software supply chain attacks, image scanning in your CI/CD pipeline is no longer optional; tools like Trivy or Clair need to gate every deployment, failing the build if high-severity CVEs are detected. Below is an example of a multi-stage build that prioritizes a minimal footprint, a pattern that should be the standard for any serious engineering team today.
Optimized Multi-Stage Dockerfile
# Stage 1: The Builder
FROM golang:1.19-alpine AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
# Build a static binary with no external dependencies
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o secure-app .
# Stage 2: The Runtime
# Using distroless specifically to remove shell access
FROM gcr.io/distroless/static:nonroot
WORKDIR /
COPY --from=builder /app/secure-app .
USER 65532:65532
ENTRYPOINT ["/secure-app"]
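One caveat on the builder stage above: COPY . . sends the entire build context to the daemon, so anything sitting in the repository (credentials, .git history, stray key files) can end up baked into a layer. A .dockerignore file keeps that noise out; the entries below are a sketch to adapt to your own repository layout.

```text
# .dockerignore — keep secrets and noise out of the build context
.git
.env
*.pem
*.key
Dockerfile
README.md
```

This also shrinks the context upload, which speeds up CI builds as a side effect.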
2. Runtime Hardening: Restricting the Blast Radius
Once the container is running, your goal is to limit what it can do when—not if—it gets compromised. The Linux capabilities system allows granular permission grants, yet Docker's defaults hand out far more than most workloads need. Does your Nginx ingress controller really need CAP_NET_ADMIN? Absolutely not. Drop all capabilities by default and add back only the specific ones required for operation, effectively neutering a potential root-escalation exploit. Additionally, enforcing a read-only root filesystem prevents attackers from downloading malicious scripts or modifying binaries at runtime, a simple measure that blunts a whole class of real-world attacks. When deploying to Kubernetes, Pod Security Admission (which graduated to Beta in v1.23 and is stable as of v1.25) is the successor to Pod Security Policies, and you need to know how to configure your securityContext correctly. Here is how a locked-down deployment manifest should look in a modern Kubernetes environment.
Kubernetes Security Context Configuration
apiVersion: apps/v1
kind: Deployment
metadata:
  name: secure-microservice
  labels:
    app: secure-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: secure-app
  template:
    metadata:
      labels:
        app: secure-app
    spec:
      securityContext:
        runAsUser: 1000
        runAsGroup: 3000
        fsGroup: 2000
        runAsNonRoot: true
      containers:
      - name: main-app
        image: my-registry/secure-app:v1.2.0
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
        ports:
        - containerPort: 8080
        volumeMounts:
        - mountPath: /tmp
          name: tmp-volume
      volumes:
      - name: tmp-volume
        emptyDir: {}
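A manifest like the one above can also be backed up cluster-side. Assuming a v1.23+ cluster with Pod Security Admission enabled, labelling a namespace makes the API server itself reject pods that violate the restricted profile, so a forgotten securityContext can't slip through review; the namespace name production matches the other examples in this post.

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    # Reject pods that violate the "restricted" profile outright
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: v1.25
    # Also warn (without blocking) so future violations surface early
    pod-security.kubernetes.io/warn: restricted
```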
Pro Tip: Never rely on the default service account for your pods. If an attacker compromises a pod, the mounted service account token is often their key to the Kubernetes API server. Always disable automounting of the service account token (`automountServiceAccountToken: false`) unless the application specifically needs to talk to the K8s API.
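That advice is a one-line change, settable either on the ServiceAccount or per pod; here is a minimal sketch (the account name no-api-access is our own placeholder):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: no-api-access
  namespace: production
# No pod using this account gets an API token mounted unless it opts in
automountServiceAccountToken: false
```

The same field also exists on the pod spec (spec.automountServiceAccountToken: false) if you only want to override it for individual workloads.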
3. The Host Layer: Why Your Virtualization Matters
We often forget that containers are just processes sharing a kernel. If that kernel panics or is exploited via a vulnerability like Dirty Pipe (CVE-2022-0847), every container on that host is at risk. This is where the choice of infrastructure provider becomes a critical security decision rather than just a budget line item. Many budget providers pack thousands of users onto OpenVZ or LXC containers where kernel isolation is weak or non-existent; in those environments, a "noisy neighbor" isn't just an annoyance, it's a security vector. For mission-critical workloads, you should demand KVM (Kernel-based Virtual Machine) virtualization. KVM provides hardware-assisted virtualization, ensuring that your OS runs its own kernel, isolated from the host node and from other tenants. At CoolVDS, we use KVM exclusively for our VPS Norway instances because true multi-tenant security requires hardware-enforced isolation, not just software namespaces. When you combine KVM isolation with high-speed NVMe storage, you aren't just getting low latency; you are getting a predictable, secure environment where your resources are contractually yours.
4. Local Compliance: The Norwegian Advantage
Since the Schrems II ruling invalidated the Privacy Shield, transferring personal data outside the EEA has become a legal minefield for DevOps teams and CTOs alike. Hosting your container infrastructure on US-controlled clouds introduces complexity regarding data access requests under the CLOUD Act. By centralizing your infrastructure in Norway, you benefit from some of the strictest data privacy laws in the world and a power grid running on nearly 100% renewable hydroelectric energy—a factor becoming increasingly relevant for ESG reporting in 2022. Latency also plays a huge role; if your primary market is Scandinavia, serving traffic from Oslo rather than Frankfurt or Amsterdam can shave 15-20ms off your round-trip time. In high-frequency trading or real-time gaming applications, that reduction is the difference between a seamless experience and user churn. We built our data centers in Oslo specifically to address this intersection of performance and compliance, offering a safe harbor for data that needs to stay within Norwegian jurisdiction.
5. Implementing Network Policies
By default, Kubernetes allows all pods to talk to all other pods. This flat network topology is a hacker's dream, allowing unrestricted lateral movement once a breach occurs. You must implement NetworkPolicies to whitelist traffic, ensuring that your frontend can talk to your backend, but your backend cannot initiate connections to the internet or your internal admin tools. Think of it as a firewall that moves with the application. Note that NetworkPolicy objects are only enforced if your CNI plugin supports them (Calico and Cilium do; plain Flannel does not), so verify your network layer before relying on these rules. Below is an example of a strict deny-all policy that should be the starting point for every namespace you create.
Default Deny Network Policy
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
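With the deny-all baseline in place, traffic gets re-enabled selectively. The sketch below admits only frontend-to-backend traffic; the app: frontend / app: backend labels and port 8080 are assumptions to adjust to your own manifests.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: production
spec:
  # Applies to backend pods; everything else stays denied
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
```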
6. Scanning for Vulnerabilities
Before you even deploy, run a scan. Here is the command you should be running in your CI pipeline right now using Trivy:
# Install Trivy (v0.32.0)
apt-get install -y wget apt-transport-https gnupg lsb-release
wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | apt-key add -
echo "deb https://aquasecurity.github.io/trivy-repo/deb $(lsb_release -sc) main" | tee -a /etc/apt/sources.list.d/trivy.list
apt-get update
apt-get install -y trivy
# Scan your image, failing on critical issues
trivy image --exit-code 1 --severity CRITICAL my-app:latest
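Scanners catch known CVEs, but a cheap grep in the same pipeline stage catches the configuration sin from the top of this article. The helper below is a sketch (the function name check_privileged is our own invention): it fails the build if any manifest in a directory requests privileged mode.

```shell
#!/bin/sh
# Cheap pre-deploy guard: refuse to ship manifests that request
# privileged mode. Complements Trivy; it does not replace it.
check_privileged() {
  dir="${1:-.}"
  # grep exits 0 on a match, which is the failure case here
  if grep -rn --include='*.yaml' --include='*.yml' \
      'privileged:[[:space:]]*true' "$dir"; then
    echo "FAIL: privileged container(s) found in $dir" >&2
    return 1
  fi
  echo "OK: no privileged containers in $dir"
}
```

Wire it in right before the trivy image call, e.g. check_privileged ./deploy || exit 1.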
Security is not a product you buy; it is a process you adhere to. It requires vigilance, clean code, and infrastructure that respects the boundaries of your data. While we can't write your code for you, we can provide the hardened KVM foundation that ensures your hard work doesn't collapse due to a weak hypervisor.
Don't let insecure infrastructure compromise your deployment. Spin up a secure, KVM-backed instance in Oslo with CoolVDS today and build on solid ground.