Hardening Containers: Stop Trusting Default Configurations
Let’s be honest. Most developers treat Docker containers like lightweight virtual machines. They aren't. They are processes on a host kernel, separated only by namespaces and cgroups. If you are running containers with default settings in production today, you aren't just taking a risk; you are practically inviting a privilege escalation attack.
I recently audited a Kubernetes cluster for a fintech startup in Oslo. They were proud of their microservices architecture but terrified of Schrems II implications. When I looked at their deployment manifests, I saw `runAsUser: 0` everywhere. Root. If an attacker compromises that Node.js application, they are root inside the container. If they then find a kernel vulnerability (like the Dirty Pipe exploit we saw earlier this year), they are root on the host node. Game over.
Security isn't a product you buy; it's a configuration you enforce. Here is how to lock down your container infrastructure in late 2022 without destroying developer velocity.
1. The "Root" of All Evil
By default, a container process runs as root. This is convenient for apt-get install, but catastrophic for runtime security. The principle of least privilege dictates that your application should never have more permissions than it strictly needs.
You must create a specific user in your Dockerfile. Don't rely on the orchestrator to handle this alone; bake it into the image.
```dockerfile
# The Wrong Way: the process runs as root
FROM node:16-alpine
WORKDIR /app
COPY . .
CMD ["node", "index.js"]
```

```dockerfile
# The Right Way: create and switch to an unprivileged user
FROM node:16-alpine
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
WORKDIR /app
COPY . .
# Change ownership before switching user
RUN chown -R appuser:appgroup /app
USER appuser
CMD ["node", "index.js"]
```
When you deploy this on Kubernetes, enforce it via the SecurityContext. If a developer forgets the USER directive, the pod should refuse to start.
Pro Tip: Pod Security Policies (PSP) were deprecated in Kubernetes 1.21 and removed entirely in 1.25. If you are still on a PSP-based cluster, migrate to Pod Security Standards (PSS) enforced by the built-in Pod Security admission controller before you upgrade.
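As a minimal sketch of PSS enforcement, you label the namespace and the admission controller does the rest (the namespace name below is illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: payments  # illustrative namespace name
  labels:
    # Reject any pod that violates the "restricted" profile
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: v1.25
    # Additionally log violations without blocking, useful during migration
    pod-security.kubernetes.io/audit: restricted
```

With `enforce: restricted` in place, a pod missing `runAsNonRoot` is rejected at admission time, exactly the "refuse to start" behavior described above.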
2. Immutable Infrastructure: Read-Only Filesystems
If an attacker manages to inject a shell script into your container, their next step is usually to download a payload or modify a configuration file. Make that impossible. Mount the container's root filesystem as read-only.
Most applications only need to write to specific directories (like /tmp or /var/log). Mount those as emptyDir volumes.
Docker CLI Example
```shell
docker run --read-only \
  --tmpfs /run \
  --tmpfs /tmp \
  -v my-vol:/var/lib/myapp:Z \
  my-secure-image
```
Kubernetes Configuration
This is how it looks in a production manifest. Note the drop capabilities section as well—we'll get to that.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secure-app
spec:
  containers:
  - name: node-app
    image: my-repo/node-app:v1.2
    securityContext:
      readOnlyRootFilesystem: true
      runAsNonRoot: true
      runAsUser: 1001
      capabilities:
        drop:
        - ALL
        add:
        - NET_BIND_SERVICE
    volumeMounts:
    - mountPath: /tmp
      name: tmp-volume
  volumes:
  - name: tmp-volume
    emptyDir: {}
```
3. Kernel Isolation and the "Noisy Neighbor" Risk
This is where the infrastructure provider matters. If you are using shared container hosting (CaaS) or cheap VPS providers that rely on container-based virtualization (like OpenVZ or LXC), you are sharing a kernel with other customers. A kernel panic triggered by a neighbor brings your site down. A kernel exploit breaks isolation.
At CoolVDS, we don't play that game. We use KVM (Kernel-based Virtual Machine) hardware virtualization. Every VPS Norway instance you spin up has its own isolated kernel. Even if you run containers inside your CoolVDS instance, the attack surface is limited to your VM, not the physical host managed by us.
For Norwegian businesses dealing with Datatilsynet, this distinction is vital. Proving true data segregation is easier when you have a dedicated OS kernel.
4. Dropping Linux Capabilities
The root user in Linux is actually a collection of capabilities (like CAP_CHOWN, CAP_NET_ADMIN). Docker grants a restricted subset by default, but it's still too much for a web server.
Does your Nginx container need to change system time? No. Does it need to load kernel modules? Absolutely not. Drop everything, then add back only what is necessary.
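The same drop-then-add pattern applies outside Kubernetes. A hedged sketch for a Docker Compose deployment (the service and image names are illustrative assumptions):

```yaml
services:
  web:
    image: nginx:1.23-alpine   # illustrative image
    cap_drop:
      - ALL                    # start from zero capabilities
    cap_add:
      - NET_BIND_SERVICE       # only allow binding to privileged ports
    ports:
      - "80:80"
```

If the container crashes after dropping `ALL`, add capabilities back one at a time rather than reverting to the default set.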
To audit which capabilities your process actually holds, read the effective capability mask from /proc and decode it with capsh (part of libcap) inside the container during testing:

```shell
# Print the effective capability bitmask (hex)
grep CapEff /proc/self/status
# Decode the mask into human-readable capability names
capsh --decode=$(awk '/CapEff/ {print $2}' /proc/self/status)
```
5. Supply Chain Security: Trust Nothing
Pulling `FROM node:latest` is reckless. You have no idea what changed between yesterday and today. Pin your images by digest, not by mutable tag.
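A minimal sketch of digest pinning — the digest is a placeholder, not a real value; resolve the actual one for your base image first:

```shell
# Resolve the digest for the image you just pulled
docker pull node:16-alpine
docker inspect --format '{{index .RepoDigests 0}}' node:16-alpine
# Then pin it in the Dockerfile, e.g.:
#   FROM node:16-alpine@sha256:<digest-from-inspect>
```

A digest reference is content-addressed, so the build fails loudly if the upstream image changes, instead of silently pulling different bits.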
Furthermore, you must scan images for vulnerabilities before they hit your cluster. In our CI/CD pipelines, we use Trivy. It’s fast, open-source, and catches OS-level CVEs and language-specific dependency issues.
```shell
# Scanning an image with Trivy (v0.34.0)
trivy image --severity HIGH,CRITICAL coolvds/internal-api:v2.4.1
```
If you find a critical vulnerability in `openssl` (like the one from November 2022), you patch it immediately. Do not deploy until the scan passes.
6. Network Policies: The Internal Firewall
By default, all pods in a Kubernetes cluster can talk to each other. Your frontend can talk to your database, but so can your logging agent and your metrics collector. If the metrics collector is compromised, can it dump your database?
Use NetworkPolicies to whitelist traffic. Deny all ingress by default.
```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-deny-all
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
```
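Once the default-deny policy is in place, each legitimate flow must be allowed explicitly. A sketch (the `app: frontend` and `app: postgres` labels and the port are illustrative assumptions) permitting only the frontend to reach the database:

```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-frontend-to-db
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: postgres          # illustrative label on the database pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend      # illustrative label on the frontend pods
    ports:
    - protocol: TCP
      port: 5432             # PostgreSQL default port
```

With this in place, the compromised metrics collector from the scenario above gets a connection timeout instead of a database dump.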
Why Infrastructure Choice Underpins Security
You can have the most secure Dockerfile in the world, but if your network latency causes timeouts during SSL handshakes, or your storage I/O bottlenecks during a DDoS attack, your security posture crumbles under load.
We built CoolVDS on pure NVMe storage because security scanners, log aggregators, and databases generate massive random I/O. Slow disks kill monitoring tools. Our datacenters in Oslo provide single-digit millisecond latency to NIX, ensuring that your security headers and WAF rules are processed instantly.
Performance & Compliance Matrix
| Feature | Generic Cloud VPS | CoolVDS NVMe Instance |
|---|---|---|
| Virtualization | Often OpenVZ/Container (Shared Kernel) | KVM (Dedicated Kernel) |
| Storage | SATA SSD or Shared SAN | Local NVMe RAID 10 |
| Data Residency | Unclear (Often routed via Frankfurt) | Strictly Norway (GDPR Compliant) |
Security requires depth. It starts with your code, extends to your container configuration, and rests firmly on the isolation provided by your infrastructure.
Don't let a misconfiguration become a headline. Harden your manifests, scan your images, and run them on hardware that respects your need for isolation.
Ready to test your hardened stack? Deploy a KVM-isolated instance on CoolVDS today. Experience the raw power of NVMe with the peace of mind of Norwegian data sovereignty.