You are running a root exploit waiting to happen.
It is April 2022. If you are still deploying containers with default configurations, you haven't learned anything from the frantic weekends of December 2021. Log4Shell (CVE-2021-44228) wasn't just a Java vulnerability; it was a wake-up call that exposed how fragile our "isolated" environments actually are. I spent that Christmas patching systems for a major e-commerce client in Oslo, and the number of containers running as root with full CAP_SYS_ADMIN privileges was terrifying.
Containers are not Virtual Machines. They are processes masquerading as isolated units. Without strict boundaries, a container breakout is not a question of if, but when. Just last month, we saw Dirty Pipe (CVE-2022-0847) allowing unprivileged users to overwrite data in read-only files. If you were sharing a kernel on a budget hosting provider, your data was compromised. This is why at CoolVDS we don't play games with shared kernels: every instance is a dedicated KVM slice.
Here is how we lock down container infrastructure for high-compliance Norwegian enterprises, moving beyond basic best practices to actual survival strategies.
1. The "Non-Root" Negotiable
By default, Docker containers run as root. If an attacker exploits a vulnerability in your application (like Log4j), they gain root access inside the container. If they then escape the container (via a kernel exploit like Dirty Pipe), they are root on the host. Game over.
You must enforce a non-root user. Do not just use the USER instruction; explicitly create a user with a known UID/GID.
The Secure Dockerfile Standard
Here is the pattern we enforce for all Node.js and Python services deployed on our infrastructure:
# STAGE 1: Builder
FROM node:16-alpine AS builder
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm ci --only=production
# STAGE 2: Runner
FROM node:16-alpine
WORKDIR /usr/src/app
# Create a dedicated user and group
# We use a specific ID (1001) to align with PSPs
RUN addgroup -g 1001 -S nodejs && \
adduser -S nodejs -u 1001
COPY --from=builder /usr/src/app/node_modules ./node_modules
COPY . .
# STRICT PERMISSIONS
RUN chown -R nodejs:nodejs /usr/src/app
USER 1001
EXPOSE 3000
CMD ["node", "index.js"]This prevents the process from modifying its own filesystem binaries or installing packages at runtime.
2. Immutable Filesystems: Make it Read-Only
Persistence is the attacker's goal. If they can download a crypto miner or a reverse shell script, they win. If your container's filesystem is read-only, wget still executes, but writing the payload to disk fails.
In Docker, you can enforce this with a simple flag:
docker run --read-only -v /tmp_volume:/tmp my-secure-app
However, in a Kubernetes environment (which many of you are running on our NVMe VPS instances), you need to define this in the SecurityContext. We see too many developers ignore this because "the app crashes." Fix the app, not the security.
apiVersion: v1
kind: Pod
metadata:
  name: secure-frontend
spec:
  containers:
  - name: nginx
    image: nginx:1.21-alpine
    securityContext:
      readOnlyRootFilesystem: true
      runAsNonRoot: true
      runAsUser: 101
      allowPrivilegeEscalation: false
    volumeMounts:
    - mountPath: /var/cache/nginx
      name: cache-volume
    - mountPath: /var/run
      name: run-volume
  volumes:
  - name: cache-volume
    emptyDir: {}
  - name: run-volume
    emptyDir: {}

Pro Tip: Nginx requires write access to /var/cache/nginx and /var/run. Mount emptyDir volumes at these paths to allow the app to function while keeping the root filesystem locked (immutable).
3. Capabilities: Drop 'Em All
Linux capabilities break down root privileges into small units. A web server does not need NET_ADMIN (altering network interfaces) or SYS_MODULE (loading kernel modules). Yet, Docker gives you a wide array by default.
We recommend a "whitelist" approach: drop ALL, then add back only what is strictly necessary. Usually, NET_BIND_SERVICE is the only one you need, and only if you bind directly to port 80/443 (modern kernels can remove even that requirement by lowering the net.ipv4.ip_unprivileged_port_start sysctl to 0).
docker run --cap-drop=ALL --cap-add=NET_BIND_SERVICE nginx
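The same whitelist can be expressed in a Kubernetes securityContext. A minimal sketch, reusing the my-secure-app placeholder image from the docker run example in section 2:

apiVersion: v1
kind: Pod
metadata:
  name: least-privilege-app
spec:
  containers:
  - name: app
    image: my-secure-app      # placeholder image, as in the docker run example above
    securityContext:
      capabilities:
        drop:
        - ALL                 # start from zero privileges
        add:
        - NET_BIND_SERVICE    # only if the process binds to a port below 1024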
4. The Isolation Gap: Why "Shared Hosting" Kills Security
This is where the architecture matters. You can harden your container all day, but if the host kernel is compromised, your isolation is gone. In shared hosting environments (OpenVZ or LXC), you are sharing the kernel with neighbors who might be running vulnerable WordPress plugins from 2014.
CoolVDS uses KVM (Kernel-based Virtual Machine).
When you spin up a VPS Norway instance with us, you get a dedicated kernel. If a neighbor on the physical hypervisor gets hit with a Dirty COW or Dirty Pipe exploit, the compromise stays trapped inside their own virtualized memory space. It cannot reach your kernel or your processes.
| Feature | Standard Container Hosting | CoolVDS (KVM) |
|---|---|---|
| Kernel Isolation | Shared (High Risk) | Dedicated (Hardware Virtualization) |
| Exploit Blast Radius | Entire Node | Single VM |
| Performance | Noisy Neighbors | Dedicated NVMe I/O |
| Compliance (Schrems II) | Varies | Strictly Norway Data Centers |
5. Supply Chain: Trust Nothing
After SolarWinds, we can't trust upstream images blindly. Even the official library images can have vulnerabilities. In 2022, integrating a scanner like Trivy into your CI pipeline is mandatory.
Do not deploy if High or Critical CVEs are found. Here is a GitLab CI snippet we use for internal tooling:
container_scanning:
  image:
    name: aquasec/trivy:0.24.0
    entrypoint: [""]
  stage: test
  script:
    - trivy image --exit-code 1 --severity HIGH,CRITICAL $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG
  allow_failure: false
  tags:
    - docker-runner

This fails the build immediately if a HIGH or CRITICAL vulnerability is detected.
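You can run the same check locally before you ever push, so the pipeline failure never surprises you (the image tag below is just a placeholder):

trivy image --exit-code 1 --severity HIGH,CRITICAL my-secure-app:1.0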
6. The Norwegian Context: Network & Data
Operating out of Norway gives us a unique advantage: Datatilsynet, the Norwegian Data Protection Authority, enforces a strict interpretation of GDPR. Data sovereignty is not a buzzword; it is a legal requirement.
With CoolVDS, your data physically resides in Oslo. But you must ensure your container networking doesn't leak. Use Kubernetes NetworkPolicies to deny all ingress traffic by default, and whitelist only specific namespaces.
kubectl create -f default-deny.yaml
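The default-deny.yaml itself is tiny. A sketch, assuming your workloads live in a namespace called production:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production    # assumed namespace; adjust to your own
spec:
  podSelector: {}           # empty selector matches every pod in the namespace
  policyTypes:
  - Ingress                 # no ingress rules listed, so all inbound traffic is denied

Keep in mind that NetworkPolicies are only enforced if your CNI plugin supports them; Calico and Cilium do, a bare flannel setup does not.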
Don't be the admin who leaves the kubelet API (port 10250) exposed to the internet. We see this in scans constantly. Firewall it. On CoolVDS, our default security groups block this, but on unmanaged providers, you are on your own.
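If you are handling the firewall yourself, a rule along these lines does the job (an iptables sketch, assuming your cluster nodes talk over a 10.0.0.0/8 private network):

# Allow the kubelet API only from the internal cluster network, drop everything else
iptables -A INPUT -p tcp --dport 10250 -s 10.0.0.0/8 -j ACCEPT
iptables -A INPUT -p tcp --dport 10250 -j DROP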
Conclusion: Paranoia is a Virtue
Security is not a product; it is a process of reducing surface area. By switching to non-root users, making filesystems read-only, and scanning images, you eliminate 90% of opportunistic attacks.
But for the remaining 10%—the kernel exploits and zero-days—you need architectural isolation. Do not run production containers on shared kernels. If you need low latency, data sovereignty, and the peace of mind that comes with KVM isolation, it is time to upgrade.
Secure your stack today. Deploy a hardened KVM instance on CoolVDS in under 55 seconds and stop worrying about your neighbors.