Stop Trusting Default Container Configurations: A 2024 Survival Guide
Let's cut the pleasantries. If you are running Docker containers as root in production today, you aren't deploying software; you are deploying a liability. The illusion of isolation provided by namespaces and cgroups is thinner than most DevOps engineers admit. Just last week, at the end of January 2024, the community was rocked by the disclosure of CVE-2024-21626, dubbed "Leaky Vessels." This runc vulnerability lets a malicious image or a compromised container break out onto the host filesystem through a leaked file descriptor.
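Before anything else, check whether your nodes are even patched. A quick sanity check, run on the container host itself (assuming `runc` is on the PATH there):

```bash
# Run this on the host node, not inside a container.
# runc 1.1.12 is the first release that patches CVE-2024-21626;
# anything older means your container engine needs an upgrade.
runc --version
```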
I've spent the last decade cleaning up messy clusters across Europe. I've seen banking apps in Oslo fail penetration tests because a developer left a Kubernetes dashboard open to the world, and I've watched startups burn cash recovering from crypto-mining injections. Security isn't a product you buy; it's a discipline of reducing surface area.
This guide isn't about theoretical best practices. It is about the specific, battle-tested configurations we use to secure workloads, specifically for the Nordic market where GDPR and Datatilsynet (The Norwegian Data Protection Authority) compliance is non-negotiable.
1. The Root Cause: Drop Your Privileges
The most common sin in containerization is the default user. By default, a process inside a Docker container runs as PID 1 with root privileges. If that process breaks out (via a kernel exploit), the attacker has root on your host node. The fix is boring but essential: create a non-root user.
In your Dockerfile, stop doing this:
```dockerfile
FROM node:20-alpine
WORKDIR /app
COPY . .
CMD ["npm", "start"]
```

Do this instead. Force the UID/GID to a high number to avoid conflicts with host users:
```dockerfile
FROM node:20-alpine
# Create a group and user with explicit IDs
RUN addgroup -S appgroup -g 10001 && \
    adduser -S appuser -u 10001 -G appgroup
WORKDIR /app
# Change ownership of the application files
COPY --chown=appuser:appgroup . .
# Switch to the non-root user (by UID, so admission checks can verify it)
USER 10001
CMD ["node", "index.js"]
```

Pro Tip: When using Kubernetes, enforce this at the cluster level. Pod Security Admission (the mechanism that replaced `PodSecurityPolicy` when PSP was removed in v1.25) can reject any pod that tries to run as root.
2. Lock Down System Calls with Seccomp
Your Node.js API does not need to change the system clock. It does not need to load kernel modules. Yet, by default, it can try. Seccomp (Secure Computing Mode) acts as a firewall for system calls. Docker has a decent default profile, but for high-security environments—like handling payment data in Norway—you need to be stricter.
Here is how you run a container with a custom profile. First, verify what your container is actually allowed to do:
```bash
# Check what seccomp mode the container actually gets by default
docker run --rm ubuntu:22.04 grep Seccomp /proc/self/status
```

If you see `Seccomp: 0`, you are running naked (that is what you get when someone passes `--security-opt seccomp=unconfined`). You want `Seccomp: 2` (filtering).
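To attach a stricter custom profile, point the runtime at your JSON file. A sketch, assuming you have authored a profile named `payment-api-seccomp.json` (the filename is illustrative):

```bash
# Attach a hand-written seccomp profile to this container only.
# Syscalls not allowed by the profile fail according to its defaultAction
# (typically SCMP_ACT_ERRNO or SCMP_ACT_KILL).
docker run --rm \
  --security-opt seccomp=./payment-api-seccomp.json \
  coolvds-registry/payment-api:v2.4
```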
For Kubernetes, you can apply a seccomp profile via the Pod spec. The `seccompProfile` field has been stable since v1.19, and since v1.27 the kubelet can even default new pods to `RuntimeDefault`. Here is a manifest that explicitly requires the default runtime profile, preventing any 'unconfined' mistakes:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secure-backend-pod
  labels:
    app: payment-processor
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  containers:
    - name: api
      image: coolvds-registry/payment-api:v2.4
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop:
            - ALL
        readOnlyRootFilesystem: true
```

Notice `readOnlyRootFilesystem: true`. This is a nightmare for attackers. Even if they get a shell, they cannot write their malware to disk.
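The one practical snag is software that expects a writable `/tmp`. Rather than relaxing the root filesystem, mount an ephemeral `emptyDir` over the specific paths that need writes. A minimal fragment to graft onto the pod above (the volume name is arbitrary):

```yaml
# The root filesystem stays read-only; only /tmp is writable,
# backed by a pod-scoped, ephemeral emptyDir volume.
spec:
  containers:
    - name: api
      volumeMounts:
        - name: tmp-scratch
          mountPath: /tmp
  volumes:
    - name: tmp-scratch
      emptyDir: {}
```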
3. Supply Chain Security: Trust Nothing
In 2024, we don't just worry about our code; we worry about what our code depends on. Software Bill of Materials (SBOM) is the buzzword of the year, but functionally, you need to be scanning images before they hit your runtime.
We use Trivy in our CI/CD pipelines. It catches OS vulnerabilities and language-specific dependency issues.
```bash
trivy image --severity HIGH,CRITICAL coolvds/internal-tool:latest
```

If you are deploying to a client in Bergen or Trondheim, they will ask about your patching cycle. Showing them an automated report generated by Trivy or Grype builds trust faster than any sales pitch.
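Two follow-up commands turn that scan into a hard CI gate and produce the SBOM your client will eventually ask for (same image as above; wire these into whatever CI system you use):

```bash
# Fail the pipeline (non-zero exit code) if any HIGH or CRITICAL finding exists
trivy image --exit-code 1 --severity HIGH,CRITICAL coolvds/internal-tool:latest

# Generate a CycloneDX SBOM you can archive or hand to auditors
trivy image --format cyclonedx --output sbom.cdx.json coolvds/internal-tool:latest
```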
4. Runtime Security: The Last Line of Defense
Static analysis is great, but what happens when a zero-day hits? You need runtime visibility. This is where eBPF (Extended Berkeley Packet Filter) shines. Tools like Falco listen to the kernel stream and alert on suspicious behavior in real-time.
Here is a custom Falco rule designed to detect if a shell is spawned in a container—a classic sign of a breach:
```yaml
- rule: Terminal Shell in Container
  desc: A shell was used as the entrypoint for a container.
  condition: >
    spawned_process and
    container and
    shell_procs and
    proc.tty != 0 and
    container_entrypoint
  output: "Shell spawned in a container (user=%user.name container_id=%container.id image=%container.image.repository)"
  priority: WARNING
```

Implementing this ensures that even if an attacker bypasses your WAF, you know the second they try to explore the filesystem.
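To get a rule like this live on a stock package install of Falco, append it to the local rules file that Falco layers on top of its bundled ruleset, then restart the service. The rule filename below is an example, and the systemd unit name can differ depending on which driver variant you installed:

```bash
# Append the custom rule to Falco's local override file and reload the service.
cat terminal-shell-rule.yaml | sudo tee -a /etc/falco/falco_rules.local.yaml
sudo systemctl restart falco

# Watch alerts stream in; spawning a shell inside any container should now fire a WARNING.
sudo journalctl -u falco -f
```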
5. The Infrastructure Layer: Why CoolVDS Wins on Security
You can harden containers all day, but if your underlying VPS is running on a shared kernel with weak isolation (like older OpenVZ implementations), you are building a castle on sand. This is where the architecture of CoolVDS makes a tangible difference.
We rely strictly on KVM (Kernel-based Virtual Machine). When you spin up a CoolVDS instance, you get a dedicated kernel. If a neighbor on the physical host crashes their kernel, your instance keeps humming. This hardware-level virtualization is critical for mitigating "noisy neighbor" issues and side-channel attacks.
Storage & Compliance
| Feature | Standard VPS | CoolVDS Architecture |
|---|---|---|
| Storage Backend | SATA / Hybrid SSD | Enterprise NVMe (Low Latency) |
| Virtualization | Container-based (LXC/OpenVZ) | Full KVM (Hardware Isolation) |
| Data Residency | Often unclear / Mixed EU | Strictly Norway (Oslo Data Centers) |
| DDoS Protection | Basic L3/L4 | Advanced L7 Mitigation |
For Norwegian businesses, the Schrems II ruling essentially mandates that personal data must be protected from extra-territorial surveillance. Hosting on CoolVDS ensures your data sits on servers physically located in Oslo, under Norwegian jurisdiction. We don't just route traffic here; the bytes live here.
6. Network Policies: Isolate the Blast Radius
By default, all pods in a Kubernetes cluster can talk to each other. That is a security flaw. If your frontend gets compromised, it shouldn't be able to scan your database directly. Use NetworkPolicies to whitelist traffic.
Here is a strict policy that denies all ingress traffic by default, forcing you to explicitly allow what is needed:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
```

Then, allow traffic only from the ingress controller to your frontend:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-frontend
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: frontend
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: ingress-nginx
      ports:
        - protocol: TCP
          port: 80
```

Conclusion: Performance Meets Security
Security often comes with a performance tax. Encryption costs CPU cycles. Seccomp filters add micro-latency. However, on high-performance infrastructure, this tax is negligible. CoolVDS NVMe instances are tuned to handle the I/O overhead of heavy logging and monitoring agents like Falco without choking your application throughput.
Don't wait for a ransom note to take container security seriously. Audit your Dockerfiles, enable Seccomp, and ensure your host infrastructure is as isolated as your containers claim to be.
Ready to harden your stack? Deploy a secure, KVM-based instance in Oslo on CoolVDS today. Experience raw NVMe power with the compliance peace of mind you need.