Container Security in 2025: Stop Treating Your Cluster Like a VM Farm

Let’s be honest. Most of you are running Docker or Kubernetes containers as root. I see it in audits every week. You pull an image from Docker Hub, slap it onto a production server, and pray the isolation holds.

It won't.

If the 2024 xz-utils backdoor taught us anything, it's that supply chains are fragile. Containers share the host kernel. That is their efficiency superpower, but it is also their Achilles' heel. If an attacker escapes a container in a shared hosting environment, they aren't just in your pod; they are potentially probing the host OS.

I have spent the last decade architecting systems across the Nordics, from high-frequency trading platforms in Oslo to data-sovereign health archives. The rules have changed. Here is how we secure containers in 2025, keeping performance high and the Datatilsynet (Norwegian Data Protection Authority) happy.

1. The Hard Truth About Isolation (And Why KVM Matters)

Containers are essentially fancy Linux processes using namespaces and cgroups. They are not VMs. If you run a container on a cheap, oversold VPS provider using container-based virtualization (like OpenVZ/LXC), a kernel panic in a neighbor's container can take you down.
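You can verify this yourself. The sketch below (assuming a local Docker daemon and a running container named web; both are illustrative) resolves a container's main process to a host PID and lists its namespaces:

```shell
# Assumes Docker is installed and a container named "web" is running.
# Resolve the container's init process to its PID on the host.
PID=$(docker inspect -f '{{.State.Pid}}' web)

# The container's "isolation" is just these namespace handles.
sudo ls -l /proc/"$PID"/ns
# Entries like net, pid, mnt, and uts point at namespace inodes --
# the same kernel objects any ordinary Linux process has.
```

From the host's perspective, your container is one entry in ps. That is why the hypervisor boundary underneath it matters.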

This is where the infrastructure choice becomes a security decision. You need a hard boundary.

Architect's Note: We built CoolVDS on KVM (Kernel-based Virtual Machine) for this exact reason. Even if you are running a Kubernetes cluster inside your VPS, you want that VPS to have its own dedicated kernel. KVM ensures that your memory pages and CPU instructions are hardware-isolated from other tenants on the physical node. Do not compromise on this.

2. Supply Chain: Trust Nothing, Verify Everything

In 2025, pulling :latest is negligence. You need to pin image digests and scan specifically for the vulnerabilities that matter to your stack.
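Digest pinning in practice looks like this (a sketch; the resolved digest is whatever your registry returns, shown here as a placeholder):

```shell
# Resolve the current digest behind a mutable tag.
docker pull node:22-alpine
docker inspect -f '{{index .RepoDigests 0}}' node:22-alpine
# Copy the node@sha256:... value this prints.

# Then pin the Dockerfile to that immutable digest instead of the tag:
# FROM node:22-alpine@sha256:<digest-from-above>
```

Now a compromised or retagged upstream image cannot silently land in your build.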

We use tools like Trivy or Grype in the CI pipeline, but you must configure them to fail the build. Do not just log warnings. If a critical CVE is found, the pipeline dies. Period.
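A minimal CI gate with Trivy looks like this (image name reused from the manifest later in this post; severity thresholds are a policy choice, not a rule):

```shell
# Scan the built image and fail the job on unpatched critical/high findings.
# --exit-code 1 makes trivy return non-zero, which kills the CI pipeline.
trivy image \
  --exit-code 1 \
  --severity CRITICAL,HIGH \
  --ignore-unfixed \
  my-app:1.4.5
```

The --ignore-unfixed flag keeps the gate actionable: it only fails on CVEs that already have a patched version you can upgrade to.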

The Minimalist Base Image

Stop using a full OS image like ubuntu:24.04 when your app only needs a runtime. Use distroless images. Shell access is a vulnerability, not a feature.

# BAD PRACTICE
FROM node:22
WORKDIR /app
COPY . .
CMD ["node", "server.js"]

# BETTER PRACTICE (2025 Standard)
FROM node:22-alpine AS build
WORKDIR /app
COPY . .
RUN npm ci --omit=dev

FROM gcr.io/distroless/nodejs22-debian12
COPY --from=build /app /app
WORKDIR /app
CMD ["server.js"]

By removing the shell, you make it significantly harder for an attacker to run arbitrary commands even if they exploit an RCE in your application.
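You can confirm the shell is really gone (a quick local check against the distroless image used above):

```shell
# Distroless ships no /bin/sh, so forcing a shell entrypoint fails
# with an exec error instead of dropping you into a prompt.
docker run --rm --entrypoint /bin/sh \
  gcr.io/distroless/nodejs22-debian12 -c "id"
```

The same failure greets an attacker whose exploit payload tries to spawn sh or bash inside the container.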

3. Runtime Security with eBPF

Static analysis is fine, but what happens when a zero-day hits? You need runtime visibility. By 2025, eBPF has matured from a buzzword to a requirement for serious production environments.

We use Falco to monitor syscalls in real-time. It sits at the kernel level (which, again, requires a proper KVM environment like CoolVDS to load custom modules or leverage eBPF probes effectively).

Here is a Falco rule we deploy to detect writes below binary directories, effectively alerting on potential ransomware or unauthorized updates:

- rule: Write below binary dir
  desc: an attempt to write to any file below a set of binary directories
  condition: >
    bin_dir and evt.dir = < and open_write
    and not package_mgmt_procs
    and not coreos_write_ssh_dir
    and not exe_running_docker_save
  output: "File below a known binary directory opened for writing (user=%user.name command=%proc.cmdline file=%fd.name)"
  priority: CRITICAL

When this fires, we don't just log it. We isolate the pod immediately.
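A minimal version of that isolation step, assuming your CNI enforces NetworkPolicy (e.g. Calico or Cilium) and the pod lives in a namespace called production (both assumptions, not part of the rule above): pre-install a deny-all policy that matches a quarantine label, then apply the label when the alert fires.

```shell
# One-time setup: a deny-all policy selecting quarantined pods.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: quarantine
  namespace: production
spec:
  podSelector:
    matchLabels:
      quarantine: "true"
  policyTypes:
  - Ingress
  - Egress
EOF

# When Falco fires, the responder (or an automated webhook) cuts the pod off:
kubectl label pod secured-app -n production quarantine=true --overwrite
```

The pod keeps running for forensics, but it can no longer talk to anything.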

4. Locking Down the Kubernetes Context

If you are orchestrating containers, your securityContext is your first line of defense. By default, Kubernetes is too permissive.

In a recent project migrating a fintech workload to a Norwegian data center, we enforced the following policy. If a manifest didn't have this, the OPA Gatekeeper rejected it.

apiVersion: v1
kind: Pod
metadata:
  name: secured-app
spec:
  containers:
  - name: main
    image: my-app:1.4.5
    securityContext:
      allowPrivilegeEscalation: false
      runAsNonRoot: true
      runAsUser: 10001
      readOnlyRootFilesystem: true
      capabilities:
        drop:
        - ALL
        add:
        - NET_BIND_SERVICE

Breakdown:

  • allowPrivilegeEscalation: false: Prevents the process from gaining more privileges than it started with (neutering setuid binaries).
  • readOnlyRootFilesystem: true: Attackers can't download rootkits if they can't write to the disk. You must mount an explicit volume for anything that needs to be writable.
  • drop: ALL / add: NET_BIND_SERVICE: Strips every Linux capability, then adds back only the one needed to bind ports below 1024 as a non-root user.
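If the app needs scratch space under readOnlyRootFilesystem, mount the writable path explicitly. A sketch (volume name and mount path are illustrative):

```yaml
# Fragment: writable /tmp via emptyDir while the root filesystem stays read-only.
spec:
  containers:
  - name: main
    securityContext:
      readOnlyRootFilesystem: true
    volumeMounts:
    - name: tmp
      mountPath: /tmp
  volumes:
  - name: tmp
    emptyDir: {}
```

Everything outside that mount remains immutable at runtime.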

5. The Norwegian Context: Latency and Legality

Security isn't just code; it's physics and law. For our clients operating out of Oslo or Stavanger, data residency is paramount under GDPR and the Schrems II ruling.

When you host on hyperscalers, you often lose transparency on exactly where the data physically resides or which legal entity controls the encryption keys.

Using a local provider with NVMe storage and a clear jurisdictional footing ensures two things:

  1. Compliance: Data stays within Norwegian jurisdiction.
  2. Performance: Latency to the Norwegian Internet Exchange (NIX) is often under 2ms.

We tested database I/O on CoolVDS NVMe instances against general-purpose cloud tiers. The difference isn't subtle. When you are running security scans (like ClamAV or runtime I/O watchers) on every file access, slow disk I/O kills your application's responsiveness.
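Benchmarks like these are easy to reproduce on your own instance. A sketch using fio and ping (job parameters are illustrative, and the ping target is a placeholder you'd replace with a host at the exchange):

```shell
# 4K random-read IOPS test against the instance's disk (requires fio).
fio --name=randread \
    --rw=randread \
    --bs=4k \
    --size=1G \
    --ioengine=libaio \
    --iodepth=32 \
    --direct=1 \
    --runtime=30 \
    --time_based

# Round-trip latency from the instance toward the exchange:
ping -c 10 <nix-peer-address>
```

Run both from inside the VPS, not from your laptop, or you are measuring your own ISP.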

| Metric           | Standard Cloud HDD           | CoolVDS NVMe  |
|------------------|------------------------------|---------------|
| Rand Read IOPS   | 400-600                      | 15,000+       |
| Sequential Write | 80 MB/s                      | 1,200 MB/s    |
| Latency (Oslo)   | ~15 ms (routed via Stockholm) | ~2 ms (local) |

Conclusion

Container security is a discipline of layers. You strip the container image, you restrict the runtime kernel calls, and you isolate the network. But ultimately, your container is only as secure as the hypervisor it runs on.

Don't let noisy neighbors or shared-kernel vulnerabilities compromise your stack. Build on a foundation that respects isolation.

Ready to lock it down? Deploy a KVM-isolated, NVMe-powered instance on CoolVDS today and test your latency from Oslo.