Container Security: Hardening Production Workloads in 2023

I still see it every day. A glossy CI/CD pipeline, automated testing, and a Kubernetes cluster that scales automatically. It looks professional. Then I check the Dockerfile, and there it is: line one, FROM node:18. No user definition. The application runs as root.

In May 2023, treating containers like lightweight Virtual Machines is not just a bad habit; it is a liability. Containers are processes. They share the host kernel. If you are deploying containers in Norway or anywhere in Europe without understanding the isolation boundaries, you are one kernel exploit away from a data breach notification to Datatilsynet.

We are going to fix that. Today, we look at hardening the container supply chain, runtime security, and why your underlying infrastructure (the VPS) matters more than your Docker config.

The "Root" of All Evil

By default, the process inside a Docker container runs as PID 1 with root privileges. Unless you have configured user namespace remapping (which is not Docker's default), a container breakout hands the attacker root on your host node. The fix is boring but essential: never run as root.

Update your Dockerfiles to create a specific user.

# Create an unprivileged group and user (BusyBox/Alpine syntax;
# on Debian-based images like node:18, use groupadd/useradd instead)
RUN addgroup -S appgroup && adduser -S appuser -G appgroup

# Run all subsequent instructions (and the container itself) as appuser
USER appuser

However, simply changing the user isn't enough for Kubernetes environments. You must also enforce this at the Pod level using the securityContext. In Kubernetes 1.26 and 1.27 (the current releases), Pod Security Policies (PSP) are gone, replaced by Pod Security Standards (PSS) enforced by the built-in Pod Security Admission controller, which you configure with namespace labels.
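
A minimal sketch of opting a namespace into the restricted PSS profile (the namespace name production is hypothetical; the pod-security.kubernetes.io labels are the standard admission-controller ones):

apiVersion: v1
kind: Namespace
metadata:
  name: production   # hypothetical namespace name
  labels:
    # Reject any pod that violates the "restricted" standard
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: v1.27
    # Surface warnings for violations without blocking
    pod-security.kubernetes.io/warn: restricted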

Here is how a production-ready Deployment manifest should look in 2023. Note the explicit drop of Linux capabilities.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: secure-backend
  labels:
    app: backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      # Pod-level defaults: apply to every container in the pod
      securityContext:
        runAsUser: 1000
        runAsGroup: 3000
        fsGroup: 2000
        runAsNonRoot: true
      containers:
      - name: backend
        image: my-registry/backend:v1.4.2
        # Container-level hardening: no escalation, immutable rootfs, zero capabilities
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          capabilities:
            drop:
            - ALL
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: tmp-volume
          mountPath: /tmp
      volumes:
      - name: tmp-volume
        emptyDir: {}

This configuration does three critical things: it prevents privilege escalation, forces the root filesystem to be read-only (so an attacker cannot write malware to disk), and drops all Linux capabilities, adding back only what is strictly necessary, as sketched below.
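
If your application genuinely needs a capability, add back only that one. A minimal sketch for an app that must bind a port below 1024 (whether yours actually needs NET_BIND_SERVICE is an assumption; everything else stays dropped):

        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE   # only if the app binds ports < 1024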

The Supply Chain: Trust Nothing

In 2023, the threat isn't just someone hacking your server; it's someone injecting code into the libraries you use. Log4j taught us that. You need to scan your images before they ever touch your cluster.

Tools like Trivy or Grype should block your CI pipeline when serious vulnerabilities are found. Do not rely on Docker Hub's default scanning alone.

Pro Tip: Use "Distroless" images from Google or Chainguard. They contain only your application and its runtime dependencies. No shell, no package manager, no noise. If an attacker gets in, they can't even run ls.
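
A minimal multi-stage sketch for the Node.js example from the intro (the gcr.io/distroless/nodejs18-debian11 tag and the server.js entry file are assumptions; check the distroless repository for current tags):

# Build stage: full Node image, npm available
FROM node:18 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .

# Runtime stage: no shell, no package manager, just the Node runtime
FROM gcr.io/distroless/nodejs18-debian11
COPY --from=build /app /app
WORKDIR /app
USER nonroot                 # distroless ships a built-in non-root user
CMD ["server.js"]            # the image's entrypoint is already "node"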

Here is a snippet for a GitHub Actions workflow that breaks the build if Critical or High severity vulnerabilities are found:

name: Security Scan
on: [push]

jobs:
  trivy-security:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Build an image from Dockerfile
        run: docker build -t my-app:${{ github.sha }} .

      - name: Run Trivy vulnerability scanner
        uses: aquasecurity/trivy-action@0.10.0
        with:
          image-ref: 'my-app:${{ github.sha }}'
          format: 'table'
          exit-code: '1'
          ignore-unfixed: true
          vuln-type: 'os,library'
          severity: 'CRITICAL,HIGH'

Network Segmentation: The Firewall Inside

By default, all pods in a Kubernetes cluster can talk to each other. Your frontend can talk to your database. Your logging agent can talk to the billing service. This is a flat network, and it is dangerous.

If you are hosting on CoolVDS, you benefit from our hardware-level firewalling and DDoS protection at the edge. But inside the cluster, you need NetworkPolicies. Think of this as the internal firewall.

A "Default Deny" policy is your best friend. It ensures that no traffic flows unless you explicitly allow it.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress

Once that is applied, nothing works. Panic sets in. Then, you selectively open ports.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
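
One common trap: the default-deny policy above also blocks Egress, which silently breaks DNS resolution for every pod in the namespace. A sketch of an egress rule that restores DNS, assuming your cluster DNS runs in kube-system with the conventional k8s-app: kube-dns label (true for most distributions):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
spec:
  podSelector: {}            # applies to every pod in the namespace
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53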

The Compliance Trap: GDPR and Schrems II

Technical security means nothing if you are legally exposed. Since the Schrems II ruling, relying on US-owned cloud providers for processing European personal data has become a legal minefield. Transfer mechanisms are under constant scrutiny.

This is where infrastructure choice becomes a security feature. Hosting in Norway, on Norwegian-owned infrastructure like CoolVDS, simplifies your compliance posture. Your data resides in Oslo. It falls under Norwegian jurisdiction and GDPR, not the US CLOUD Act.

Furthermore, local peering matters. Connecting to the Norwegian Internet Exchange (NIX) means your traffic to Norwegian users often never leaves the country, reducing both latency and interception risk.

Infrastructure Isolation: Containers vs. VMs

Containers provide soft isolation using cgroups and namespaces. Virtual Machines provide hard isolation using hypervisors. This distinction is vital.
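
You can verify this yourself: a container receives fresh namespace handles, yet runs on the host's kernel (the alpine image here is just for illustration):

# Each container gets its own set of namespace links...
docker run --rm alpine ls -l /proc/self/ns/

# ...but uname inside the container reports the HOST kernel version
docker run --rm alpine uname -r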

If you have high-security requirements, you should not trust multi-tenant container platforms where you don't control the kernel. The safest architecture is running your container orchestrator (like K3s or MicroK8s) on top of dedicated KVM-based Virtual Private Servers.

At CoolVDS, we use KVM (Kernel-based Virtual Machine) for all instances. This ensures that your memory and CPU instructions are isolated at the hardware virtualization level. Noisy neighbors can't steal your CPU cycles, and a kernel panic in a neighbor's VM won't crash your workload.

Quick Configs for Immediate Impact

1. Nginx Header Hardening: Don't leak server info.

server_tokens off;
add_header X-Frame-Options "SAMEORIGIN";
add_header X-Content-Type-Options "nosniff";

2. Seccomp Profiles: Restrict syscalls. The old seccomp.security.alpha.kubernetes.io annotations are deprecated since Kubernetes 1.19 and no longer honored on current clusters; set the profile via the securityContext instead.

securityContext:
  seccompProfile:
    type: RuntimeDefault

3. AppArmor Loading:

# Load a profile on the host before running the container
apparmor_parser -r -W /etc/apparmor.d/containers/docker-default
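
To attach a loaded profile to a Kubernetes pod (AppArmor is still annotation-based in 1.27; the container name backend matches the Deployment above, and the profile name must match what the loaded file declares):

annotations:
  container.apparmor.security.beta.kubernetes.io/backend: localhost/docker-default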

Conclusion

Security is not a product; it is a process of reduction. You reduce the attack surface by using minimal images. You reduce privilege by dropping root. You reduce lateral movement with network policies. And finally, you reduce infrastructure risk by choosing a provider that offers true KVM isolation and NVMe performance without noisy neighbor issues.

Don't wait for a CVE to force your hand. Start by hardening your base images today.

Need a sandbox to test these configurations? Deploy a KVM-based instance on CoolVDS in under 55 seconds and lock it down.