Hardening Containers in 2023: Production-Grade Security for Norwegian Infrastructure

The "Root" of All Evil: Why Your Containers Are Not Secure by Default

Let’s cut the marketing fluff. If you are running Docker or Kubernetes with default settings, you are essentially handing the keys to your infrastructure to the first script kiddie who finds an exploit in your dependency tree. I've seen it happen. A simple Node.js application, unpatched for three weeks, running as root inside a container. One RCE (Remote Code Execution) later, the attacker wasn't just in the container; they were probing the host kernel.

In the Norwegian tech scene, where we pride ourselves on reliability and strict adherence to privacy laws, this negligence is unacceptable. Whether you are deploying to a cluster in Oslo or a distributed setup across the Nordics, security happens in layers. The most critical layer? The virtualization beneath your containers.

1. The Fallacy of Namespace Isolation

Containers are not Virtual Machines. They share the host kernel. If you are using a cheap VPS provider that oversells resources and uses shared-kernel virtualization (like OpenVZ), a kernel panic in a neighbor's container can take your production DB down. This is why CoolVDS exclusively uses KVM (Kernel-based Virtual Machine). We provide a hardware-level abstraction. Your kernel is yours.

Pro Tip: Always verify your isolation level. On a shared-kernel platform like OpenVZ, uname -r reports the provider's kernel, you cannot upgrade it, and you cannot load your own kernel modules. If that describes your server, you are on shared hosting, not a true VPS. Move immediately.
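
A few quick checks from inside the VPS make this concrete (a sketch; exact output and module availability vary by distro):

systemd-detect-virt    # reports "kvm" on a genuine KVM guest, "openvz" or "lxc" on shared-kernel platforms
uname -r               # on shared-kernel platforms this is the provider's kernel and you cannot change it
sudo modprobe dummy    # loading a harmless module typically works on KVM and fails on shared-kernel virtualization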

2. Supply Chain: Trust No One

In 2023, the attack vector has shifted left. It's not just about your code; it's about the base image. If you are pulling node:latest or ubuntu:latest, you are pulling in hundreds of megabytes of packages you never audit, and every one of them is a potential CVE.

Use minimal images like Alpine or, better yet, Distroless. They lack shells, making it significantly harder for an attacker to pivot if they do gain entry.
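
As an illustration, here is a minimal multi-stage sketch that builds a Node.js app and ships only the runtime on a Distroless base (the app layout, the server.js entrypoint and the Node version are assumptions; adapt them to your project):

# Build stage: full toolchain, never shipped to production
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .

# Runtime stage: no shell, no package manager
FROM gcr.io/distroless/nodejs20-debian12
WORKDIR /app
COPY --from=build /app /app
USER nonroot
CMD ["server.js"]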

Scanning Before Deployment

You cannot fix what you cannot see. Integrate a scanner like Trivy into your CI/CD pipeline before the image ever hits your CoolVDS instance.

# The wrong way: Deploying blind
docker run my-app:latest

# The 2023 Standard: Scan first
trivy image --severity HIGH,CRITICAL my-app:latest

# If using CI (GitLab CI example snippet)
container_scanning:
  script:
    - trivy image --exit-code 1 --severity CRITICAL $IMAGE_TAG

3. Runtime Hardening: Drop Those Capabilities

By default, Docker grants every container a fixed set of Linux capabilities (CHOWN, NET_RAW and others). Most web apps do not need them. The principle of least privilege applies strictly here.
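
If you are running plain Docker rather than Kubernetes, the same principle can be expressed with run-time flags. A minimal sketch (the image name my-app:latest is just a placeholder):

# Drop every default capability, add back only what the workload genuinely needs,
# block privilege escalation via setuid binaries, and keep the root filesystem read-only.
docker run --rm \
  --cap-drop=ALL \
  --cap-add=NET_BIND_SERVICE \
  --security-opt no-new-privileges \
  --read-only \
  --user 101:101 \
  my-app:latest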

Here is how you secure a deployment configuration for a standard Nginx service. We drop all capabilities and only add back NET_BIND_SERVICE if absolutely necessary (though you should ideally run on high ports).

apiVersion: apps/v1
kind: Deployment
metadata:
  name: secure-nginx
spec:
  selector:
    matchLabels:
      app: secure-nginx
  template:
    metadata:
      labels:
        app: secure-nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25-alpine
        securityContext:
          allowPrivilegeEscalation: false
          runAsUser: 101 # nginx user
          runAsGroup: 101
          runAsNonRoot: true
          readOnlyRootFilesystem: true
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
        volumeMounts:
        - mountPath: /var/cache/nginx
          name: cache-volume
        - mountPath: /var/run
          name: run-volume
      volumes:
      - name: cache-volume
        emptyDir: {}
      - name: run-volume
        emptyDir: {}

Notice the readOnlyRootFilesystem: true line. This prevents an attacker from dropping a crypto-miner or a rootkit script onto the container's filesystem. The only writable paths are the explicitly mounted emptyDir volumes, and those are wiped when the pod restarts. If they can't write anywhere durable, they can't persist.

4. Network Policies: The Firewall Inside the Cluster

On a standard network, you have firewalls. In Kubernetes, pods talk to each other freely by default. This is dangerous. If your frontend gets compromised, it shouldn't be able to scan your internal database ports.

Implement a default "Deny All" policy, then add explicit allow rules for the flows you actually need. This ensures that only deliberately permitted traffic moves inside the cluster.

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
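
With the default deny in place, each legitimate flow gets its own allow rule. Here is a sketch that lets only the backend pods reach the database on its service port; the app: postgres and app: backend labels and port 5432 are assumptions, so match them to your own manifests. Keep in mind that a deny-all Egress policy also blocks DNS lookups, so you will normally need an egress rule for your cluster DNS as well.

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-backend-to-db
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: postgres      # assumed label on the database pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: backend   # assumed label on the pods allowed to connect
    ports:
    - protocol: TCP
      port: 5432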

When you run this on CoolVDS, dedicated KVM CPU resources keep the overhead of these kernel-level packet filters (iptables/IPVS) negligible. Low latency is critical when every packet is being inspected.

5. The Norwegian Context: Data Sovereignty & GDPR

We are operating in a post-Schrems II world. Relying on US-based cloud providers for sensitive Norwegian user data involves complex legal gymnastics (Standard Contractual Clauses). Datatilsynet (The Norwegian Data Protection Authority) has been clear: you must know where your data lives physically.

Hosting containers on CoolVDS provides a distinct advantage here:

  • Data Residency: Your volumes reside on NVMe arrays physically located in Oslo.
  • Latency: If your user base is in Norway, routing traffic through Frankfurt or London adds 20-30ms of unnecessary latency. Local hosting keeps RTT (Round Trip Time) under 5ms for most domestic users.
  • Compliance: It is easier to demonstrate compliance when you have direct contract relationships with a Norwegian/European host rather than an opaque hyperscaler.

6. Resource Quotas: Preventing the "Noisy Neighbor" Effect

Even if you secure the perimeter, a memory leak can kill your node. In K8s, requests and limits are not optional; they are mandatory for stability.

Without limits, a single pod can consume all available RAM on the node and trigger the kernel OOM killer, which may end up terminating processes far more critical than the leaking pod (node daemons like `kubelet` or `sshd` included).

resources:
  requests:
    memory: "64Mi"
    cpu: "250m"
  limits:
    memory: "128Mi"
    cpu: "500m"

Final Thoughts: The Infrastructure Matters

You can write the most secure Dockerfile in the world, but if the underlying server is compromised or unstable, it means nothing. Security is not a feature you install; it is an architectural discipline.

At CoolVDS, we don't just sell VPS instances. We provide the isolated, high-performance KVM foundations that allow battle-hardened DevOps teams to build secure container platforms. We handle the hardware, the NVMe arrays, and the network DDoS protection, so you can focus on your `securityContext`.

Is your infrastructure leaking data? Audit your cluster today, and if you need a pristine, isolated environment to test your hardened configs, spin up a CoolVDS instance. It takes less than 60 seconds.