Container Security in 2018: Why Your Docker Strategy is a Ticking Time Bomb

Let’s be honest for a second. Most of the Docker containers running in production today are security nightmares wrapped in a fancy YAML file. I’ve audited three Kubernetes clusters in Oslo this month alone, and every single one of them was running containers as root. In a post-Meltdown world, that is not just negligent; it is professional suicide.

We need to stop treating containers like lightweight Virtual Machines. They aren't. They are processes sharing a kernel. If that kernel panics, or if a syscall isn't properly namespaced, your isolation is gone. With the GDPR enforcement that kicked in this past May, the Datatilsynet (Norwegian Data Protection Authority) isn't going to accept "we thought Docker was secure by default" as a valid excuse for a data breach.

1. The Root Addiction: Just Stop It

Docker's default behavior is to run processes as root. That makes development easy and production dangerous. If an attacker manages to break out of the container (which, given the history of runC escape vulnerabilities, is a non-zero probability), they are root on your host. Game over.

You need to create a specific user in your Dockerfile. Do not rely on the host's UID mapping unless you have configured User Namespaces (which, let's face it, most of you haven't because it breaks volume mounting half the time).
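If you do decide to enable User Namespaces despite the volume-mounting headaches, it is a one-line daemon change. The sketch below assumes a systemd host and the standard `/etc/docker/daemon.json` location; adjust for your distro.

```shell
# Enable user-namespace remapping: root inside a container maps to an
# unprivileged subordinate UID on the host. This changes the ownership
# semantics of bind-mounted volumes, so test your mounts afterwards.
cat > /etc/docker/daemon.json <<'EOF'
{
  "userns-remap": "default"
}
EOF
systemctl restart docker
```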

The Fix

Here is how we handle this in our base images at CoolVDS. We set a static GID/UID to ensure consistency across the fleet.

FROM alpine:3.8

# Create a group and user with fixed IDs for fleet-wide consistency
RUN addgroup -S -g 10001 appgroup && \
    adduser -S -u 10001 -G appgroup appuser

# Tell Docker to use this user
USER appuser

# Now this runs as non-root
ENTRYPOINT ["/app/run.sh"]
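A quick sanity check after building (the `myapp:audit` tag is just an example, not one of our published images):

```shell
# Build the image, then confirm the runtime UID is non-zero.
# If this prints 0, something in the Dockerfile reset the USER.
docker build -t myapp:audit .
docker run --rm --entrypoint id myapp:audit -u
```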

2. Capability Dropping: The Kernel Diet

By default, Docker grants a significant list of capabilities to a container, including NET_BIND_SERVICE, CHOWN, and SETUID. Does your Node.js API really need to change file ownership? Probably not.

In 2018, the philosophy is simple: Deny everything, then allow only what is necessary. We use the --cap-drop flag relentlessly.

docker run --cap-drop=ALL --cap-add=NET_BIND_SERVICE --name web-app coolvds/nginx:latest

Pro Tip: If you are unsure which capabilities your application actually needs, don't guess. Use strace during a staging run to see which syscalls are failing when you drop caps. It is tedious, but it beats a privilege escalation exploit.
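In practice that staging run looks something like this. It assumes a staging variant of your image (here the hypothetical `my-app:staging`) with strace installed; the entrypoint path is also illustrative.

```shell
# Drop everything, run the app under strace, and look for syscalls
# failing with EPERM -- each one usually maps to a missing capability
# that you can then add back explicitly with --cap-add.
docker run --rm --cap-drop=ALL my-app:staging \
  strace -f /app/run.sh 2>&1 | grep 'EPERM'
```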

3. The Immutable Infrastructure Promise

I see developers SSH-ing into containers to "fix" config files. This defeats the entire purpose of containerization. If a container is compromised, you want it to be useless to the attacker. That means read-only filesystems.

When we deploy high-security financial workloads for our clients in Bergen, we force the container filesystem to be read-only. Any data that needs to persist goes to a mounted volume, preferably an external block storage unit if you are on a proper provider like CoolVDS.

docker run --read-only -v /run/app/temp:/tmp:rw my-app

This neutralizes the majority of commodity exploit kits, which rely on downloading and executing payloads into writable paths like /var/www or /app.
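You can prove the behavior to yourself in one line (alpine is used here purely as a stand-in image):

```shell
# With --read-only and no rw mount, any write to the rootfs fails
# with "Read-only file system" and the container exits non-zero.
docker run --rm --read-only alpine:3.8 \
  sh -c 'touch /usr/bin/evil' || echo "write blocked by read-only rootfs"
```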

4. Network Policies and The Local Context

Norway has excellent connectivity. Latency from Oslo to the rest of Europe is negligible, but that pipe goes both ways. If you leave your management ports open, the entire internet will rattle your doorknob.

If you are using Kubernetes (1.11 or 1.12), you should be defining NetworkPolicies. By default, K8s allows all pods to talk to all pods. If your frontend gets popped, the attacker can port scan your database directly. Lock it down.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-access
  namespace: production
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 5432
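Applying and verifying the policy is straightforward (the filename below is an assumption; save the manifest however you like). Keep in mind that NetworkPolicy objects are only enforced if your CNI plugin supports them, e.g. Calico or Cilium; on a plain kubenet cluster they are silently ignored.

```shell
# Apply the policy, then confirm the API server registered it and
# inspect which ingress sources it allows.
kubectl apply -f db-access-netpol.yaml
kubectl describe networkpolicy db-access -n production
```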

5. The Infrastructure Layer: Why CoolVDS Matters

All these software configurations are useless if the underlying hardware is oversubscribed or insecure. This is where the "noisy neighbor" problem becomes a security issue, not just a performance one.

Shared hosting or budget VPS providers often disable hardware-level virtualization features to pack more tenants onto a single hypervisor. This exposes you to side-channel attacks.

At CoolVDS, we took a different route. We utilize KVM (Kernel-based Virtual Machine) for strict isolation. Even if you are running Docker inside a CoolVDS instance, that instance is hardware-isolated from other tenants. We combine this with local NVMe storage arrays. Why? Because when you are scanning container images for vulnerabilities (using tools like Clair or Anchore), I/O becomes your bottleneck.
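For reference, a typical Anchore scan loop looks roughly like this (anchore-cli subcommands as of the 2018-era releases; treat the exact invocations as an assumption and check your version's help output):

```shell
# Submit an image for analysis, wait for the engine to finish, then
# list OS-package CVEs. The analysis step is I/O-heavy -- this is
# where local NVMe storage earns its keep.
anchore-cli image add docker.io/library/nginx:latest
anchore-cli image wait docker.io/library/nginx:latest
anchore-cli image vuln docker.io/library/nginx:latest os
```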

Performance Meets Compliance

With GDPR now fully active, data residency is critical. Hosting your containers on a US-controlled cloud might subject you to the CLOUD Act, complicating your compliance posture. CoolVDS infrastructure is physically located in Norway. Your data stays here, protected by Norwegian privacy laws.

Final Thoughts

Security isn't a product you buy; it's a process you suffer through. In 2018, the tools are there. Docker 18.06 is stable. Kubernetes is maturing. But the defaults are still dangerous.

Don't wait for a breach to audit your stack. Start by dropping root, dropping capabilities, and moving your workloads to infrastructure that respects isolation.

Ready to harden your infrastructure? Spin up a KVM-isolated, NVMe-powered instance on CoolVDS today. We handle the hardware security so you can focus on the code.