Docker is Not a Security Strategy: Hardening Containers for Production in 2019

Stop Trusting Default Configurations

Let's be brutally honest: docker run is not a security policy. If you are deploying containers into production using default settings, you aren't just opening a door; you're taking the hinges off. I've spent the last six months cleaning up a mess for a fintech client in Oslo who thought "isolation" meant running everything as root inside a container. It didn't end well.

In 2019, container adoption is exploding, but our security practices are stuck in 2015. We treat containers like lightweight VMs. They aren't. They are processes on a shared kernel. If that kernel has a vulnerability—and after the Meltdown and Spectre panic of 2018, we know it might—your isolation is theoretical at best.

Here is how we lock down the stack, from the image build to the bare metal (or in this case, the KVM layer).

1. The Root Cause: Drop Privileges Immediately

By default, a process inside a Docker container runs as root. If an attacker manages to break out of the container runtime—a scenario that happens more often than Docker Inc. likes to admit—they are root on your host node. Game over.

Stop writing Dockerfiles like this:

FROM node:10
COPY . /app
CMD ["node", "index.js"]

This runs as UID 0. Instead, create a dedicated user and switch to it. It adds a few lines to your Dockerfile and saves you a lifetime of headaches.

FROM node:10-alpine
WORKDIR /app
COPY . .
# Create a group and user
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
# Tell Docker to switch to this user
USER appuser
CMD ["node", "index.js"]

When you deploy this on a CoolVDS NVMe instance, even if the application inside the container is compromised, the attacker lands in a restricted, unprivileged user context on a hardened Linux OS.
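It takes seconds to confirm the image no longer runs as root. A quick sanity check, assuming the hardened Dockerfile above is built and tagged my-secure-app (the tag name is illustrative):

```shell
# Build the hardened image (tag is illustrative)
docker build -t my-secure-app .

# Print the UID of the container's main process;
# a non-root image reports something other than 0
docker run --rm my-secure-app id -u
```

If that second command ever prints 0, the USER directive is missing or a later build stage reset it.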

2. Immutable Infrastructure: Read-Only Filesystems

In a proper microservices architecture, your containers should be stateless. If they are stateless, why does the application need write access to its own binary folders? It doesn't.

An attacker can't install a backdoor or crypto-miner if the filesystem denies write access. Force read-only mode at runtime. If your app needs to write temp files (like logs or pid files), mount a tmpfs volume for those specific paths.

Here is how you do it with the CLI:

docker run --read-only \
  --tmpfs /run \
  --tmpfs /tmp \
  -d my-secure-app

Or in your Kubernetes pod spec:

securityContext:
  readOnlyRootFilesystem: true
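In Kubernetes, the equivalent of those --tmpfs flags is a writable emptyDir mounted over the paths the app must scribble in. A sketch of a full pod spec, with hypothetical names throughout:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-secure-app
spec:
  containers:
    - name: app
      image: my-secure-app:latest   # illustrative image name
      securityContext:
        readOnlyRootFilesystem: true
      volumeMounts:
        - name: tmp
          mountPath: /tmp           # the only writable path
  volumes:
    - name: tmp
      emptyDir:
        medium: Memory              # tmpfs-backed; nothing persists to disk
```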

3. Kernel Capabilities: Drop 'Em All

Linux capabilities break down the power of root into distinct privileges. Does your Nginx web server need to audit system logs or load kernel modules? Absolutely not. Yet, by default, Docker grants a wide array of these capabilities.

The most secure approach is a whitelist: drop everything, then add back only what is strictly necessary. This is known as the Principle of Least Privilege.

docker run --cap-drop=ALL --cap-add=NET_BIND_SERVICE nginx

This command strips all privileges but allows the process to bind to a port lower than 1024 (like port 80). If an exploit hits Nginx, the damage radius is severely limited.

Pro Tip: Use --security-opt=no-new-privileges to prevent processes from gaining more privileges during execution (e.g., via setuid binaries). It’s a simple flag that blocks entire classes of escalation attacks.
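Putting the flags from sections 1 through 3 together, a hardened launch might look like the following sketch (the image name is a placeholder; adjust the tmpfs mounts to whatever your app actually writes):

```shell
# Hardened container launch: read-only root, writable tmpfs,
# all capabilities dropped except binding low ports,
# and no privilege escalation via setuid binaries
docker run -d \
  --read-only \
  --tmpfs /tmp \
  --cap-drop=ALL \
  --cap-add=NET_BIND_SERVICE \
  --security-opt=no-new-privileges \
  my-secure-app
```

Each flag closes a different escalation path, so they compose: losing any one of them still leaves the others standing.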

4. The Isolation Fallacy: Why You Need KVM

Containers share the host kernel. This is great for efficiency but terrifying for security. A kernel panic in one container crashes the whole node. A kernel exploit in one container exposes all neighbors.

This is where the underlying infrastructure matters. At CoolVDS, we don't oversell bare-metal container hosting where you share a kernel with strangers. We provide KVM-based VPS instances.

KVM (Kernel-based Virtual Machine) provides hardware-level virtualization. Your VPS has its own kernel, totally separate from other tenants. You can run your Docker or Kubernetes clusters inside your CoolVDS instance. If a neighbor on the physical host gets hacked, your memory space and kernel are protected by the hypervisor. It's the only way to run containers securely in a multi-tenant environment.

Comparison: Container vs. KVM Isolation

| Feature        | Docker Container  | CoolVDS KVM VPS           |
|----------------|-------------------|---------------------------|
| Kernel         | Shared with host  | Dedicated kernel          |
| Attack surface | High (syscalls)   | Low (hypervisor)          |
| Boot time      | Milliseconds      | Seconds (~55s on CoolVDS) |
| Security level | Process isolation | Hardware virtualization   |

5. Norwegian Data Sovereignty & GDPR

It has been less than a year since GDPR went into full effect (May 2018), and we are already seeing the Datatilsynet (Norwegian Data Protection Authority) ramp up audits. If you are storing customer data—names, IPs, logs—inside a container, you need to know exactly where that physical drive spins.

US-based cloud providers are under scrutiny. The "Privacy Shield" framework is shaky ground. For Norwegian businesses, the safest bet is keeping data on Norwegian soil.

CoolVDS operates out of Oslo. Your data sits on NVMe storage within the jurisdiction of Norway. When you define your storage volumes in Docker:

docker volume create --driver local \
  --opt type=none \
  --opt device=/mnt/nvme_data \
  --opt o=bind \
  my_secure_data

You can tell your Data Protection Officer (DPO) with 100% certainty that the bytes are physically located in Oslo, minimizing latency to your local users and maximizing legal compliance.

6. Network Policies: Don't Talk to Strangers

By default, all containers on a bridge network can talk to each other. Your database shouldn't be chatting with your frontend load balancer on arbitrary ports. If you are running a simple Docker Compose setup, create specific networks.

version: '3'
services:
  web:
    networks:
      - front-tier
  db:
    networks:
      - back-tier

networks:
  front-tier:
  back-tier:

For those of you scaling up to Kubernetes (1.13 is solid right now), implement NetworkPolicy resources to whitelist traffic. Deny all ingress by default, then open specific paths.
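A default-deny ingress policy fits on one screen. This sketch (namespace and labels are placeholders) first blocks all inbound traffic in the namespace, then whitelists a single flow from the web tier to the database:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production
spec:
  podSelector: {}    # matches every pod in the namespace
  policyTypes:
    - Ingress        # no ingress rules listed, so all inbound traffic is denied
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-to-db
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: db        # applies to database pods
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: web   # only web pods may connect
      ports:
        - protocol: TCP
          port: 5432     # and only on the Postgres port
```

Remember that NetworkPolicy is enforced by your CNI plugin; on a network backend without policy support, these objects are silently ignored.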

Final Thoughts

Security isn't a product you buy; it's a configuration you maintain. Containers offer incredible velocity for DevOps teams, but they strip away the cozy blanket of virtualization we got used to. You have to build the warmth back in yourself.

Start with a secure foundation. Don't run your clusters on shared hosting or oversold budget buckets. You need dedicated resources and hardware isolation.

Ready to harden your stack? Spin up a KVM-based instance in Oslo. Test your latency and I/O speeds. If you need a secure, compliant home for your containers, check out the CoolVDS High-Performance NVMe Plans.