Container Security in 2016: Why Isolation is an Illusion (And How to Fix It)

Stop Trusting the Daemon: A Guide to Container Hardening

Let’s be honest for a second. We are all rushing to containerize everything. I’ve seen monoliths broken down into microservices overnight, deployed via Docker 1.11, and shipped to production without a second thought about what happens when—not if—that container gets compromised.

Here is the cold, hard truth: Containers are not Virtual Machines.

When you spin up a standard VPS on CoolVDS, you get KVM virtualization. That means you have your own kernel. If you panic your kernel, my neighbor in the rack doesn't feel a thing. In a container, you are sharing the kernel with the host. One successful privilege escalation attack, and the attacker owns the entire node. With the recent invalidation of the Safe Harbor agreement and the brand-new EU-US Privacy Shield framework just adopted this month, security isn't just a technical requirement anymore—it is a legal minefield for any Norwegian business handling user data.

1. The Root of All Evil

By default, processes inside a Docker container run as root. If you map a volume from the host to the container, and the process inside writes a file, that file is owned by root on your host. That is terrifying.
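You can spot the symptom from the host side with `stat`: check the numeric owner of anything the container wrote into the bind mount. The sketch below only simulates the audit (it creates the file as the current user, since we cannot assume a running daemon here); against a real bind-mounted directory, a result of `0` means root owns files on your host.

```shell
# Simulate auditing a bind-mounted directory from the host.
# (Illustration only: in a real audit, point this at the host path
# you passed to `docker run -v`.)
dir=$(mktemp -d)
touch "$dir/uploaded.png"

# Numeric UID of the file's owner. If a root-running container wrote
# this file, you would see 0 here.
owner=$(stat -c %u "$dir/uploaded.png")
echo "owner uid: $owner"

rm -r "$dir"
```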

I recently audited a setup for a client in Oslo where their web application was running as root inside a container. A vulnerability in the image processing library allowed remote code execution. Because they were root, the attacker could mount the host filesystem and wipe the database backups. Total disaster.

The Fix: Enforce User Context

Never let your production apps run as ID 0. Create a specific user in your `Dockerfile`.

FROM debian:jessie

# Create a group and user
RUN groupadd -r app && useradd -r -g app app

# Set ownership of the application directory
COPY . /usr/src/app
RUN chown -R app:app /usr/src/app

# Switch to the user
USER app

CMD ["node", "server.js"]
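One cheap guardrail is to fail your CI run if a Dockerfile never switches away from root. This is a sketch, not a full linter; the file name and regexes are assumptions, and a heredoc stands in for the Dockerfile that would normally live in your repo:

```shell
# Stand-in for the Dockerfile in your build context.
cat > Dockerfile.sample <<'EOF'
FROM debian:jessie
RUN groupadd -r app && useradd -r -g app app
USER app
CMD ["node", "server.js"]
EOF

# Pass only if a USER directive exists and it is not root/0.
if grep -Eq '^USER[[:space:]]+' Dockerfile.sample \
   && ! grep -Eq '^USER[[:space:]]+(root|0)[[:space:]]*$' Dockerfile.sample; then
  result=ok
else
  result=fail
fi
echo "USER check: $result"

rm Dockerfile.sample
```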

2. Drop Capabilities (You Don't Need Them)

Linux capabilities break down the power of root into distinct privileges. Does your Nginx container really need to change the system time (`CAP_SYS_TIME`) or load kernel modules (`CAP_SYS_MODULE`)? Absolutely not.

The philosophy here is the Principle of Least Privilege. We want to cripple the container's ability to harm the host.

Pro Tip: Start by dropping ALL capabilities and only adding back what is strictly necessary. This is the only way to sleep soundly at night.

When you run your container, pass these flags:

docker run -d -p 80:80 \
  --cap-drop=ALL \
  --cap-add=NET_BIND_SERVICE \
  --name web-server \
  nginx:1.10

If you are using Docker Compose (which you should be for local dev), you can define this in your version 2 syntax:

version: '2'
services:
  web:
    image: nginx:1.10
    cap_drop:
      - ALL
    cap_add:
      - NET_BIND_SERVICE
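To confirm the drop actually worked, read the capability bitmask straight out of `/proc`. The same interface works on any Linux process; inside the hardened container you would point it at PID 1 (for example `docker exec web-server grep Cap /proc/1/status`). Here we inspect our own process as an illustration:

```shell
# Every Linux process exposes its capability sets in /proc/<pid>/status.
grep Cap /proc/self/status

# CapEff is the effective set. With --cap-drop=ALL and
# --cap-add=NET_BIND_SERVICE you expect 0000000000000400:
# CAP_NET_BIND_SERVICE is capability number 10, so only bit 10 is set.
```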

3. The Dirty Secret of "Latest"

Using `node:latest` or `ubuntu:latest` in production is a rookie mistake. "Latest" is a moving target. The `latest` tag today might be safe, but tomorrow it might include a breaking change or a vulnerable library version.

Always pin your images by the SHA256 digest. This ensures immutability. If you deploy to your CoolVDS staging environment today, you want the exact same bits landing on production tomorrow.
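Digest pinning works because image content is addressed by SHA-256: change a single byte anywhere in the image and the digest changes completely, so the same digest always means the same bits. A quick local illustration with `sha256sum` (file names are arbitrary):

```shell
# Two almost-identical payloads...
printf 'nginx layer v1' > a.bin
printf 'nginx layer v2' > b.bin

# ...produce completely unrelated 64-character digests.
da=$(sha256sum a.bin | cut -d' ' -f1)
db=$(sha256sum b.bin | cut -d' ' -f1)
echo "$da"
echo "$db"

rm a.bin b.bin
```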

# Bad
FROM nginx:latest

# Better
FROM nginx:1.10.1

# Best (cryptographically guaranteed; digest shown is illustrative)
FROM nginx@sha256:0fe6413f3e30fcc5920bc8fa769280975b10b1c267b1bd95af69d829381408

4. Network Segmentation and Local Latency

Docker's default bridge network allows containers to talk to each other by IP. In a microservices architecture, your frontend shouldn't be able to talk directly to your database without going through the API gateway.
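In Compose v2 you can express that segmentation declaratively with named networks. A sketch with hypothetical service and image names, where the frontend and the database share no network, so only the api can reach the db:

```yaml
version: '2'
services:
  frontend:
    image: my-frontend:1.0   # hypothetical image
    networks: [edge]
  api:
    image: my-api:1.0        # hypothetical image
    networks: [edge, backend]
  db:
    image: postgres:9.5
    networks: [backend]

networks:
  edge:
  backend:
```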

Furthermore, consider where your physical data lives. With the uncertainty surrounding US-hosted data after the Safe Harbor collapse, relying on US cloud giants is risky. Latency matters too. If your customers are in Norway, routing traffic through Frankfurt or London adds unnecessary milliseconds.

Hosting on CoolVDS in our Oslo data center keeps your traffic local (low latency) and under Norwegian jurisdiction (Datatilsynet compliant). We utilize standard KVM virtualization, which provides a hard isolation boundary at the hardware level. If you are running containers, run them inside a KVM instance for that extra layer of defense-in-depth.

5. Read-Only Filesystems

If an attacker compromises your application, their first move is often to download a payload or modify a configuration file. Make their life miserable by mounting the container's root filesystem as read-only.

docker run --read-only -v /run/app/data:/data:rw my-app

This forces you to be explicit about where data is written. It’s painful to set up initially, but it prevents an entire class of persistence attacks.
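The same policy can live in your Compose file. Both `read_only` and `tmpfs` are standard v2 service options; the tmpfs mount gives the app scratch space for temp files without reopening the root filesystem. Image name and paths below are hypothetical:

```yaml
version: '2'
services:
  app:
    image: my-app            # hypothetical image
    read_only: true
    tmpfs:
      - /tmp
    volumes:
      - /run/app/data:/data:rw
```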

Summary: Defense in Depth

Security isn't a switch you flip. It's layers.

| Layer | Risk | Solution |
| --- | --- | --- |
| Application | Code vulnerabilities | Static analysis, code reviews |
| Container | Privilege escalation | Drop capabilities, non-root user |
| Host/OS | Kernel panic / shared resources | CoolVDS KVM instances (hardware isolation) |
| Network | DDoS / snooping | Private networks, TLS everywhere |

Containers are powerful, but they are leaky. Don't rely on the Docker daemon to protect you. Build your walls high, keep your kernels separate with KVM, and keep your data in Norway.

Ready to build a secure, compliant infrastructure? Deploy a high-performance NVMe instance on CoolVDS today and get root access in under 55 seconds.