Container Security in 2018: A Sysadmin’s Guide to Locking Down Docker on Linux

Stop Running Containers as Root: A Survival Guide for Nordic Ops

Let’s be honest. You pulled a Docker image from the Hub, wrote a quick docker-compose.yml, and pushed it to production. It works. The developers are happy. But if you are running that container with default settings, you haven't just deployed an application; you have potentially handed over root access to your host system.

We are seeing a massive shift in 2018. Everyone is moving from monolithic architectures to microservices, and Kubernetes is winning the orchestration war against Swarm. But security is lagging behind. In the rush to ship, we forget that a container is just a process with some cgroups and namespaces wrapped around it. It shares the host's kernel. If that kernel panics, or a single syscall allows an escape, your host is toast.

I’ve spent the last month auditing infrastructure for a mid-sized fintech in Oslo. They were GDPR compliant on paper, but their container runtime was wide open. Here is how we fixed it, and how you can prevent Datatilsynet (The Norwegian Data Protection Authority) from handing you a fine.

1. The "Root" of All Evil

By default, the user inside a Docker container is root (uid 0). If an attacker compromises your application (say, via a Struts vulnerability), they are effectively root inside the container. If they then manage a container breakout—via a kernel flaw like Dirty COW (CVE-2016-5195)—they are root on the host.

The Fix: Create a specific user in your Dockerfile. Never let the process run as UID 0.

FROM alpine:3.8

# Create a group and user
RUN addgroup -S appgroup && adduser -S appuser -G appgroup

# Move to working directory
WORKDIR /app

# Copy binary/source
COPY . .

# Change ownership
RUN chown -R appuser:appgroup /app

# Switch user
USER appuser

CMD ["./my-app"]
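You can confirm the switch actually took effect by asking the container who it thinks it is. A quick sanity check (the image tag `my-app` here is just an assumption for illustration):

```shell
# Build the image, then print the effective uid/gid inside the container.
# "--rm" cleans up the throwaway container afterwards.
docker build -t my-app .
docker run --rm my-app id
# The output should name appuser/appgroup, NOT uid=0(root)
```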

2. Drop Capabilities Like They’re Hot

Linux kernel capabilities break the privileges of the root user down into distinct units. Does your Nginx web server need to load kernel modules? No. Does your Node.js app need to change the system clock? Absolutely not.

By default, Docker drops many capabilities, but the set it keeps (including CAP_NET_RAW, which permits crafting raw packets) is still generous. You should operate on a whitelist basis: drop everything, then add back only what is necessary.

Here is how a battle-hardened docker run command looks:

docker run -d \
  --cap-drop=ALL \
  --cap-add=NET_BIND_SERVICE \
  --read-only \
  --tmpfs /tmp \
  --name secure-nginx \
  nginx:mainline-alpine

Breakdown:

  • --cap-drop=ALL: Strips all privileges.
  • --cap-add=NET_BIND_SERVICE: Allows binding to ports below 1024 (like port 80).
  • --read-only: Mounts the container's root filesystem as read-only. Hackers can't install backdoors if they can't write to disk.
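You can verify what survived the drop by reading the capability bitmask of the container's PID 1 straight out of /proc. A minimal check, assuming the secure-nginx container from above is running:

```shell
# Print the effective capability set of the container's init process.
# CAP_NET_BIND_SERVICE is capability number 10, so a container that
# dropped ALL and added only NET_BIND_SERVICE should show the bitmask
# 0000000000000400 (bit 10 set). A fully stripped container shows all zeros.
docker exec secure-nginx grep CapEff /proc/1/status
```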

3. The Infrastructure Layer: Why KVM Matters

This is where your choice of hosting provider becomes a security decision. Many budget VPS providers use OpenVZ or LXC containers to host your containers. This is "Inception" style nesting: containers inside containers sharing a single kernel with hundreds of other customers.

If a neighbor on that noisy node triggers a kernel panic or exploits a kernel vulnerability, your data is at risk. It’s a violation of isolation principles.

Pro Tip: Always run container workloads on KVM-based virtualization. KVM provides a hardware-assisted virtualization layer. Each VPS has its own kernel. This is the standard architecture at CoolVDS. We don't oversell, and we don't share kernels. When you spin up an instance in our datacenter, you get true isolation.

4. Network Segmentation and Local Latency

Don’t expose the Docker socket (/var/run/docker.sock) to containers unless you absolutely trust them. It’s effectively root access without a password. Also, ensure your containers are not listening on 0.0.0.0 if they only need to talk to other containers.
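Before locking anything down, it helps to know whether the socket is already leaking. A quick audit sketch using standard `docker inspect` Go templating (the `-r` flag on xargs is GNU-specific):

```shell
# List every running container with its host-side mount sources,
# then flag any container that bind-mounts the Docker socket.
docker ps -q \
  | xargs -r docker inspect --format '{{.Name}}: {{range .Mounts}}{{.Source}} {{end}}' \
  | grep docker.sock
```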

Use User-Defined Bridges:

# Create a private network
docker network create --driver bridge --subnet 172.18.0.0/16 backend_net

# Connect db only to backend
docker run -d --net backend_net --name db postgres:9.6
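The same principle applies to published ports: if a service only needs to be reached by a reverse proxy on the same host, bind it to loopback instead of all interfaces. A sketch—the image name and port are placeholders for illustration:

```shell
# Publish the port on 127.0.0.1 only; it is unreachable from outside
# the host, while other containers on backend_net can still reach it.
docker run -d \
  --net backend_net \
  -p 127.0.0.1:8080:8080 \
  --name api \
  my-api:1.0
```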

Regarding latency: If your users are in Oslo or Stockholm, hosting in US-East is killing your application's responsiveness. The speed of light is a hard limit. With GDPR now in full effect (as of May 25th), keeping data within the EEA is not just good for ping times; it's often legally required.

5. Verify Your Images

Supply chain attacks are real. In 2018, we've seen crypto-miners injected into public images on Docker Hub. Do not use :latest. It is not reproducible, and you don't know what you are getting.

Use Docker Content Trust to verify the publisher:

export DOCKER_CONTENT_TRUST=1
docker pull alpine:3.8
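Content trust verifies the publisher; pinning by digest goes one step further and makes the pull itself immutable. A sketch—the `<digest>` below is a placeholder, substitute the sha256 value you resolve yourself:

```shell
# Resolve the digest for a tag you have already pulled and verified
docker images --digests alpine

# From now on, reference the image by its immutable digest, not the tag.
# (<digest> is a placeholder, not a real value.)
docker pull alpine@sha256:<digest>
```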

The Bottom Line

Security is a trade-off between convenience and risk. Dropping capabilities and making filesystems read-only breaks things. It requires testing. But the alternative is a compromised server and a very awkward conversation with your CTO.

You need a solid foundation. You need dedicated resources, NVMe storage for fast I/O (essential when running multiple containers doing disk writes), and true virtualization isolation. CoolVDS provides the raw KVM power and low latency connectivity required for serious Docker and Kubernetes deployments in the Nordics.

Don't let your infrastructure be the weak link. Secure your configurations, and deploy on a host that respects isolation.

Ready to harden your stack? Deploy a KVM NVMe instance on CoolVDS today and test your secure Docker configs in a safe environment.