
Container Security in 2018: Hardening Docker for Production in the Norwegian Enterprise

Stop Treating Containers Like Lightweight VMs

If I see one more root-running container in a production environment, I might just unplug the server myself. It is 2018, and the rush to containerize everything from monolithic CMS installations to microservices in Oslo's tech hubs is moving faster than security teams can audit. We are trading security for deployment speed, and eventually, that debt comes due.

Here is the hard truth: Containers do not provide hard isolation. They are ordinary processes constrained by namespaces and cgroups, and they share the host kernel. If you are running a multi-tenant environment on bare metal without proper syscall filtering, a kernel panic triggered from one container takes down the whole ship. Worse, with the Spectre and Meltdown vulnerabilities revealed earlier this year, shared kernel memory is a battlefield we are all still learning to map.

In this guide, we are going to lock down a Docker environment using tools and techniques available right now. No theory, just survival tactics.

The "Dirty Image" War Story

Three months ago, a client migrated a Magento stack to a Docker Swarm cluster. To save time, a junior developer pulled a pre-baked mysql-optimized image from a public Docker Hub repository instead of the official library. It had five stars. It also had a modified entrypoint script that piped environment variables (including AWS keys and DB credentials) to a remote server in Eastern Europe.

We caught it because we strictly monitor outbound traffic on ports 53 and 443. But the lesson stands: Trust nothing you didn't build yourself.

1. The Root Problem (Literally)

By default, processes inside a Docker container run as root. If a kernel vulnerability like Dirty COW (CVE-2016-5195, the copy-on-write race) surfaces, an attacker breaking out of the container lands on your host as root. Game over.

You must enforce user switching in your Dockerfile. This is non-negotiable for any service facing the public internet.

FROM alpine:3.8

# Create a group and user
RUN addgroup -S appgroup && adduser -S appuser -G appgroup

# Install dependencies
RUN apk add --no-cache nginx

# Give the unprivileged user the writable paths nginx needs
RUN mkdir -p /run/nginx && \
    chown -R appuser:appgroup /run/nginx /var/lib/nginx /var/log/nginx

# Tell Docker to switch context
USER appuser

CMD ["nginx", "-g", "daemon off;"]

If you are using legacy apps that demand root to bind to port 80, stop. Put a reverse proxy in front, or use the net.ipv4.ip_unprivileged_port_start sysctl (available since kernel 4.11) to allow non-root binding. Do not compromise security for convenience.
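
A minimal sketch of the sysctl route, assuming a host kernel of 4.11 or newer and the example image built above:

docker run -d \
  --sysctl net.ipv4.ip_unprivileged_port_start=80 \
  -p 80:80 \
  hardened-nginx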

2. Capabilities: Drop 'Em All

The Linux kernel divides root privileges into distinct units called capabilities. A web server does not need NET_RAW (crafting raw packets) or MKNOD (creating device nodes). Yet Docker grants both by default, along with roughly a dozen other capabilities.

The most secure approach is a whitelist strategy: drop everything, then add back only what is strictly necessary. Here is how I run Nginx containers:

docker run -d \
  --name secure-web \
  --cap-drop=ALL \
  --cap-add=NET_BIND_SERVICE \
  --cap-add=CHOWN \
  --cap-add=SETGID \
  --cap-add=SETUID \
  nginx:1.15-alpine

This leaves the container with four capabilities instead of the fourteen Docker grants by default. Even if an attacker compromises the Nginx process, they cannot craft raw packets for ARP spoofing, create device nodes, or write to the host's audit log.
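
You can confirm what is left by reading the effective capability mask of PID 1 and decoding it on the host (capsh ships with libcap):

docker exec secure-web grep CapEff /proc/1/status
# Feed the hex mask into: capsh --decode=<mask>
# It should list only the four capabilities added above.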

Pro Tip: Use --read-only whenever possible. This mounts the container's root filesystem as read-only. If an attacker manages to run a script, they cannot write it to disk to establish persistence. You will need tmpfs mounts for the few paths that must stay writable, such as /run or /tmp.
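
Putting it together with the capability flags, a sketch of a read-only Nginx container (the tmpfs paths are the ones the official Alpine image writes to; adjust them for your own images):

docker run -d \
  --name secure-web-ro \
  --read-only \
  --tmpfs /var/run \
  --tmpfs /var/cache/nginx \
  --cap-drop=ALL \
  --cap-add=NET_BIND_SERVICE \
  --cap-add=CHOWN \
  --cap-add=SETGID \
  --cap-add=SETUID \
  nginx:1.15-alpine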

3. The GDPR Angle: Data Residency & Isolation

Since May 25th, GDPR has changed the landscape for us in Norway. Datatilsynet is not interested in your excuses about "ephemeral infrastructure." If you are processing the personal data of EU/EEA residents, you need to know exactly where that data lives and who can access the physical hardware.

This is where the architecture of your VPS provider matters. Many budget providers use OpenVZ or LXC (Linux Containers) to oversell resources. In those setups, you are essentially in a container next to other customers, sharing the same kernel. From a security and compliance standpoint, this is a nightmare.

At CoolVDS, we exclusively use KVM (Kernel-based Virtual Machine). Each VPS has its own dedicated kernel. This provides a hard hardware virtualization layer between your Docker host and our infrastructure. If a neighbor's container initiates a kernel panic or attempts a Spectre-based side-channel attack, your data remains isolated in your own memory space.

Norwegian Latency & Security

Security is also about availability (the 'A' in CIA triad). If you are hosting mission-critical applications for Norwegian users in a datacenter in Frankfurt or Amsterdam, you are adding 20-30ms of latency and crossing multiple borders. Hosting locally in Oslo ensures your data stays within Norwegian jurisdiction and keeps latency under 5ms for local users.

4. Runtime Security & Auditing

Static analysis isn't enough. You need to know what's happening now. In 2018, the toolset is maturing. I highly recommend looking at Sysdig Falco. It parses system calls from the kernel and alerts you on suspicious behavior.

Here is a basic Falco rule to detect a shell spawning inside a container—something that should never happen in production:

- rule: shell_in_container
  desc: notice shell activity within a container
  condition: >
    container.id != host and
    proc.name = bash and
    evt.type = execve and
    evt.dir = <
  output: "Shell spawned in a container (user=%user.name container_id=%container.id image=%container.image.repository)"
  priority: WARNING

Implementing this gives you a fighting chance to kill a compromised container before data exfiltration begins.
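
One way to deploy it is to run Falco itself as a privileged container and mount your rule into the local rules file. The sysdig/falco image, the host mounts, and the /etc/falco/falco_rules.local.yaml convention below follow the Falco documentation of this era; double-check them against the version you install:

docker run -d --name falco --privileged \
  -v /var/run/docker.sock:/host/var/run/docker.sock \
  -v /dev:/host/dev \
  -v /proc:/host/proc:ro \
  -v /boot:/host/boot:ro \
  -v /lib/modules:/host/lib/modules:ro \
  -v /usr:/host/usr:ro \
  -v $(pwd)/falco_rules.local.yaml:/etc/falco/falco_rules.local.yaml \
  sysdig/falco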

5. Network Segmentation

Do not use the default docker0 bridge for everything. If one container gets compromised on the default bridge, it can ARP spoof and sniff traffic from other containers. Create user-defined networks.

# Create an isolated network
docker network create --driver bridge --subnet 172.18.0.0/16 isolated_backend

# Run DB only on this network
docker run -d --net isolated_backend --name mongo_db mongo:3.6

This ensures that your public-facing web container can talk to the database, but random other containers cannot probe it.
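
For the front end, attach it to both its public network and the backend one; user-defined networks also give you DNS by container name, so the app can reach mongo_db without hard-coded IPs. The web_frontend name here is just a placeholder:

# Attach an already-running web container to the backend network
docker network connect isolated_backend web_frontend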

Summary: The Defense in Depth Approach

Layer   | Action             | Tool/Flag
--------|--------------------|-----------------------
Host    | Hard Isolation     | CoolVDS KVM / NVMe
Kernel  | Limit Syscalls     | --cap-drop / Seccomp
Image   | Vulnerability Scan | Clair / Anchore
Network | Segmentation       | User-defined networks
Runtime | Audit              | Sysdig Falco

Container security in 2018 requires vigilance. The tools are there, but the defaults are dangerous. By stripping privileges, enforcing user isolation, and choosing a hosting provider that understands the difference between "soft" container isolation and "hard" KVM virtualization, you protect your infrastructure and your customers.

Do not let a default configuration be the reason you end up in a Datatilsynet report. Build it right, lock it down, and host it where it matters.

Ready to harden your stack? Deploy a KVM-based, NVMe-powered instance on CoolVDS today and get root access in under 55 seconds.