Container Security is a Minefield: Hardening Docker for Production in 2017
Let’s be honest: docker run is addictive. It’s fast, it’s reproducible, and it solves the "works on my machine" dilemma. But in the rush to ship microservices, too many engineering teams in Oslo and Bergen are deploying containers with the security posture of a wet paper bag. I have audited environments where production databases were running as root inside containers, exposing the entire host to privilege escalation attacks.
If you are treating containers like lightweight Virtual Machines, you are going to get burned. Containers are just processes with a fancy view of the operating system. They share the kernel. If that kernel has a vulnerability (remember Dirty COW from last year?), and your isolation is weak, your data is gone.
With GDPR enforcement looming in 2018, the Norwegian Data Protection Authority (Datatilsynet) won't accept "but it was in a container" as an excuse for a data breach. Here is how we lock down Docker infrastructure effectively, using tools and methods available right now.
1. Stop Running as Root. Period.
By default, Docker containers run as root. This is a design flaw that favors usability over security. If an attacker manages to break out of the container (via a kernel exploit or a misconfiguration), they are root on your host server. Game over.
I still see Dockerfile definitions that look like this:
FROM node:7
COPY . /app
CMD ["node", "index.js"]
This runs your Node application with UID 0. Do not do this. Create a specific user and switch to it; two extra lines in your Dockerfile shut the door on an entire class of privilege escalation attacks.
FROM node:7-alpine
# Create a group and user
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
WORKDIR /app
COPY . .
# Change ownership
RUN chown -R appuser:appgroup /app
# Switch to non-root user
USER appuser
CMD ["node", "index.js"]
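A quick way to confirm the change (the image tag here is purely illustrative): build the image and run id in place of the default command. It should report something other than uid=0.
# Build and check that the container no longer runs as UID 0
docker build -t node-app:hardened .
docker run --rm node-app:hardened id
# roughly: uid=100(appuser) gid=101(appgroup) groups=101(appgroup)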
Pro Tip: In your CI/CD pipeline, grep for USER instructions. If a Dockerfile lacks one, fail the build. It is harsh, but it keeps your infrastructure clean.
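If you want a concrete starting point, here is a minimal sketch of that gate as a plain shell step; the search path and message are placeholders to adapt to your repository layout:
#!/bin/sh
# Fail the pipeline if any Dockerfile never switches away from root
status=0
for f in $(find . -name 'Dockerfile*' -type f); do
    if ! grep -qE '^[[:space:]]*USER[[:space:]]' "$f"; then
        echo "FAIL: $f has no USER instruction" >&2
        status=1
    fi
done
exit $status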
2. Drop Linux Capabilities
The Linux kernel breaks down root privileges into distinct units called "capabilities". A standard web server container does not need to change the system clock (CAP_SYS_TIME) or load kernel modules (CAP_SYS_MODULE). Yet, Docker gives containers a broad set of these by default.
The safest approach is whitelisting: drop everything, then add back only what is strictly necessary. We use this approach for all customer-facing services hosted on our internal CoolVDS clusters.
# Drop everything, then add back only what this nginx image needs: bind port 80, chown its temp dirs, and setuid/setgid to the worker user
docker run -d \
  --cap-drop=ALL \
  --cap-add=NET_BIND_SERVICE \
  --cap-add=CHOWN \
  --cap-add=SETGID \
  --cap-add=SETUID \
  --name web_server \
  nginx:1.12
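It is worth verifying that the whitelist actually took effect. The effective capability mask of the container's init process is visible in /proc, and it should contain only a handful of bits instead of Docker's much broader default:
# Effective capabilities of PID 1 inside the hardened container
docker exec web_server grep CapEff /proc/1/status
# Compare with an unhardened container; the default mask is far larger
docker run --rm nginx:1.12 grep CapEff /proc/1/status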
If you are using Docker Compose (v2 or v3), you can define this in your YAML:
version: '3'
services:
  nginx:
    image: nginx:1.12
    cap_drop:
      - ALL
    cap_add:
      - NET_BIND_SERVICE
      - CHOWN
      - SETGID
      - SETUID
    ports:
      - "80:80"
3. The "No-New-Privileges" Flag
Even if you run as a non-root user, binaries with the setuid bit set (like sudo or ping) can elevate privileges during execution. Since Docker 1.11, we have had a powerful security option called no-new-privileges. It prevents a process from gaining more privileges than its parent, effectively neutralizing setuid binaries.
Pass this flag at runtime:
docker run --security-opt=no-new-privileges:true ...
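If you want proof that the bit is actually set, kernels from 4.10 onwards expose it in /proc. The container name below is a throwaway used only for this check:
# Start a test container with the flag and inspect the kernel-level bit
docker run -d --name nnp-test --security-opt=no-new-privileges:true nginx:1.12
docker exec nnp-test grep NoNewPrivs /proc/1/status   # expect: NoNewPrivs: 1
docker rm -f nnp-test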
4. Isolation Matters: The Case for KVM over OpenVZ
This is where the infrastructure layer becomes critical. Many budget VPS providers in Norway are still reselling OpenVZ containers. In an OpenVZ environment, you are essentially running a container inside a container. You share the host kernel with every other customer on that physical box.
Why is this bad for Docker?
- Kernel Panics: If a neighbor triggers a kernel panic, your server goes down.
- Incompatibility: You cannot load specific kernel modules required for advanced networking (like Overlay networks) because you don't own the kernel.
- Security: Shared kernel exploits affect everyone.
At CoolVDS, we exclusively use KVM (Kernel-based Virtual Machine) for our VPS Norway instances. KVM provides hardware-level virtualization. Your VPS has its own dedicated kernel. If you want to enable SELinux or AppArmor specifically for your Docker daemon, you can. If you need to patch your kernel immediately after a CVE disclosure without waiting for us, you can.
| Feature | Shared Kernel (OpenVZ/LXC) | Dedicated Kernel (CoolVDS KVM) |
|---|---|---|
| Kernel Isolation | No (Shared) | Yes (Dedicated) |
| Docker Compatibility | Limited | Full |
| Neighbor Risk | High | Near Zero |
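Because you own the kernel on a KVM instance, you can also check which of these security features your Docker daemon has actually picked up; the exact output varies by Docker version and distribution:
# List the kernel security modules the Docker daemon has detected
docker info | grep -i -A3 'security options'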
5. Read-Only Filesystems
Immutability is a core tenet of modern DevOps. Once a container is running, it should not need to write to its own filesystem, except for specific directories like /tmp or /run. By mounting the root filesystem as read-only, you prevent attackers from downloading malicious scripts or modifying configurations.
docker run --read-only \
--tmpfs /run \
--tmpfs /tmp \
my-app:latest
This simple flag breaks a surprising number of malware scripts that assume they can write to /var/www or /bin.
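A quick sanity check makes the effect obvious (alpine is used here only because it is small):
# Writes to the root filesystem fail; writes to the tmpfs mounts succeed
docker run --rm --read-only --tmpfs /tmp alpine:3.6 \
    sh -c 'touch /usr/bin/evil; touch /tmp/scratch && echo "tmpfs is writable"'
# touch: /usr/bin/evil: Read-only file system
# tmpfs is writable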
6. Network Segmentation
The legacy --link flag is deprecated; do not use it. Use user-defined bridge networks instead. By default, all containers sit on the docker0 bridge and can talk to each other via IP, which allows an attacker who compromises your frontend web server to probe your database directly.
Isolate your stacks. Your Redis cache should only accept traffic from the API container, not the public internet.
# Create isolated network
docker network create backend-net
# Attach containers (use a real secret instead of the placeholder password)
docker run -d --net=backend-net --name db \
  -e MYSQL_ROOT_PASSWORD=change-me mysql:5.7
docker run -d --net=backend-net --name api my-api
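To prove the segmentation works, compare name resolution from the default bridge with a container attached to backend-net; only the latter can reach the database by name, thanks to Docker's embedded DNS on user-defined networks:
# From the default bridge the db container is invisible
docker run --rm alpine:3.6 ping -c 1 db
# ping: bad address 'db'
# From backend-net the name resolves and the host answers
docker run --rm --net=backend-net alpine:3.6 ping -c 1 db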
Conclusion: Infrastructure is the First Line of Defense
Securing docker run commands is mandatory, but running secure containers on insecure infrastructure is futile. Low latency to the NIX (Norwegian Internet Exchange) is great for performance, but true isolation is required for security.
When you deploy on CoolVDS, you aren't just getting NVMe storage and high-speed uplinks; you are getting a KVM environment where you control the kernel, the modules, and the security policies. Don't let a shared-kernel environment be your single point of failure.
Ready to harden your stack? Deploy a KVM-based VPS on CoolVDS in under 55 seconds and take control of your kernel.