Container Security: Locking Down Docker 1.11 on Production Systems
Let's be honest. The excitement around Docker in the Oslo tech scene right now is palpable. I was at a meetup in Grünerløkka last week, and everyone is "containerizing" everything. Monoliths are being chopped up, and microservices are the new religion.
But there is a dirty secret nobody mentions in the README.md.
Most of you are deploying containers that are essentially root-access backdoors into your host servers.
I have spent the last month auditing infrastructure for a mid-sized Norwegian ecommerce platform. They were proud of their CI/CD pipeline. I was terrified. They were mounting /var/run/docker.sock into publicly accessible web containers. One remote code execution (RCE) vulnerability in their web app, and the attacker would own the entire host node.
If you value your uptime, and your standing with Datatilsynet (the Norwegian Data Protection Authority), you need to stop treating containers like lightweight VMs. They are shared-kernel processes. Here is how we secure them on CoolVDS infrastructure.
1. The "Root" of All Evil
By default, the user inside a Docker container is root. If that process breaks out of the container (and kernel exploits happen), they are root on your host. This is a disaster waiting to happen.
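To see the default for yourself, any small throwaway image will do (alpine here is just an example):

# The default user inside a container is UID 0
docker run --rm alpine id
# uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),...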
In Docker 1.10+, and refined in the current 1.11 release, we finally have stable User Namespaces. This allows the root user inside the container to map to a non-privileged user on the host.
You need to edit your Docker daemon configuration. On Ubuntu 16.04 (Xenial), which uses systemd, /etc/default/docker is no longer the place for this: put daemon options in /etc/docker/daemon.json, or add a systemd drop-in if you need to change the flags passed in ExecStart.
Configuration Fix:
# /etc/docker/daemon.json
{
"userns-remap": "default"
}
When you restart the daemon, Docker creates a user named dockremap. Now, if an attacker breaks out, they land on the host as a high-numbered, unprivileged UID with no permissions on the host filesystem.
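A minimal verification sketch, assuming the daemon.json above is in place and an alpine image is available (the exact subordinate UID ranges on your host will differ):

# Restart the daemon so the remap takes effect
sudo systemctl restart docker
# dockremap should now exist, with subordinate UID/GID ranges assigned to it
grep dockremap /etc/passwd /etc/subuid /etc/subgid
# Remapped state lives under a UID.GID-suffixed data directory
sudo ls -d /var/lib/docker/*.*
# Root inside the container shows up on the host as a high, unprivileged UID
docker run -d --name userns-test alpine sleep 300
ps -eo uid,pid,cmd | grep "[s]leep 300"
docker rm -f userns-test

One gotcha: after enabling the remap, previously pulled images and existing containers appear to vanish, because Docker switches to the new per-namespace data directory.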
2. Dropping Capabilities (The Blunt Force Approach)
Does your Nginx container really need to modify kernel modules? No. Does your Redis instance need to audit system logs? Absolutely not.
The Linux kernel divides root privileges into distinct units called capabilities. Docker drops some by default, but not enough. The "paranoia mode" approach, which I recommend, is to drop everything and add back only what is strictly necessary.
Here is how you should run a web server container:
docker run -d \
--cap-drop=ALL \
--cap-add=NET_BIND_SERVICE \
--cap-add=SETUID \
--cap-add=SETGID \
--name secure-nginx \
nginx:1.10
This command strips the container of the ability to change file ownership, insert kernel modules, or reboot the system. It can only bind to a network port and manage its own user IDs.
Pro Tip: If you are unsure which capabilities your app needs, run it with --cap-drop=ALL in a staging environment and watch the logs. When it crashes, check dmesg or /var/log/syslog to see what permission was denied.
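To double-check what a running container actually ended up with, you can read its capability bitmask and decode it on the host. This is a rough sketch: capsh ships in Ubuntu's libcap2-bin package, the container name is the one from the example above, and the exact hex value depends on the caps you added.

# Effective capabilities of the container's PID 1, as a hex bitmask
docker exec secure-nginx grep CapEff /proc/1/status
# CapEff: 00000000000004c0
# Decode the bitmask on the host
capsh --decode=00000000000004c0
# 0x00000000000004c0=cap_setgid,cap_setuid,cap_net_bind_service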
3. Limiting Resources to Prevent DoS
A "Fork Bomb" inside a container can bring down a host server in seconds if you don't enforce limits. This is noisy neighbor syndrome, and while CoolVDS isolates your VPS with KVM to prevent other customers from impacting you, your own containers can still impact each other.
Never run a container without memory and CPU limits. Never.
docker run -d \
--memory="512m" \
--memory-swap="1g" \
--cpuset-cpus="0,1" \
--pids-limit=100 \
my-app:latest
The --pids-limit flag is particularly useful (new in Docker 1.11). It prevents a process from spawning infinite child processes, effectively neutralizing fork bombs.
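If you want to confirm the limits actually stuck, docker inspect exposes them on the container's HostConfig. A quick sketch, assuming you also passed --name my-app when starting the container above (field names as of the Docker 1.11 API):

# Memory limit (bytes), swap limit, pinned CPUs and PID ceiling
docker inspect -f 'mem={{.HostConfig.Memory}} swap={{.HostConfig.MemorySwap}} cpus={{.HostConfig.CpusetCpus}} pids={{.HostConfig.PidsLimit}}' my-app
# mem=536870912 swap=1073741824 cpus=0,1 pids=100
# Live usage against those limits
docker stats --no-stream my-app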
4. Read-Only Filesystems
Immutability is a core concept of modern infrastructure. Once a container is running, it shouldn't be patching itself or writing config files. If an attacker compromises your application, the first thing they will want to do is download a payload or modify a binary.
Make their life miserable by mounting the root filesystem as read-only:
docker run --read-only \
-v /my/data:/data:rw \
-v /tmp:/tmp:rw \
my-app
With this flag, the attacker cannot write to /bin, /usr, or /etc. You explicitly whitelist the directories that need write access (like /data or /tmp) using volumes.
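A quick way to convince yourself the flag works, using throwaway alpine containers (the host path /tmp/scratch is just an example):

# Writes to the root filesystem are rejected
docker run --rm --read-only alpine touch /etc/pwned
# touch: /etc/pwned: Read-only file system
# A whitelisted volume is still writable
docker run --rm --read-only -v /tmp/scratch:/data:rw alpine sh -c 'touch /data/ok && echo write ok'

Since Docker 1.10 you can also pass --tmpfs /tmp for scratch space, which keeps even temporary writes in memory instead of on a host path.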
5. The CoolVDS Architecture Advantage
There is a debate in the hosting world right now: OpenVZ/LXC vs. KVM.
Many budget providers use OpenVZ. In that setup, you are effectively running containers inside a container. All customers share the same kernel. If a kernel exploit is found (and they are found all the time), your data is at risk from other customers on the node.
This is why CoolVDS exclusively uses KVM (Kernel-based Virtual Machine).
When you spin up a CoolVDS instance, you get your own dedicated kernel. You can enable SELinux, load your own modules, and configure your Docker daemon exactly how you want without asking permission. It provides the hard isolation required for GDPR compliance and serious production workloads.
6. Network Segregation
Don't rely on the default Docker bridge (docker0) for everything. Create segmented networks for your database and your web tier.
# Create a backend network
docker network create --driver bridge --internal backend-net
# Connect your database (no internet access; MySQL needs a root password to start)
docker run -d --net=backend-net --name db -e MYSQL_ROOT_PASSWORD=changeme mysql:5.7
# Connect your web app to the default bridge for internet, then attach the backend network
docker run -d -p 80:80 --name web my-web-app
docker network connect backend-net web
In this setup, the database has no route to the outside world. Even if someone forgets to configure the firewall, the Docker network driver prevents external routing to the DB container.
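To verify the segregation, a throwaway container attached to the same network is enough (alpine again; 8.8.8.8 is just a convenient external address):

# No route to the outside world from the internal network
docker run --rm --net=backend-net alpine ping -c 1 8.8.8.8
# ping: sendto: Network unreachable
# But the database is still reachable by name via Docker's embedded DNS
docker run --rm --net=backend-net alpine ping -c 1 db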
Summary
The "it works on my machine" mentality doesn't fly when you are handling customer data. Norway's privacy laws are strict, and your reputation is fragile.
- Use User Namespaces.
- Drop Capabilities.
- Enforce Resource Limits.
- Use Read-Only filesystems where possible.
- Host on KVM, not shared kernels.
Security adds overhead, and the extra logging and encryption layers need fast I/O to avoid lag. That is why we built CoolVDS on pure NVMe storage. We give you the raw speed you need to run secure configurations without the performance penalty.
Need a safe sandbox to test your hardened Docker configurations? Deploy a KVM instance on CoolVDS in under 55 seconds.