The Container Hype Train Has No Brakes, but You Need Airbags
It is October 2016. If you have attended any tech meetup in Oslo recently, the room was likely buzzing about Docker 1.12 and the new Swarm mode. Developers are ecstatic. They can package their spaghetti code into a nice little box and ship it. Works on my machine, works in production, right? Wrong.
As a sysadmin who has spent the last decade cleaning up after "move fast and break things" deployments, I look at the current state of container security and I get nervous. The default Docker configuration prioritizes usability over security. If you run docker run blindly in production, you are handing a loaded gun to potential attackers. We are seeing a massive shift in the European hosting landscape, especially here in Norway where data integrity is scrutinized by Datatilsynet. You cannot afford a breach just because you were too lazy to configure cgroups.
The Shared Kernel Fallacy
The biggest lie in the industry right now is that containers are "lightweight VMs." They are not. They are processes on steroids. They share the host kernel. If a process inside a container manages to trigger a kernel panic, your whole server goes down. If they find a privilege escalation exploit in the Linux kernel—and let's be honest, 2016 has been rife with them—they own the host. They own your data. They own your customers.
This is why at CoolVDS, we are adamant about the infrastructure layer. We don't oversell "container hosting" on shared bare-metal kernels. We provide KVM-based VPS instances here in Norway. KVM gives you a hardware-level virtualization boundary. If you run Docker inside a KVM instance, an attacker breaking out of the container is still trapped inside the VM. It’s defense in depth. You need that layer of isolation, especially with the upcoming General Data Protection Regulation (GDPR) looming over the EU.
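If you want to confirm what isolation layer you are actually on, both Ubuntu 16.04 and CentOS 7 ship systemd tooling that can tell you. A quick sketch (the exact strings returned depend on the hypervisor):

```shell
# Prints the virtualization technology the guest runs under.
# On a KVM instance this is "kvm"; on shared-kernel container
# hosting you will see "openvz" or "lxc" -- no hardware boundary.
systemd-detect-virt

# Cross-check from the CPU side: the "hypervisor" flag is set
# inside a hardware virtual machine.
grep -qm1 '^flags.*hypervisor' /proc/cpuinfo \
  && echo "hardware virtualization detected"
```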
1. Drop Capabilities (Don't Be Root)
By default, Docker containers run with a terrifying amount of privilege. They retain capabilities like NET_RAW (allowing packet crafting) or SYS_CHROOT. Most web applications do not need these. They just need to listen on a port and write to a log file. The Golden Rule of 2016 DevOps: Least Privilege.
Stop running containers as root. If you must, drop the capabilities they don't need. Here is how I run a standard Nginx container when I actually care about security:
docker run -d \
--name secure-web \
--read-only \
--cap-drop ALL \
--cap-add NET_BIND_SERVICE \
--cap-add SETUID \
--cap-add SETGID \
--tmpfs /run \
--tmpfs /tmp \
nginx:1.10-alpine
Notice the --read-only flag? That makes the container's root filesystem immutable. If an attacker manages to exploit a PHP vulnerability, they cannot download a rootkit or modify system binaries because the disk won't let them write. We map /tmp and /run to memory (tmpfs) because some services need a scratchpad, but that data disappears on restart.
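Trust, but verify. The container name below matches the example above; the hex value quoted in the comment is the capability bitmask Docker grants by default in this era, so a locked-down container should show far fewer bits set:

```shell
# Effective capabilities of PID 1 inside the container, as a hex
# bitmask. The stock Docker default is commonly 00000000a80425fb;
# with --cap-drop ALL plus our three adds, the value shrinks a lot.
docker exec secure-web grep CapEff /proc/1/status

# Prove the root filesystem really is immutable -- this write
# must fail (only the tmpfs mounts at /run and /tmp are writable):
docker exec secure-web touch /etc/test-write \
  || echo "read-only root confirmed"
```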
2. Isolate User Namespaces
If you are not using User Namespaces (userns) in Docker 1.11+, you are wrong. Without this, UID 0 (root) inside the container is UID 0 on the host. If they break out, they are root on your server.
With User Namespaces, you map the container's root user to a high-number non-privileged user on the host. It’s a pain to set up, but it is necessary. You need to edit your Docker daemon configuration. Since we are all moving to systemd on Ubuntu 16.04 and CentOS 7, here is the clean way to do it.
Configuring daemon.json
Create or edit /etc/docker/daemon.json. This file is the future of Docker config, replacing those messy /etc/default/docker environment variables.
{
"userns-remap": "default",
"ipv6": false,
"icc": false,
"log-driver": "json-file",
"log-opts": {
"max-size": "10m",
"max-file": "3"
}
}
Pro Tip: Setting icc (Inter-Container Communication) to false prevents containers from talking to each other unless explicitly linked. This stops lateral movement if one service gets compromised.
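Before restarting the daemon, sanity-check your work. The sketch below assumes the default "dockremap" remap user and a typical 165536 subordinate-ID offset; your distro's useradd defaults may assign a different range:

```shell
# A syntax error in daemon.json stops the Docker daemon from
# starting at all, so validate the JSON first:
python -m json.tool < /etc/docker/daemon.json > /dev/null \
  && echo "daemon.json OK"

# "userns-remap": "default" makes Docker create a dockremap user,
# which needs subordinate ID ranges in /etc/subuid and /etc/subgid:
grep dockremap /etc/subuid /etc/subgid

# userns-remap cannot be live-reloaded; restart the daemon:
systemctl restart docker

# Your existing images and containers vanish from view: the data
# root moves to a per-mapping subdir, e.g. /var/lib/docker/165536.165536
docker info | grep -i 'root dir'
```

Don't panic when `docker images` comes back empty afterwards; the old data is still on disk under the original path, just no longer used by the remapped daemon.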
3. Network Segmentation and Latency
Docker's default bridge network is convenient, but it performs NAT (Network Address Translation). In high-load environments, that user-land proxy (docker-proxy) eats CPU cycles and adds latency. For the "Performance Obsessive" crowd reading this: every millisecond of latency kills your conversion rate.
If you are hosting on CoolVDS, you have access to blazing fast NVMe storage and high-bandwidth uplinks. Don't bottleneck that with bad network config. For high-throughput services, consider using --net=host, but be aware of the security trade-off (no network isolation). A better middle ground in 2016 is creating specific bridge networks for application stacks.
docker network create --driver bridge --subnet 172.18.0.0/16 app_tier
docker network create --driver bridge --subnet 172.19.0.0/16 db_tier
Put your web server on both. Put your database only on db_tier. Now the internet cannot route directly to your MySQL port, even if you accidentally exposed it. This internal segmentation is critical for compliance.
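Wiring containers into those tiers might look like the sketch below. The container names, image tags, and password are illustrative; note that a container starts on one network and joins the second via docker network connect:

```shell
# Web container starts on the app tier, then also joins the db tier:
docker run -d --name web --net app_tier nginx:1.10-alpine
docker network connect db_tier web

# Database lives only on the internal tier. No -p flag means
# nothing is ever published on the host's public interface:
docker run -d --name db --net db_tier \
  -e MYSQL_ROOT_PASSWORD=changeme mysql:5.6

# User-defined bridges get embedded DNS, so "db" resolves by name:
docker exec web ping -c 1 db
```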
4. The Host OS: Less is More
The safest container host is one that does nothing else. Do not run your email server, your FTP, and your Docker host on the same OS. Use a minimal OS. Alpine Linux is great inside the container, but for the host, a stripped-down CentOS 7 or Ubuntu 16.04 server edition is standard.
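A quick way to audit how "minimal" your host really is, using nothing but standard systemd and iproute2 tooling:

```shell
# Every enabled service is attack surface. On a dedicated Docker
# host this list should be short: sshd, auditd, a time daemon,
# and docker itself. Anything else deserves a justification.
systemctl list-unit-files --state=enabled --type=service

# Same story from the network side -- every listening socket is
# a door. -t TCP, -l listening, -n numeric, -p owning process:
ss -tlnp
```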
You need to audit the host itself. Auditd is your friend. If you are serious about logging (and you should be, considering the shifting legal landscape in Europe regarding data trails), set up audit rules to watch the Docker daemon.
Audit Rules for Docker (`/etc/audit/audit.rules`)
# Audit docker daemon
-w /usr/bin/docker -k docker
-w /var/lib/docker -k docker
-w /etc/docker -k docker
-w /usr/lib/systemd/system/docker.service -k docker
-w /etc/default/docker -k docker
Restart the audit daemon. Now you have a forensic trail if someone tries to tamper with your container runtime.
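Loading the rules and pulling the trail later looks roughly like this. One gotcha worth knowing: on CentOS 7, auditd refuses systemctl restart by design, so you need the legacy service wrapper:

```shell
# auditd on CentOS 7 must be restarted via the legacy wrapper:
service auditd restart

# Confirm the watches are actually loaded:
auditctl -l | grep docker

# Forensics later: every event tagged with our key. A rogue edit
# of /etc/docker shows up with the command and UID that did it:
ausearch -k docker --start today
```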
Why Infrastructure Matters
You can tweak JSON files all day, but physical location and hardware quality are the foundation of security. Latency to the Norwegian market matters. Data residency matters. Hosting your containers on a budget provider with spinning rust (HDD) in a data center halfway across the world is a liability.
CoolVDS offers managed hosting options and raw VPS power with DDoS protection included. We use KVM because we believe in hard isolation. We use NVMe because we hate I/O wait. When you deploy a container on our infrastructure, you aren't fighting for disk IOPS with 500 other users. You get dedicated resources.
Security isn't a product you buy; it's a process you follow. Start by locking down your daemon, segmenting your networks, and choosing a host that understands the technical demands of 2016.
Ready to build a fortress? Deploy a high-performance, KVM-isolated instance on CoolVDS today and sleep better tonight.