Container Security in 2014: Why Your Docker Strategy Might Be Dangerous
Let’s be honest: 2014 has been a nightmare for sysadmins. First, we dealt with Heartbleed in April. Then, just as we caught our breath, Shellshock hit us in September. Now, everyone is rushing to deploy Docker 1.3 because it “simplifies deployment,” but few are talking about what happens when—not if—a container gets compromised.
I see it every day in the DevOps community here in Oslo. Developers hand over a Dockerfile and assume it’s a sandbox. It is not. Containers do not contain. Unlike the KVM virtualization we use at CoolVDS, which provides a hardware-level boundary, standard Linux containers (LXC/Docker) share the host kernel. If you are running a container as root on a shared kernel, you are one ioctl call away from a disaster.
The "Root" of the Problem
The Docker daemon runs as root. This is the single biggest security risk in the current container ecosystem. If a malicious actor breaks out of the container, they don't just get access to a virtual machine; they potentially get root access to the host server. In a shared hosting environment (like old-school OpenVZ providers), this is catastrophic.
Until user namespaces mature in the upstream kernel, you need to be aggressive about dropping capabilities. By default, Docker grants a wide array of capabilities that web applications simply do not need.
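Before you start stripping capabilities, it helps to see what a container is actually holding. Here is a rough check using docker exec (new in 1.3); the container name is just an example, and capsh comes from the libcap tools (libcap2-bin on Debian/Ubuntu):
# Read the capability bitmasks of PID 1 inside a running container
docker exec my_app grep ^Cap /proc/1/status
# Decode the CapEff hex value into human-readable capability names
capsh --decode=<hex_value_from_above>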
Hardening Runtime Privileges
Don't run containers with default flags. Strip everything and add back only what is necessary. Here is how I deploy Nginx containers to handle traffic coming from NIX (Norwegian Internet Exchange):
# The wrong way (far too common)
docker run -d -p 80:80 nginx
# The battle-hardened way
# Note: with --read-only, nginx still needs a few writable scratch paths
# (its cache directory and pid file), so give those anonymous volumes
docker run -d \
--name web_nix_01 \
--read-only \
--cap-drop=ALL \
--cap-add=NET_BIND_SERVICE \
--cap-add=SETGID \
--cap-add=SETUID \
-v /var/cache/nginx \
-v /var/run \
-v /var/www/html:/usr/share/nginx/html:ro \
-p 80:80 \
nginx
Notice the --read-only flag? It forces the container's root filesystem to be read-only; the only writable paths left are the small anonymous volumes nginx needs for its cache and pid file. If an attacker exploits a vulnerability in your web app, they cannot write a backdoor to the system files. This is basic hygiene that most deployments skip.
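Don't just trust the flag, test it. A quick smoke test against the container above; the write should be refused with a read-only filesystem error:
# Try to plant a file in a system path; on a read-only rootfs this must fail
docker exec web_nix_01 touch /etc/backdoor.sh
# Expect an error along the lines of "Read-only file system"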
Network Isolation and the Iptables Headache
Docker manages iptables rules dynamically, which often conflicts with existing firewall scripts (like ufw or custom chains). In a production environment, you cannot rely on Docker's default bridge network to provide security between containers.
If you have a database container linked to a web container, you must ensure that only the web container can talk to the database port. The --link feature is convenient, but for true security, we need to look at the raw iptables.
# Check your forwarding chain
iptables -L FORWARD -n -v
# Restrict traffic between containers on the default bridge (docker0)
# Drop inter-container communication unless explicitly allowed
# (roughly what starting the daemon with --icc=false does for you)
iptables -I FORWARD -i docker0 -o docker0 -j DROP
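With inter-container traffic dropped, punch exactly one hole for the path you need. The container names and 172.17.x addresses below are placeholders; look the real ones up first:
# Find the bridge IPs of the web and database containers
docker inspect --format '{{ .NetworkSettings.IPAddress }}' web_nix_01
docker inspect --format '{{ .NetworkSettings.IPAddress }}' db01
# Allow established flows back, then allow only web -> db on 3306.
# Both rules are inserted with -I so they land above the DROP.
iptables -I FORWARD -i docker0 -o docker0 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -I FORWARD -i docker0 -o docker0 -s 172.17.0.2 -d 172.17.0.3 -p tcp --dport 3306 -j ACCEPT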
Pro Tip: Never expose your database ports (3306, 5432) to the public interface (0.0.0.0). Bind them to the specific internal IP of your virtual interface. On CoolVDS instances, we recommend setting up a private VPN interface (tun0) for administrative access rather than opening SSH to the world.
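In Docker terms that means giving -p an explicit bind address instead of letting it default to 0.0.0.0. A sketch; the 10.8.0.1 address stands in for your tun0 IP, and the container name and password are placeholders:
# Publish MySQL only on the private VPN interface
docker run -d --name db_internal \
-e MYSQL_ROOT_PASSWORD=change_me \
-p 10.8.0.1:3306:3306 \
mysql:5.6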
The Storage I/O Bottleneck
Security often comes with a performance cost. When you use Copy-on-Write (CoW) filesystems like AUFS or DeviceMapper (which Docker uses by default), high-write applications (like MySQL or MongoDB) can suffer severe latency penalties. This latency is exacerbated if your underlying host is running on spinning rust (HDD).
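Check which driver your host is actually using before you blame the disk:
# Identify the storage driver in use (typically aufs or devicemapper in 2014)
docker info | grep -i 'storage driver'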
To mitigate this, you should bypass the CoW driver for data directories using Data Volumes. However, the physical disk speed matters enormously here.
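As a sketch (the host path is an example; point it at your fastest volume), mounting the data directory as a volume keeps every InnoDB write off the CoW layer:
# /var/lib/mysql lives directly on the host filesystem, bypassing AUFS/DeviceMapper
docker run -d --name db01 \
-e MYSQL_ROOT_PASSWORD=change_me \
-v /srv/mysql/data:/var/lib/mysql \
mysql:5.6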
Optimizing MySQL on Docker
# my.cnf optimization for 2014 hardware
[mysqld]
innodb_buffer_pool_size = 2G
innodb_log_file_size = 256M
innodb_flush_log_at_trx_commit = 2 # Slight risk, better performance
innodb_flush_method = O_DIRECT
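To apply this without baking a custom image, mount the file read-only into the container, e.g. add -v /srv/mysql/conf/tuning.cnf:/etc/mysql/conf.d/tuning.cnf:ro to the db01 run above (on the official mysql:5.6 image the default config pulls in /etc/mysql/conf.d/; check your image, the paths are illustrative). Then confirm the values are live:
# Verify the tuned settings inside the running container
docker exec db01 mysql -uroot -pchange_me -e "SHOW VARIABLES LIKE 'innodb_flush%';"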
Running this on a standard VPS with magnetic storage will result in I/O wait times that drag page loads down and, with them, your search rankings. This is where hardware choice becomes a security feature: availability is part of the CIA triad. If your server locks up due to I/O wait during a DDoS attack, you are insecure.
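You don't have to guess at this. Watch %iowait and per-device await while the database is under load (iostat is part of the sysstat package on most distros):
# Extended device stats every 5 seconds; sustained high await on the data volume = trouble
iostat -x 5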
CoolVDS uses pure SSD arrays with high-throughput controllers. While traditional VPS providers in Europe are still rotating platters at 7200 RPM, we are delivering random I/O performance that allows your database to write logs instantly, reducing the window of vulnerability during crash recovery.
Data Sovereignty and Datatilsynet
We operate under the Norwegian Personal Data Act (Personopplysningsloven). If you are storing customer data for Norwegian businesses, you need to know exactly where that data lives. Using public cloud containers where data might drift across borders is a compliance risk.
When you use a container, you are abstracting the OS, but the data physically resides on a disk. If that disk is shared with a "noisy neighbor" who is under investigation, your service could suffer collateral damage. This is why KVM (Kernel-based Virtual Machine) is superior to OpenVZ/LXC for commercial hosting.
| Feature | Shared Kernel (OpenVZ/LXC) | Hardware Virtualization (KVM/CoolVDS) |
|---|---|---|
| Kernel Isolation | None (Shared with Host) | Complete (Separate Kernel) |
| Docker Support | Limited / Difficult | Native / Full Support |
| Security Risk | High (Kernel Panic kills all) | Low (Isolated environment) |
| Swap Management | Often unavailable | Full control |
The Verdict: Virtualize First, Containerize Second
Docker is a fantastic tool for application packaging, but in late 2014, it is not yet a security boundary you can trust with your life. The safest architecture is to wrap your containers inside a true Virtual Machine.
By deploying on CoolVDS, you get a dedicated KVM instance. You own the kernel. You own the firewall tables. If you want to run Docker inside that, go ahead; you are protected from the other tenants on the physical node. Don't risk your reputation on shared-kernel hosting just to save a few kroner.
Secure your infrastructure today. Deploy a KVM-based, SSD-powered instance on CoolVDS in under 55 seconds and build your Docker registry on solid ground.