
Docker in Production: Why Your Default Container Config is a Security Nightmare Waiting to Happen

Stop Trusting Default Docker Settings: A Survival Guide for 2016

Let’s be honest. The moment Docker 1.0 dropped, we all stopped caring about dependency hell and started caring about how fast we could ship. I’ve seen it a dozen times this year alone: a developer spins up a MongoDB container, maps the ports to 0.0.0.0, mounts the docker socket, and calls it "production ready."

It is not ready. It is a ticking time bomb.

In the Norwegian hosting market, where we pride ourselves on stability and strict adherence to Datatilsynet’s guidelines, sloppy security doesn't fly. With the Safe Harbor agreement crumbling recently, data sovereignty is more critical than ever. If your container gets breached because you were too lazy to write a seccomp profile, that data leak is on you.

I’m going to show you how to lock this down. No fluff. Just the raw config flags and architectural decisions that separate the amateurs from the professionals.

The "Root" of All Evil

By default, the user inside a Docker container is root. If a malicious process breaks out of the container (and container breakout vulnerabilities are not a myth), they are root on your host. If you are running on cheap, shared-kernel hosting like OpenVZ, you are effectively handing the keys to the entire physical node to an attacker.

This is why at CoolVDS, we strictly use KVM virtualization. Even if a container is compromised, the attacker is trapped inside a hardware-virtualized kernel, not the host OS kernel. Isolation is not optional.
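
KVM saves you from the worst-case breakout, but the cheapest fix is to simply not run your process as root in the first place. Here is a minimal sketch, assuming a Debian-based image (the user and group names are mine, purely illustrative):

# Bake an unprivileged user into the image and switch to it
FROM debian:jessie
RUN groupadd -r app && useradd -r -g app app
USER app

If the image already ships a suitable account, you can also force it at runtime with docker run --user.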

Drop Capabilities

Most web apps don’t need to modify kernel modules or bind to system ports. Yet, Docker gives them these capabilities by default. Stop it.

When you run your Nginx or Node.js container, drop everything and add back only what you need. Here is how I run a standard web frontend:

docker run -d \
  --name frontend-app \
  --cap-drop=ALL \
  --cap-add=NET_BIND_SERVICE \
  --cap-add=SETUID \
  --cap-add=SETGID \
  --security-opt=no-new-privileges \
  -p 80:80 \
  nginx:1.9

The --security-opt=no-new-privileges flag (introduced in Docker 1.11) is a lifesaver. It prevents processes inside the container from gaining new privileges during execution, even if setuid binaries are present.
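
If you want proof that the drop actually took effect, check the effective capability mask of PID 1 inside the container. A quick sketch, assuming the image ships grep (the Debian-based nginx images do); the exact default mask depends on your engine version:

docker exec frontend-app grep CapEff /proc/1/status
# Expect a small mask like 00000000000004c0 (SETGID, SETUID, NET_BIND_SERVICE)
# rather than the stock Docker 1.x default of 00000000a80425fb.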

Immutable Infrastructure: Read-Only Filesystems

I recall a specific incident last month involving a WordPress site hosted in Oslo. An attacker exploited a plugin vulnerability to write a PHP shell into the /var/www/html directory. Game over.

If that container had been read-only, the exploit would have failed. Containers should be ephemeral. If you need persistence, mount a volume. Everything else should be immutable.

Here is how you force immutability:

docker run -d \
  --read-only \
  --tmpfs /run \
  --tmpfs /tmp \
  -v /mnt/coolvds_vol/logs:/var/log/nginx \
  nginx:alpine

Pro Tip: NVMe storage is fast, but it doesn't fix bad architecture. On CoolVDS NVMe instances, we see IOPS hit the roof during builds, but runtime I/O should be minimal if you structure your logging correctly. Don't let your container thrash your disk. Use tmpfs for temporary state.
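
--tmpfs also accepts mount options, so that temporary state can't balloon or be used to stage executables. A sketch; the size and flags are sane starting points, not gospel:

docker run -d \
  --read-only \
  --tmpfs /tmp:rw,noexec,nosuid,size=64m \
  nginx:alpine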

Network Segregation (The "Don't Talk to Strangers" Rule)

Linking containers with --link is a legacy feature on its way out; user-defined networks are the way forward. If your database container sits on the default bridge network, it is reachable from every other container on that bridge, including whatever a script kiddie has managed to compromise. Isolate it.

Create a dedicated backend network that is not exposed to the public interface:

docker network create --driver bridge backend_net

Then attach your database, ensuring it binds only to the internal container IP, not the public interface of your VPS.
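
Putting it together, here is a sketch of a database that never touches the public side (the names, tag, and volume path are illustrative):

# Database: member of backend_net only, no -p flags, nothing published
docker run -d \
  --name db \
  --net=backend_net \
  -v /mnt/coolvds_vol/pgdata:/var/lib/postgresql/data \
  postgres:9.4

# The app joins the same network and reaches the DB by container name
# (embedded DNS on user-defined networks, Docker 1.10+)
docker run -d --name api --net=backend_net my-api-image

With no published ports, nothing is listening on the public interface of your VPS; only containers attached to backend_net can reach port 5432.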

Example: Secure Postgres Configuration

Don't just rely on Docker networking. Configure your services to be paranoid. Inside your postgresql.conf (or a config file mounted into the container), don't listen more widely than you have to; if the container must bind to '*', the filtering has to happen at the network layer around it.

# postgresql.conf snippet
listen_addresses = '*'  # Controlled via Security Groups/Docker Network
port = 5432
max_connections = 100
ssl = on
ssl_cert_file = '/etc/ssl/certs/ssl-cert-snakeoil.pem'
ssl_key_file = '/etc/ssl/private/ssl-cert-snakeoil.key'
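
Pair that with a pg_hba.conf that only trusts the backend subnet and insists on TLS. The subnet below is an assumption; check what docker network inspect backend_net actually allocated for you:

# pg_hba.conf snippet
hostssl    all    all    172.18.0.0/16    md5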

Combined with CoolVDS's hardware firewall (which sits before the traffic even hits your eth0), this creates a defense-in-depth strategy. We’ve optimized our routing in the Oslo data center to ensure that internal VLAN traffic between your app and db containers incurs less than 0.5ms latency.

The Supply Chain Trap

Searching Docker Hub for "mysql" and pulling the first result with 5 stars is negligent. I’ve analyzed community images that contain outdated OpenSSL libraries vulnerable to Heartbleed-era exploits, or worse, hardcoded private keys.
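
One cheap habit that helps: let the engine refuse unsigned images outright. Docker Content Trust has been in the engine since 1.8; this sketch just flips it on for your shell (official images are signed, most random community images are not):

# Refuse to pull images that are not signed by their publisher
export DOCKER_CONTENT_TRUST=1
docker pull nginx:1.9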

The Rule: Only use Official images or build your own from alpine:3.3. Minimize the attack surface.

Base Image     | Size    | Vulnerability Risk | Verdict
ubuntu:14.04   | ~188 MB | High (bloated)     | Avoid for microservices
debian:jessie  | ~125 MB | Medium             | Acceptable
alpine:3.3     | ~5 MB   | Low                | Recommended
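
If you do build on Alpine, keep the image boring. A minimal sketch (the package, user, and binary names are illustrative):

FROM alpine:3.3
RUN apk add --update ca-certificates \
 && rm -rf /var/cache/apk/* \
 && adduser -D -H -s /sbin/nologin app
COPY myapp /usr/local/bin/myapp
USER app
ENTRYPOINT ["/usr/local/bin/myapp"]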

Why Your Host OS Matters

You can configure Docker perfectly, but if the kernel underneath is old or shared, you lose. This is the Dirty Little Secret of the VPS industry in 2016. Many providers are still selling OpenVZ slices as "Cloud VPS." They oversell RAM and CPU cycles, leading to "noisy neighbor" issues that kill your application's consistency.

At CoolVDS, we use KVM. Each instance runs its own kernel. You can load your own modules. You can enable SELinux or AppArmor profiles for your Docker daemon without asking for permission. This level of control is mandatory for compliance with heavy frameworks like PCI-DSS or when handling sensitive Norwegian user data.
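
That control is concrete, not theoretical. On a kernel you own, you can hand Docker a custom seccomp profile per container and pin an AppArmor profile alongside it. A sketch, assuming Docker 1.10+ and a profile you have written yourself at /etc/docker/seccomp-web.json (that path is my convention, not a default; docker-default is the engine's built-in AppArmor profile):

docker run -d \
  --security-opt seccomp:/etc/docker/seccomp-web.json \
  --security-opt apparmor:docker-default \
  nginx:1.9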

Auditing Your Config

Before you go live, audit your setup. There is a great tool gaining traction called Docker Bench for Security. It checks your host and your running containers against dozens of items from the CIS Docker benchmark.

docker run -it --net host --pid host --cap-add audit_control \
    -v /var/lib:/var/lib \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v /usr/lib/systemd:/usr/lib/systemd \
    -v /etc:/etc --label docker_bench_security \
    docker/docker-bench-security

Run this. If you see red warnings regarding "Container User" or "Memory Limits," fix them.
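
The memory-limit warning in particular is a one-flag fix. A sketch; the numbers are arbitrary and should be sized for your workload:

docker run -d \
  --name frontend-app \
  --memory=256m \
  --memory-swap=256m \
  --cpu-shares=512 \
  nginx:1.9

Setting --memory-swap equal to --memory disables swap for the container, so a leaky process gets OOM-killed instead of dragging your whole VPS into swap.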

Conclusion

Containerization is powerful, but it removes the safety nets that traditional VMs provided by default. You have to build the safety net yourself.

If you are serious about performance and security, you need a foundation that doesn't crumble under load. You need dedicated resources, KVM isolation, and a network backbone that understands the Nordic region's connectivity needs.

Don't let a default config compromise your infrastructure. Deploy a hardened KVM instance on CoolVDS today, and experience what true isolation feels like.