You Are Probably Doing Docker Wrong. Let's Fix It.
It is December 31, 2018. We have survived the year of Meltdown and Spectre, GDPR enforcement began back in May, and yet, when I SSH into client servers, I still see the same terrifying thing: dockerd running everything as root. If you are running containers in production without a security strategy, you aren't doing DevOps; you are just automating your own breach.
I have spent the last decade keeping servers alive across the Nordics, from high-traffic media sites in Oslo to financial data processors in Stockholm. I love containers. Docker 18.09 has made life easier. But convenience is the enemy of security. Containers share the host kernel. If you let a process break out of its namespace, it is game over.
Here is how to harden your container stack for 2019, keeping your latency low and your TCO lower.
1. The "Root" of All Evil
By default, the main process inside a Docker container runs as PID 1 with UID 0 (root). If an attacker achieves remote code execution (RCE) in your Node.js or Python app, they are root inside the container. If they then find a kernel vulnerability (like the Dirty COW exploit we saw a couple of years ago), they can break out to the host with root privileges.
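You can see this default for yourself on any box with Docker installed; `id` inside a stock container reports UID 0:

```shell
# A stock container runs its command as root unless told otherwise
docker run --rm alpine:3.8 id
# uid=0(root) gid=0(root) ...
```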
The Fix: Create a user. It takes two instructions in your Dockerfile.
```dockerfile
FROM alpine:3.8
# Create an unprivileged group and user
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
# Switch to the unprivileged user for all subsequent instructions
USER appuser
# Run the application
ENTRYPOINT ["./my-app"]
```
Pro Tip: Never map the Docker socket (/var/run/docker.sock) inside a container unless you absolutely know what you are doing. Giving a container access to the socket is effectively granting full root access to the host system.
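To see why, here is a sketch of the escalation path (for illustration only, never run this pattern in production): any container holding the socket can ask the daemon to start a sibling container with the host's root filesystem mounted.

```shell
# For illustration only: a container with the Docker socket can launch
# a sibling container that mounts the host's root filesystem...
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker:18.09 \
  docker run --rm -v /:/host alpine:3.8 cat /host/etc/shadow
# ...and read the host's password hashes. That is full host compromise.
```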
2. Drop Linux Capabilities
The Linux kernel divides the privileges traditionally associated with the superuser into distinct units, known as capabilities. Does your Nginx web server need to load kernel modules or change the system time? No.
By default, Docker grants a significant list of capabilities. You should adopt a "deny all, permit some" approach. We drop everything and add back only what is necessary.
```shell
docker run --cap-drop=ALL --cap-add=NET_BIND_SERVICE \
  --name web-server my-nginx-image
```
This command ensures that even if an attacker gains control of the process, the only privilege left is CAP_NET_BIND_SERVICE (binding ports below 1024). Everything else, from reconfiguring the network stack (NET_ADMIN) to writing audit log entries (AUDIT_WRITE), is gone.
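You can verify the result from inside the container by reading the effective capability bitmask in /proc/self/status. A quick sketch using a throwaway alpine container:

```shell
# Inspect the effective capability set after dropping everything
# except NET_BIND_SERVICE
docker run --rm --cap-drop=ALL --cap-add=NET_BIND_SERVICE alpine:3.8 \
  grep CapEff /proc/self/status
# Expect a mask with only bit 10 set (CAP_NET_BIND_SERVICE = 0x400):
# CapEff: 0000000000000400
```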
3. Read-Only Filesystems
Immutability is a core tenet of modern infrastructure. Your application logic should not be rewriting its own binaries. If a hacker gets in, the first thing they want to do is download a payload or overwrite a script.
Make their life miserable by mounting the root filesystem as read-only.
```shell
docker run --read-only --tmpfs /run --tmpfs /tmp my-app
```
Note that I added tmpfs mounts. Most apps still need to write temporary files or PID files. Using tmpfs keeps these in RAM, which is faster and keeps the disk clean—critical for maintaining those NVMe I/O speeds we cherish.
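A quick smoke test (a sketch using a throwaway alpine container) confirms the behaviour: writes outside the tmpfs mounts fail, while the tmpfs paths stay writable.

```shell
# Writes to the root filesystem should fail; writes to /tmp should succeed
docker run --rm --read-only --tmpfs /tmp alpine:3.8 sh -c \
  "touch /usr/evil-payload || echo 'write blocked'; \
   touch /tmp/scratch && echo 'tmpfs ok'"
```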
4. The Infrastructure Layer: Why Virtualization Still Matters
This is where many "cloud-native" zealots get it wrong. They think containers replace virtualization. They don't. Containers are process isolation; Virtual Machines (VMs) are hardware isolation.
If you run Docker on a shared-kernel VPS technology (like OpenVZ or LXC), you are relying entirely on software namespaces to protect you from the "noisy neighbor" or the malicious tenant next door. If the kernel crashes, everyone on that physical node goes down.
Comparison: Container Hosting Architectures
| Feature | Shared Kernel (OpenVZ/LXC) | Hardware Virtualization (KVM) |
|---|---|---|
| Kernel Isolation | Shared (Weak) | Dedicated (Strong) |
| Docker Support | Often limited / hacky | Native / full |
| Security | Vulnerable to host kernel panic | Isolated OS environment |
This is why at CoolVDS, we exclusively use KVM (Kernel-based Virtual Machine) for our instances. When you spin up a Docker host with us, you get your own kernel. This is vital for GDPR compliance. You need to be able to tell Datatilsynet that your customer data is strictly isolated, not floating in a shared memory space with fifty other companies.
5. Network Segmentation and Local Latency
Don't just bind ports to 0.0.0.0. If you are running a database container that only your app container needs to talk to, use a user-defined bridge network. Don't expose the database port to the public internet.
```shell
# Create a private network
docker network create backend-net
# Run the DB strictly on this network
docker run -d --name db --network backend-net mongo:4.0
# Run the app on the same network
docker run -d --name app --network backend-net -p 80:80 my-app
```
Furthermore, consider where your packets are physically traveling. If your primary user base is in Norway or Northern Europe, hosting in a US data center is asking for latency issues. Physics is stubborn; light only travels so fast.
CoolVDS infrastructure is optimized for the Nordic region. We peer directly at NIX (Norwegian Internet Exchange). When your API responds in 15ms versus 150ms, your users notice. Your SEO notices. And frankly, your developers notice when their SSH sessions don't lag.
Conclusion: Discipline over Hype
2019 will bring new orchestration tools. Kubernetes is becoming the de facto standard for large clusters, but for many of us, a well-hardened Docker engine on a robust Linux VPS is the sweet spot of price and performance.
Security isn't a product you buy; it's a process you adhere to. Run as non-root. Drop capabilities. Use read-only filesystems. And ensure your underlying infrastructure uses real hardware virtualization.
Ready to lock down your stack? Don't risk your data on oversold, shared-kernel hosting. Deploy a KVM-based CoolVDS instance in 55 seconds and get the dedicated resources your containers deserve.