You are probably running root on the internet right now.
It is late 2018. GDPR has been enforceable since May. The dust from Meltdown and Spectre is still settling on our server racks. Yet, I still see senior engineers deploying containers with docker run -d -p 80:80 nginx and calling it a day. If you are doing this in production, especially here in Norway where data privacy laws are stricter than a winter on Svalbard, you are negligent.
Containers are not virtual machines. They are processes with a fancy worldview. By default, Docker prioritizes usability over security. It hands you a loaded gun, points it at your foot, and removes the safety.
I have spent the last six months migrating a high-traffic fintech platform in Oslo from legacy bare metal to a containerized microservices architecture. We learned the hard way that isolation is a spectrum, not a binary switch. Here is how we locked it down without destroying the performance gains we get from CoolVDS NVMe instances.
1. The Kernel Shared State Problem
The biggest lie in the industry is that containers "contain" everything. They do not. They share the host kernel. If a vulnerability exists in the syscall interface (like the recent flurry of kernel exploits this year), a process inside the container can panic the host or, worse, escape onto the node.
This is why the underlying infrastructure matters. We strictly use KVM (Kernel-based Virtual Machine) at CoolVDS. Why? Because KVM provides a hardware-assisted virtualization layer. If you run Docker on top of a shared kernel OpenVZ container, you are asking for trouble. You want your Docker host to have its own kernel, isolated from the noisy neighbor next door.
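Not sure what your provider actually gave you? A quick check from inside the host (systemd-detect-virt ships with systemd; virt-what is a separate package and is listed here only as a fallback):

```bash
# Prints "kvm" on a KVM guest, "openvz" or "lxc" on shared-kernel containers.
systemd-detect-virt
# Fallback if systemd-detect-virt is not available:
sudo virt-what
```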
Pro Tip: Check your kernel version immediately. If you are not on at least 4.15 (Ubuntu 18.04 Bionic), you are missing critical backports for Spectre mitigation. Run `uname -r` now.
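On a 4.15+ kernel you can also read the mitigation status straight out of sysfs. A minimal check, assuming the vulnerability files exist (they only appear on kernels that carry the mitigation patches):

```bash
# Kernel version first, then per-vulnerability mitigation status.
uname -r
grep . /sys/devices/system/cpu/vulnerabilities/*
```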
2. Drop Capabilities (The Sledgehammer Approach)

By default, Docker containers retain a terrifying set of Linux capabilities. They can manipulate the network stack, change file ownership, and write to the kernel audit log. A web server does not need to write audit logs.
We use the "whitelist" approach: Drop everything, then add back only what is necessary.
Here is a snippet from a hardened docker-compose.yml file we used for a Node.js backend:
```yaml
version: '3.4'
services:
  api:
    image: node:10-alpine
    cap_drop:
      - ALL
    cap_add:
      - NET_BIND_SERVICE
    read_only: true
    tmpfs:
      - /tmp
    security_opt:
      - no-new-privileges:true
```

Breakdown:
- `cap_drop: - ALL`: Strips all kernel privileges. The container is neutered.
- `read_only: true`: Makes the container filesystem immutable. If an attacker manages to inject a script, they cannot write it to disk.
- `security_opt: - no-new-privileges:true`: Prevents processes from gaining more privileges (e.g., via setuid binaries) during execution.
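Not on Compose? The same hardening maps onto plain `docker run` flags. A minimal sketch; `your-api-image:1.0` is a placeholder for your own service image:

```bash
docker run -d \
  --cap-drop=ALL \
  --cap-add=NET_BIND_SERVICE \
  --read-only \
  --tmpfs /tmp \
  --security-opt no-new-privileges:true \
  your-api-image:1.0
```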
3. Stop Using "latest" and Root
If your Dockerfile ends with CMD ["npm", "start"] and you haven't specified a user, you are running as root. If that node process gets compromised, the attacker is root inside the namespace. Combined with a kernel exploit, they are root on your server.
Fixing this takes three lines of code. Do not be lazy.
```dockerfile
FROM alpine:3.8

# Install the runtime while still root (alpine:3.8 ships no Node by itself)
RUN apk add --no-cache nodejs

# Create a group and user
RUN addgroup -S appgroup && adduser -S appuser -G appgroup

# Tell Docker to switch context
USER appuser
WORKDIR /home/appuser

COPY . .
CMD ["node", "index.js"]
```

We recently audited the setup of a client who was complaining about "slow performance" on a competitor's cloud. It turned out they were running a crypto-miner inside a compromised Redis container running as root. The attacker had mounted the host's /etc directory. It was a bloodbath. On CoolVDS, we monitor for abnormal CPU spikes, but we can't patch your bad Dockerfiles.
4. Network Segmentation and the Oslo Latency
In Norway, we are lucky to have the NIX (Norwegian Internet Exchange). Latency within the country is practically zero. But inside your Docker host, the default bridge network (`docker0`) is a flat network. Every container can talk to every other container.
If your frontend is compromised, it should not be able to ping your database directly. Use user-defined networks.
```bash
# Create isolated networks
docker network create --driver bridge frontend_net
docker network create --driver bridge backend_net

# Attach containers strictly
docker network connect frontend_net nginx_proxy
docker network connect backend_net database_01
```

This creates specific iptables rules that prevent cross-talk. It is basic hygiene.
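Verify the isolation instead of trusting it. A simple smoke test, reusing the container names from the example above and assuming the proxy image ships a ping binary (busybox/alpine images do):

```bash
# The proxy should not even resolve the database's name across networks.
docker exec nginx_proxy ping -c 1 -W 1 database_01 \
  && echo "FAIL: frontend can reach the database" \
  || echo "OK: backend is isolated"
```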
5. Secrets Management (GDPR Compliance)
I still see developers passing database passwords as environment variables: -e DB_PASS=hunter2. This is visible in docker inspect. It gets logged in command history. It is a violation of GDPR Article 32 (Security of processing).
If you are using Docker Swarm (which has matured significantly in 2018), use Docker Secrets. If you are just running plain Docker, mount secrets as files on a tmpfs volume so they never touch the NVMe storage physically.
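Two quick sketches, depending on your setup; `db_pass`, the paths, and `your-api-image:1.0` are placeholders:

```bash
# Docker Swarm: the secret lives encrypted in the Raft log and appears
# inside the service as a file under /run/secrets/.
printf 'hunter2' | docker secret create db_pass -
docker service create --name api --secret db_pass your-api-image:1.0

# Plain Docker: keep the secret file on a host tmpfs (on systemd distros /run
# is RAM-backed), then bind-mount it read-only so it never hits the disk.
install -d -m 0700 /run/app-secrets
printf 'hunter2' > /run/app-secrets/db_pass
docker run -d -v /run/app-secrets/db_pass:/run/secrets/db_pass:ro your-api-image:1.0
```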
The "Works on My Machine" Fallacy
We test our infrastructure rigorously. For benchmarking, we ran `sysbench` against CoolVDS instances versus standard HDD VPS providers available in the Nordic market.
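The exact invocation depends on your workload, but a representative fileio run looks roughly like this (the file size, duration, and thread count here are illustrative, not the parameters behind the numbers below):

```bash
# Prepare test files, hammer them with random read/write, then clean up.
sysbench fileio --file-total-size=4G prepare
sysbench fileio --file-total-size=4G --file-test-mode=rndrw --time=60 --threads=4 run
sysbench fileio --file-total-size=4G cleanup
```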
| Metric | Standard HDD VPS | CoolVDS (NVMe) |
|---|---|---|
| Random Read IOPS | ~400 | ~12,000+ |
| Sequential Write | 80 MB/s | 1.2 GB/s |
| Docker Build Time | 4m 20s | 45s |
Security adds overhead. AppArmor profiles and Seccomp filters consume CPU cycles. If you run heavy security layers on slow I/O, your application will crawl. This is why we insist on NVMe. You need the IOPS headroom to process security audits and logging without killing user experience.
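For the record, attaching those layers is a one-liner at run time; the profile path and name below are placeholders for whatever your distro or security team ships:

```bash
# Point Docker at a custom seccomp JSON profile and a host-loaded AppArmor profile.
docker run -d \
  --security-opt seccomp=/etc/docker/seccomp-custom.json \
  --security-opt apparmor=docker-nginx-hardened \
  nginx:1.14-alpine
```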
6. Static Analysis with Clair
Before you even deploy, scan the image. We integrated Clair into our CI/CD pipeline this year. It checks your layers against the CVE databases. It caught a nasty vulnerability in an old glibc library we were unknowingly inheriting from a base Debian image.
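Our pipeline wiring is CI-specific, but a local smoke test with the community tooling around Clair looks roughly like this (the arminc images and the clair-scanner flags shown were common in 2018; exact flags vary between releases, so treat this as an outline rather than copy-paste config):

```bash
# Start a pre-populated CVE database and a local Clair instance, then scan an image.
docker run -d --name clair-db arminc/clair-db:latest
docker run -d --name clair --link clair-db:postgres -p 6060:6060 arminc/clair-local-scan:latest
./clair-scanner --ip "$(hostname -i)" --clair=http://localhost:6060 node:10-alpine
```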
If you aren't scanning, you are flying blind.
Conclusion
The Datatilsynet will not accept "I didn't know" as an excuse when user data leaks. Container security in 2018 requires a shift in mindset: assume the container is hostile. Lock it down, strip its rights, and isolate the network.
And remember, all this software hardening is useless if your hardware is oversubscribed or unstable. You need a foundation that actually delivers the resources you asked for.
Don't let slow I/O kill your security posture. Deploy a hardened, KVM-based test instance on CoolVDS today and see how Docker is supposed to run.