Container Security in 2021: Hardening Docker and Kubernetes for Production

Let's get one thing straight immediately: Containers are not Virtual Machines.

If you treat a Docker container like a lightweight VM, you are going to get burned. I recently audited a setup for a client in Oslo where a developer had mounted the host's Docker socket into a CI container to "make builds faster." It took exactly 14 seconds for a red-team script to break out, gain root access to the host node, and start scraping memory from other containers. Efficiency is great, but shared kernel architecture means a single slip-up exposes everything.

With the dust still settling from the SolarWinds supply chain attack late last year and the recent Baron Samedit sudo vulnerability (CVE-2021-3156) terrifying sysadmins just last month, it is time to stop playing fast and loose. Here is how we lock down container infrastructure in the Nordic market, where privacy isn't just a preference—it's a legal minefield.

1. The Root Problem (Literally)

By default, the process inside a Docker container runs as root. Unless you have configured user namespace remapping (and most people haven't), a process that escapes the container is root on your host. In 2021, running containers as root is negligence.

We need to enforce the Principle of Least Privilege. You should be stripping capabilities and forcing non-root users. If your application "needs" root, fix the application.

The Fix: User Directives and Capability Dropping

In your `Dockerfile`, stop letting the daemon decide the user. Create a specific user/group.

FROM alpine:3.13

# Create a group and user
RUN addgroup -S appgroup && adduser -S appuser -G appgroup

# Tell Docker to switch to this user
USER appuser

# The rest of your commands run as non-root
WORKDIR /app
COPY . .
CMD ["./main"]

But changing the user isn't enough. The Linux kernel grants specific capabilities (like `NET_ADMIN` or `SYS_CHROOT`) to processes. Docker gives you a default set that is arguably too generous. You should drop all capabilities and only add back what is strictly necessary.

Here is how you run a hardened container interactively:

docker run --rm -it \
  --cap-drop=ALL \
  --cap-add=NET_BIND_SERVICE \
  --read-only \
  --tmpfs /tmp \
  --security-opt=no-new-privileges \
  nginx:alpine

Pro Tip: The `--read-only` flag is your best friend. It mounts the container's root filesystem as read-only. If an attacker manages to exploit a vulnerability in your web app, they can't write a backdoor script to disk. They are stuck in memory, and a restart wipes them out.
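If you deploy with Compose, the same hardening flags can be codified in the service definition instead of being remembered on the command line. A minimal sketch mirroring the `docker run` command above (service name and port mapping are illustrative):

```yaml
version: "3.8"
services:
  web:
    image: nginx:alpine
    read_only: true                # root filesystem mounted read-only
    tmpfs:
      - /tmp                       # writable scratch space lives in memory only
    cap_drop:
      - ALL                        # start from zero kernel capabilities
    cap_add:
      - NET_BIND_SERVICE           # add back only what is strictly necessary
    security_opt:
      - no-new-privileges:true     # block setuid/setgid privilege escalation
    ports:
      - "8080:80"
```

Depending on the image, a read-only root filesystem may require extra `tmpfs` entries for paths the process writes to at startup (the stock nginx image, for example, writes under /var/run and /var/cache/nginx).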

2. Supply Chain Security: Trust Nothing

Pulling `node:latest` is a game of Russian roulette. You don't know what changed in the base image between yesterday and today. With Docker Hub's recently introduced pull rate limits, reliance on public images is an operational risk as well.

In 2021, vulnerability scanning is not optional. Trivy has become standard in our CI/CD pipelines: it is fast, stateless, and integrates easily.

Scanning Before Deploying

Before you push an image to your private registry (or to a production server), scan it. Do not deploy critical vulnerabilities.

# Install Trivy (v0.16.0 is current stable)
$ sudo apt-get install wget apt-transport-https gnupg lsb-release
$ wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | sudo apt-key add -
$ echo deb https://aquasecurity.github.io/trivy-repo/deb $(lsb_release -sc) main | sudo tee -a /etc/apt/sources.list.d/trivy.list
$ sudo apt-get update
$ sudo apt-get install trivy

# Scan your image
$ trivy image python:3.4-alpine

If you see High or Critical vulnerabilities, the build fails. Simple.
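To make "the build fails" concrete, Trivy's `--exit-code` and `--severity` flags turn the scan into a hard CI gate. A sketch of a GitLab CI job (job name and image variables are illustrative; the same flags work in any pipeline runner):

```yaml
# .gitlab-ci.yml fragment
scan:
  stage: test
  image:
    name: aquasec/trivy:0.16.0
    entrypoint: [""]
  script:
    # --exit-code 1 makes the job fail when HIGH or CRITICAL findings exist
    - trivy image --exit-code 1 --severity HIGH,CRITICAL "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
```

Because Trivy is stateless, the job needs no persistent service; the vulnerability database is pulled on first run.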

3. Infrastructure Isolation: The Host Matters

You can harden your Kubernetes pod manifests all day, but if the underlying Virtual Private Server (VPS) is on a noisy, oversold node with weak hypervisor isolation, you are building a castle on a swamp.

This is where the choice of hosting provider becomes a security decision. Many budget providers use container-based virtualization (like OpenVZ/LXC) for their VPS offerings. This means you are running containers inside a container, sharing the kernel with every other customer on that physical box. This is a security nightmare for isolation.

At CoolVDS, we exclusively use KVM (Kernel-based Virtual Machine). Each VPS gets its own dedicated kernel. This provides a hard boundary between your container workloads and our physical infrastructure. Even if an attacker escapes your Docker container, they hit the KVM hypervisor wall, not the host OS.

Latency and Compliance in Norway

For those of us operating in Europe, Schrems II has changed the landscape. You need to know exactly where your data lives. Sending traffic through a load balancer in Frankfurt when your users are in Bergen adds unnecessary latency and potential compliance headaches.

| Feature        | Generic Cloud VPS            | CoolVDS Norway                |
|----------------|------------------------------|-------------------------------|
| Virtualization | Often OpenVZ (Shared Kernel) | KVM (Dedicated Kernel)        |
| Data Residency | Vague "EU Region"            | Oslo, Norway (GDPR Compliant) |
| Storage        | SATA SSD / HDD               | Pure NVMe (High IOPS)         |
| Ping to NIX    | 20-40ms                      | < 3ms                         |

4. Network Defense: Don't Talk to Strangers

By default, Docker containers on the same bridge network can talk to each other freely. In a microservices architecture, your frontend container should talk to the API, but it has no business talking to the database backup worker.

In Kubernetes, we use NetworkPolicies. In pure Docker, we use user-defined bridge networks to isolate tiers.

# Create isolated networks
docker network create frontend-net
docker network create backend-net

# Run the DB only on backend
docker run -d --name db --network backend-net postgres:13

# Run the API on both tiers (docker run attaches only one network at creation,
# so connect the second one afterwards)
docker run -d --name api --network backend-net my-api
docker network connect frontend-net api

# Run the web server only on frontend
docker run -d --name web --network frontend-net -p 80:80 nginx

Now, if the web server is compromised, the attacker cannot scan the database port directly. They are walled off.
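In Kubernetes, the same tier isolation is declared with a NetworkPolicy. A minimal sketch, assuming the database and API pods carry `app: db` and `app: api` labels (the label names are illustrative, and enforcement requires a CNI plugin that supports policies, such as Calico or Cilium):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-api-only
spec:
  podSelector:
    matchLabels:
      app: db            # applies to the database pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: api   # only the API tier may connect
      ports:
        - protocol: TCP
          port: 5432     # PostgreSQL
```

Once any NetworkPolicy selects a pod, all ingress not explicitly allowed is denied, so a compromised frontend pod can no longer reach the database at all.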

Conclusion

Security isn't a product you buy; it's a process you execute. It requires vigilance, minimizing attack surfaces, and choosing infrastructure that respects your data sovereignty.

Don't let slow I/O kill your SEO or weak isolation kill your business. You need a foundation that supports modern security standards with the raw power of NVMe storage and robust KVM virtualization.

Ready to harden your stack? Deploy a secure, high-performance KVM instance on CoolVDS in Oslo. Get your test environment live in under 55 seconds.