Container Security Post-Log4j: Hardening Docker for the Norwegian Enterprise

If the events of this December—specifically the Log4j (Log4Shell) disaster—have taught us anything, it is that encapsulation is not isolation. Many DevOps teams in Oslo and Bergen spent their holidays patching Java applications running inside containers, falsely believing that the container boundary would somehow mitigate remote code execution. It didn't. If an attacker executes code inside your container, and that container is running as root on a shared kernel with default capabilities, they aren't just in the application; they are knocking on the host's door.

The comfortable lie we tell ourselves is that docker run is a security feature. It is not. It is a process isolation mechanism that, by default, favors convenience over security. In a production environment, specifically under the scrutiny of Datatilsynet (The Norwegian Data Protection Authority) and post-Schrems II regulations, running "vanilla" Docker configurations is negligence. We need to go deeper than just scanning images; we need to reduce the attack surface at the kernel level. This is not about buying expensive security suites. It is about configuring the primitives that Linux gave us years ago.

1. The Root Problem: User Namespaces

The most persistent sin in container deployment is running processes as UID 0 (root). By default, the root user inside a container is the same as the root user on the host machine. If a container breakout occurs (via a kernel exploit), the attacker immediately has root privileges on your CoolVDS node. While we isolate your VPS with KVM virtualization at the hardware level, you must isolate the process at the OS level.
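
You can see the default for yourself; a throwaway Alpine container is enough for the check:

# Default behaviour: the container's root is UID 0, the very same UID the host
# kernel enforces permissions against.
docker run --rm alpine id
# prints something like: uid=0(root) gid=0(root) ...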

You have two options: strictly define a user in the Dockerfile or map user namespaces globally. The former is mandatory for any CI/CD pipeline.

# WRONG: runs as root (UID 0), the Docker default
FROM node:14-alpine
WORKDIR /app
COPY . /app
CMD ["node", "index.js"]

# RIGHT
FROM node:14-alpine
WORKDIR /app
COPY . /app
# Create a specific group and user
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
# Change ownership
RUN chown -R appuser:appgroup /app
USER appuser
CMD ["node", "index.js"]

However, for a systemic defense, you should enable user namespace remapping in the Docker daemon. This maps the container's root user to a high-numbered, unprivileged user on the host system. Even if an attacker breaks out, they land on the host as an unprivileged UID with no meaningful permissions.

Edit /etc/docker/daemon.json on your host:

{
  "userns-remap": "default",
  "no-new-privileges": true,
  "live-restore": true,
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
Pro Tip: The no-new-privileges flag is critical. It prevents processes inside the container from gaining new privileges using setuid or setgid binaries. This neutralizes a vast class of privilege escalation attacks.
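
After restarting the daemon, verify that the remap actually took effect. The check below assumes systemd and the dockremap user Docker creates for the "default" mapping; if you cannot touch the daemon config, the same behaviour is also available per container via --security-opt no-new-privileges.

sudo systemctl restart docker
grep dockremap /etc/subuid /etc/subgid     # subordinate UID/GID ranges Docker will use

# Root inside the container should now appear as a high, unprivileged UID on the host
docker run -d --rm --name uid-check alpine sleep 300
ps -o uid,pid,cmd -C sleep                 # UID comes from the dockremap range, not 0
docker rm -f uid-check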

2. Dropping Kernel Capabilities

By default, Docker grants a container a significant subset of Linux kernel capabilities, including NET_RAW (allowing packet crafting) and MKNOD (device node creation). For a web server or a microservice, you rarely need these. The principle of least privilege demands we drop everything and add back only what is strictly necessary.

If you are running an Nginx ingress on a CoolVDS NVMe instance, you likely only need to bind to a port. You do not need to audit the system or modify kernel modules.

docker run -d --name secure-nginx \
  --cap-drop=ALL \
  --cap-add=NET_BIND_SERVICE \
  --cap-add=CHOWN \
  --cap-add=SETGID \
  --cap-add=SETUID \
  --read-only \
  --tmpfs /var/cache/nginx \
  --tmpfs /var/run \
  nginx:alpine
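
To confirm the container really ends up with a minimal capability set, read the effective capability mask of PID 1 and decode it on the host (capsh ships with the libcap / libcap2-bin package):

docker exec secure-nginx grep CapEff /proc/1/status
# Decode the hex mask from the line above on the host:
# capsh --decode=<CapEff value>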

The --read-only flag is the unsung hero here. It mounts the container's root filesystem as read-only. If an attacker exploits a vulnerability in your application to drop a reverse shell script or a crypto-miner, the write operation fails immediately. You map writable areas (like /var/cache) using tmpfs, which resides in RAM and is wiped on restart.
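
With the container from the command above still running, the effect is easy to demonstrate:

docker exec secure-nginx touch /evil.sh      # fails: Read-only file system
docker exec secure-nginx touch /var/run/ok   # succeeds: tmpfs mount, wiped on restart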

3. Network Segmentation and Local Latency

In Norway, where internet connectivity is robust but latency to central Europe can still impact high-frequency trading or real-time gaming backends, we often see developers defaulting to --network host to squeeze out performance. This removes the network isolation entirely. The container shares the host's networking namespace.

Instead, use user-defined bridges. This allows you to restrict inter-container communication. A database container should never be reachable from the public internet, nor should it be accessible by the frontend container unless explicitly allowed.

version: "3.8"
services:
  frontend:
    image: my-app:latest
    networks:
      - front-tier
      - back-tier   # explicitly allowed to talk to the database
  database:
    image: postgres:13-alpine
    networks:
      - back-tier

networks:
  front-tier:
    internal: false
  back-tier:
    internal: true  # No internet access for the DB

This configuration ensures that your database cannot initiate outbound connections to download malware, even if compromised.
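
You can verify the isolation once the stack is up (docker-compose up -d, or docker compose with the v2 plugin). postgres:13-alpine ships BusyBox wget, so the test needs nothing extra:

docker-compose exec database wget -T 3 -qO- http://example.com
# fails: the internal back-tier network has no route to the outside world
# containers attached only to front-tier keep their normal outbound access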

4. Supply Chain Security in 2021

The Log4j incident highlighted that the vulnerability often lies deep within the dependency tree. You cannot rely on "official" images being vulnerability-free. In 2021, scanning your images before deployment is not optional.

We recommend Trivy for its simplicity and CI integration. It scans OS packages (Alpine, RHEL, CentOS) and language-specific dependencies (Bundler, Composer, npm, yarn).

$ trivy image --severity HIGH,CRITICAL python:3.4-alpine

2021-12-31T10:00:00.000+0100    INFO    Detecting Alpine vulnerabilities...

python:3.4-alpine (alpine 3.9.4)
================================
Total: 1 (HIGH: 0, CRITICAL: 1)

+---------+------------------+----------+-------------------+---------------+--------------------------------+
| LIBRARY | VULNERABILITY ID | SEVERITY | INSTALLED VERSION | FIXED VERSION |             TITLE              |
+---------+------------------+----------+-------------------+---------------+--------------------------------+
| openssl | CVE-2019-1543    | CRITICAL | 1.1.1b-r1         | 1.1.1c-r0     | openssl: ChaCha20-Poly1305     |
|         |                  |          |                   |               | with long nonces               |
+---------+------------------+----------+-------------------+---------------+--------------------------------+

If you see a Critical CVE in your base image, the build must fail. Do not deploy.
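
In practice that means wiring the exit code into the pipeline. Trivy's --exit-code flag does exactly this; the image name below is only a placeholder for your own build artifact:

# Fail the CI job (non-zero exit) whenever HIGH or CRITICAL findings exist,
# so the image never reaches the registry.
trivy image --exit-code 1 --severity HIGH,CRITICAL registry.example.com/my-app:latest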

5. The Infrastructure Layer: KVM vs. Shared Kernel

This is where the choice of hosting provider moves from a budget decision to an architectural one. Many cheap VPS providers use OpenVZ or LXC. In those environments, you are effectively running containers inside a container. You share the kernel with every other customer on that physical node. If a kernel exploit is found, your data is at risk from a "neighbor" attack.

At CoolVDS, we exclusively use KVM (Kernel-based Virtual Machine). Each VPS gets its own dedicated kernel. If you run Docker inside a CoolVDS instance, you have two layers of defense: the container runtime and the hardware virtualization boundary. This is crucial for compliance with GDPR and local standards set by Datatilsynet. Even if an attacker escapes your Docker container, they are trapped inside your specific VM, not roaming the physical host.
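
Not sure what your current provider actually gives you? On any systemd-based distribution, a single command reveals the virtualization type:

systemd-detect-virt
# "kvm"              -> full hardware virtualization, your own kernel
# "openvz" or "lxc"  -> you are sharing a kernel with every other tenant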

Performance Check: I/O Wait

Security scanners and hardened filesystems can increase I/O load. Monitoring `iowait` is essential. On a CoolVDS NVMe instance, you should see negligible latency even during heavy scans.
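
To see it for yourself, watch iowait while a scan runs; iostat comes from the sysstat package, and plain vmstat works if you do not want to install anything:

iostat -x 2 5     # %iowait plus per-device utilisation, 5 samples at 2-second intervals
vmstat 2 5        # the "wa" column is iowait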

Metric                  | HDD / SATA SSD VPS | CoolVDS NVMe
------------------------|--------------------|-------------
Random Read IOPS        | 500 - 5,000        | 50,000+
Container Startup Time  | 2-5 seconds        | Milliseconds
Trivy Scan Duration     | 45 seconds         | 5 seconds

Security should not degrade performance. If your security tooling causes your application to time out, your infrastructure is too slow.

Conclusion

The era of trusting the default configuration ended years ago, but 2021 buried it. Hardening containers requires a shift in mindset: assume the perimeter is already breached. Drop capabilities, isolate networks, run as non-root, and critically, run your workloads on infrastructure that respects isolation boundaries.

Don't let a misconfiguration be the reason you spend your weekend restoring backups. Secure the kernel, secure the container, and build on a foundation that handles the load.

Ready to lock down your infrastructure? Deploy a KVM-based, NVMe-powered instance on CoolVDS in Oslo today and build your fortress.