Container Security in 2018: Locking Down Docker on Norwegian Infrastructure

Let’s be honest. docker run -d -p 80:80 nginx is how 90% of developers start. It’s also how they end up getting owned. In the rush to adopt microservices, we often forget that containers, by default, are not sandboxes. They are processes with a fancy view of the filesystem.

It is November 2018. The GDPR hammer dropped in May. Datatilsynet (The Norwegian Data Protection Authority) is not joking around with data leakage. If you are running containers as root on a public-facing VPS without security contexts, you aren't just risking downtime; you are risking legal negligence.

I have spent the last week auditing a client's Kubernetes setup (running v1.11). It was a mess: privileged containers everywhere, mapped directly to host devices. One kernel panic triggered from inside a container took down the whole node. We fixed it, but it was a close call. Here is how we lock things down, from the image build to the bare metal.

1. The Root Problem (Literally)

By default, the user inside the container is root. If an attacker breaks out of the container (via a kernel exploit like Dirty COW, which we saw in 2016 and still see variants of), they are root on your host. Game over.

You must create a specific user in your Dockerfile. Do not rely on the runtime to do this for you.

The Wrong Way

FROM node:8
COPY . /app
WORKDIR /app
CMD ["npm", "start"]

The Right Way (Alpine 3.8 Example)

FROM alpine:3.8

# Create a group and user
RUN addgroup -S appgroup && adduser -S appuser -G appgroup

# Tell Docker to use this user
USER appuser

WORKDIR /home/appuser
COPY . .

CMD ["./my-app"]

Pro Tip: If you are using CoolVDS NVMe instances, file permissions can be tricky when mounting volumes. Ensure the UID/GID on the host matches the container user, or use user namespaces (userns-remap) in your Docker daemon config.
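If you go the userns-remap route, the change lives in the daemon config rather than the image. A minimal sketch, assuming the default dockremap mapping and an otherwise empty /etc/docker/daemon.json:

sudo tee /etc/docker/daemon.json <<'EOF'
{
  "userns-remap": "default"
}
EOF
sudo systemctl restart docker

Be aware that flipping this on moves Docker's storage to a namespaced directory, so your existing images and containers will appear to vanish until you turn it off again.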

2. Drop Linux Capabilities

Linux capabilities break down the root privilege into distinct units. Does your Nginx container need to load kernel modules? No. Does it need to change the system time? Absolutely not.

Yet, by default, Docker grants a wide array of capabilities. You should adopt a "deny all, allow some" approach. We drop everything and add back only what is strictly necessary (like NET_BIND_SERVICE if you bind to port 80, though you should be using a reverse proxy anyway).

Command line enforcement:

docker run --cap-drop=ALL \
  --cap-add=NET_BIND_SERVICE \
  --read-only \
  --tmpfs /run \
  --tmpfs /tmp \
  my-secure-image

The --read-only flag is a lifesaver. It mounts the container's root filesystem as read-only. If an attacker manages to inject a script, they cannot write it to disk. We use --tmpfs for the few directories that actually need writing.
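Two quick sanity checks, assuming the alpine:3.8 base from earlier (substitute your own image): the effective capability set should read as all zeros, and any write to the root filesystem should bounce.

docker run --rm --cap-drop=ALL alpine:3.8 grep CapEff /proc/1/status
# expect: CapEff: 0000000000000000
docker run --rm --read-only alpine:3.8 touch /test
# expect: touch: /test: Read-only file system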

3. Kernel Isolation: The Infrastructure Layer

This is where the "Cloud" abstraction leaks. Containers share the host kernel. If you are on a budget provider using OpenVZ or LXC to host your Docker containers, you are nesting containers inside containers. This is performance suicide and a security nightmare.

You need a clean hypervisor boundary. This is why we deploy strictly on KVM (Kernel-based Virtual Machine). At CoolVDS, every instance is a KVM slice. You get your own kernel. If your neighbor on the physical rack executes a fork bomb, your Docker daemon doesn't even blink.
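Not sure what you are actually sitting on? On systemd-based distros there is a one-liner for that:

systemd-detect-virt
# "kvm" means a real hypervisor boundary; "openvz" or "lxc" means you are sharing a kernel with the neighbors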

For high-performance databases like MySQL 5.7 or MariaDB 10.3 running in containers, the I/O overhead of double-encapsulation (Docker inside OpenVZ) is unacceptable. Our NVMe storage connects directly via VirtIO drivers, giving you near-metal speeds.

4. Network Segmentation and Firewalls

Don't publish ports blindly. If you run -p 8080:8080, Docker modifies iptables to open that port to the world (0.0.0.0), often bypassing your UFW (Uncomplicated Firewall) rules if you aren't careful.

Bind to localhost if you are using a reverse proxy on the host:

docker run -p 127.0.0.1:8080:8080 my-app
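Verify it on the host. The port should only appear on the loopback address, and you can inspect exactly which NAT rules Docker added on your behalf:

sudo ss -tlnp | grep 8080
# want 127.0.0.1:8080 here, not 0.0.0.0:8080
sudo iptables -t nat -L DOCKER -n
# the DNAT rules Docker manages outside of UFW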

Then, configure Nginx on the host to handle the traffic. This gives you a central point for SSL termination (Let's Encrypt) and access logging, which is exactly the audit trail a GDPR review in Norway will ask for.

Nginx Reverse Proxy Configuration (Snippet)

server {
    listen 80;
    server_name app.coolvds-client.no;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        
        # Security Headers
        add_header X-Frame-Options "SAMEORIGIN";
        add_header X-XSS-Protection "1; mode=block";
    }
}
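For the TLS side, Certbot's nginx plugin will rewrite this server block to listen on 443 and wire up renewals. A minimal sketch, assuming Certbot is installed from your distro's packages and you control the domain above:

sudo certbot --nginx -d app.coolvds-client.no
sudo certbot renew --dry-run
# confirm automatic renewal works before you forget about it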

5. Auditing Your Setup

You think you are secure? Prove it. In 2018, the standard for checking Docker configurations is the Docker Bench for Security. It's a script that checks for dozens of common best practices based on the CIS benchmark.

git clone https://github.com/docker/docker-bench-security.git
cd docker-bench-security
sudo sh docker-bench-security.sh

If you see red warnings, fix them. If you see warnings about "Host Configuration" or "Kernel Version", it might be your hosting provider letting you down. You need a modern kernel (4.x series) to support all security features effectively.
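Two of those host-level checks are easy to run by hand before you blame the provider:

uname -r
# kernel version; you want a 4.x series kernel
docker info | grep -A 3 'Security Options'
# should list seccomp, plus apparmor or selinux depending on your distro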

The Norwegian Context: Latency and Law

Why does hosting location matter for security? Availability is a pillar of the CIA triad (Confidentiality, Integrity, Availability). Routing traffic through half of Europe introduces hops, latency, and failure points.

By hosting in Norway, you reduce latency to the NIX (Norwegian Internet Exchange) in Oslo to single-digit milliseconds. Furthermore, keeping data within Norwegian borders simplifies GDPR compliance significantly compared to routing data through US-owned data centers where the CLOUD Act applies.

Feature              | Standard VPS           | CoolVDS KVM Instance
Kernel Isolation     | Shared (LXC/OpenVZ)    | Dedicated (KVM)
Storage I/O          | SATA/SAS (Rotational)  | NVMe (Low Latency)
Docker Compatibility | Limited/Restricted     | Native/Full Control
Location             | Often Central Europe   | Oslo, Norway

Security is not a product; it is a process. It requires vigilance, correct configuration, and the right infrastructure. Do not let a shared kernel be your weak link.

Need a sandbox that actually behaves like a dedicated server? Spin up a CoolVDS KVM instance today. We give you the root access and isolation you need to run Docker safely.