Container Security in 2017: Locking Down Docker Production Environments in Norway

Let’s be honest for a second. The phrase "it works on my machine" is the most dangerous sentence in IT. In 2017, everyone is rushing to containerize their monoliths because it feels modern, but few are stopping to ask: Is this actually secure?

I’ve seen too many startups in Oslo deploying Docker containers running as root directly on bare metal, thinking they have achieved isolation. They haven't. They have just created a convenient package for an attacker to escalate privileges and own the entire host. With the GDPR enforcement date looming next year, this kind of negligence isn't just bad practice; it is a liability that could bankrupt you.

If you are deploying containers today, you are responsible for the kernel they share. Here is how to lock them down without destroying your developers' workflow.

1. The Root Delusion

By default, a process inside a Docker container runs as root. If that process breaks out of the container (and with the recent Dirty COW exploit, we know kernel vulnerabilities are real), the attacker is root on your host server. Game over.

The fix is simple but often ignored. Create a specific user in your Dockerfile. Stop letting your web server own your system.

# Don't do this: the default user in the image is root
FROM node:6

# Do this instead: create an unprivileged user and switch to it
FROM node:6
RUN groupadd -r app && useradd -r -g app app
USER app

Pro Tip: If you need to bind to port 80, use a reverse proxy like Nginx on the host or inside a separate container, or use setcap to allow low-port binding for non-root users. Never run your application logic as root just to bind a port.
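If you go the setcap route, the idea is to grant the binary itself the single capability it needs. A minimal sketch, assuming the official node:6 image (where the binary lives at /usr/local/bin/node; adjust the path for your runtime):

```
FROM node:6

# libcap2-bin provides setcap; grant only the low-port-binding capability
RUN apt-get update && apt-get install -y libcap2-bin \
    && setcap 'cap_net_bind_service=+ep' /usr/local/bin/node

# Run as an unprivileged user -- the binary can still bind port 80
RUN groupadd -r app && useradd -r -g app app
USER app
```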

2. Drop Linux Capabilities

The Linux kernel breaks down root privileges into distinct units called capabilities. A web server doesn't need to load kernel modules or change the system time. Yet, by default, Docker gives containers a wide array of these powers.

We follow a "deny all, permit some" strategy. Drop everything, then add back only what is strictly necessary. This dramatically reduces the attack surface.

docker run -d -p 80:80 \
  --cap-drop=ALL \
  --cap-add=NET_BIND_SERVICE \
  --cap-add=SETUID \
  --name web_server \
  my-image:latest

3. The Hypervisor Wall: Why KVM Matters

This is where infrastructure choice becomes a security feature. Containers share the host kernel. If the kernel panics or gets exploited, every container on that host goes down or gets compromised.

This is why we at CoolVDS strictly use KVM (Kernel-based Virtual Machine) virtualization. Unlike OpenVZ or LXC-based VPS hosting often found in the budget market, KVM provides hardware-level virtualization.

If you run your Docker swarm inside a CoolVDS KVM instance, you have a hard boundary. Even if a container attacker manages to crash the kernel, they only crash your VM, not the physical node, and certainly not the neighbor's data. In the context of Norwegian data privacy laws, this isolation layer is critical.

4. Read-Only Filesystems

Mutable infrastructure is a relic of the past. Your containers should be stateless. If an attacker compromises your application, the first thing they want to do is download a payload or modify a configuration file. Make their life miserable by mounting the root filesystem as read-only.

docker run --read-only -v /run/app/temp:/tmp:rw my-app

This forces you to be explicit about where data is written. It’s annoying at first, but it prevents an entire class of persistence attacks.
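The same policy expressed in a Compose file looks like this. A sketch using the read_only, tmpfs, and volumes keys of compose file format 2; the service name, image, and paths are illustrative:

```
version: '2'
services:
  app:
    image: my-app
    read_only: true          # root filesystem is immutable
    tmpfs:
      - /tmp                 # in-memory scratch space, wiped on restart
    volumes:
      - app-data:/var/lib/app:rw   # the only explicitly writable path
volumes:
  app-data:
```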

5. Network Segmentation and Local Latency

Don't use the default bridge network for everything. If one container gets breached, you don't want it sniffing traffic from your database container. Create specific networks for specific communication paths.
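In Compose this segmentation is a few lines of YAML. A sketch with hypothetical service names: the proxy and the database share no network, and the backend network is marked internal so it has no route to the outside world:

```
version: '2'
services:
  proxy:
    image: nginx
    networks: [frontend]
  web:
    image: my-app
    networks: [frontend, backend]
  db:
    image: postgres:9.6
    networks: [backend]    # never reachable from the proxy tier
networks:
  frontend:
  backend:
    internal: true         # no external connectivity from this network
```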

Furthermore, where does your data physically live? Latency matters, but sovereignty matters more. Datatilsynet (The Norwegian Data Protection Authority) is becoming increasingly strict about where personal data is processed.

Feature             Shared Hosting      CoolVDS KVM
Kernel Isolation    Shared              Dedicated
Network Stack       Often a shared IP   Private stack
Data Location       Unknown cloud       Oslo, Norway

6. Resource Limits (The DDoS Defense)

A single memory leak in one container shouldn't kill your entire node. In 2017, we still see people deploying without Cgroups limits. Even if you aren't under malicious attack, a bad recursive function can look exactly like a DoS attack to your CPU.

Set hard limits in your compose files:

version: '2.2'   # the cpus key requires compose file format 2.2+
services:
  web:
    image: nginx
    mem_limit: 512m
    cpus: 0.5

On CoolVDS NVMe instances, we guarantee the I/O throughput, but you must guarantee your application behaves within its memory boundaries. We provide the highway; you have to drive the car responsibly.

Conclusion

Security isn't a product you buy; it's a process you adhere to. By running as unprivileged users, dropping capabilities, enforcing resource limits, and running on top of true KVM virtualization, you turn your infrastructure from a soft target into a fortress.

Don't wait for a breach to take isolation seriously. Deploy your hardened Docker stack on a CoolVDS instance today. We offer low-latency connectivity directly from Oslo, ensuring your data stays compliant and your applications stay fast.