
Container Security in 2015: Stop Handing Root Access to Your Host


You Are Probably Deploying Vulnerabilities

Let’s be honest. The hype around Docker right now is deafening. Everyone from small startups in Grünerløkka to enterprise giants is rushing to "containerize" everything. And I get it. The ability to package dependencies and ship consistent environments from dev to prod is powerful.

But there is a dirty secret in our industry right now: Most Docker deployments are wildly insecure.

I recently audited a setup for a client in Oslo. They were running a customer-facing dashboard in a container. It looked fine on the surface. But a quick check of the Dockerfile revealed they were running the application as root. Even worse, they hadn't limited kernel capabilities. If an attacker managed to exploit a vulnerability in their application code, they wouldn't just be inside the container—they’d have a straight shot at the host kernel.

In this post, we are going to fix that. We are going to look at how to lock down Docker 1.8, why the underlying architecture of your VPS matters more than you think, and how to keep Datatilsynet happy.

The "Root" of the Problem

By default, the Docker daemon runs as root, and if you don't specify a user inside your container, the process inside runs as root too. Root in a container is effectively the same user ID (0) as root on the host machine. Linux namespaces provide isolation, but they are not a perfect sandbox yet (Docker 1.8 has no user namespace remapping), and a kernel panic triggered from inside a container brings down the whole ship.
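You can see this for yourself in one line. A quick sketch (assumes Docker is installed and the ubuntu:14.04 image is pulled):

```shell
# With no USER directive and no -u flag, the container's main process
# runs as UID 0 -- the same root as the host, since Docker 1.8 has no
# user namespace remapping.
docker run --rm ubuntu:14.04 id -u          # prints: 0

# Passing -u at run time is a quick mitigation when you can't change
# the image itself.
docker run --rm -u nobody ubuntu:14.04 id -u   # prints an unprivileged UID
```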

1. Create a Dedicated User

This is the lowest-hanging fruit, yet 90% of the Dockerfiles I see ignore it. Stop running your Node.js or Python apps as root. It takes two lines to fix this.

FROM ubuntu:14.04
RUN groupadd -r coolapp && useradd -r -g coolapp coolapp
USER coolapp
CMD ["./start.sh"]

By switching to a standard user, you massively reduce the blast radius if your application is compromised.
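To confirm the switch actually took effect, check which UID the built image runs under. A sketch, assuming the Dockerfile above was built with a tag of coolapp:latest (the tag is my invention for illustration):

```shell
# Build the image from the Dockerfile above, then verify the default user.
docker build -t coolapp:latest .
docker run --rm coolapp:latest id -u    # any non-zero UID means USER took effect
docker run --rm coolapp:latest whoami   # should print: coolapp
```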

Kernel Capabilities: The Principle of Least Privilege

Linux divides the privileges traditionally associated with the superuser into distinct units, known as capabilities. By default, Docker drops many of these, but it still leaves too many enabled for a typical web application.

Does your Nginx server really need SYS_CHROOT or MKNOD? Probably not.

The best practice right now is to drop all capabilities and only add back the specific ones you need. Here is how I start my containers:

docker run -d -p 80:80 \
  --cap-drop=ALL \
  --cap-add=NET_BIND_SERVICE \
  --name web_server \
  nginx:1.9

This command strips the container of all privileges, then adds back NET_BIND_SERVICE just so it can bind to port 80. If an attacker breaks in, they will find themselves in a straitjacket.
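You can verify the straitjacket from inside. Every Linux process exposes its capability sets in /proc/&lt;pid&gt;/status, and the same check works in or out of a container:

```shell
# Print the capability bitmasks of the current process. Inside a
# container started with --cap-drop=ALL --cap-add=NET_BIND_SERVICE,
# CapEff collapses to 0000000000000400 -- only bit 10
# (CAP_NET_BIND_SERVICE) remains set.
grep '^Cap' /proc/self/status
```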

Infrastructure Matters: The Case for KVM

This is where your choice of hosting provider becomes a security decision.

Many budget VPS providers in Europe still use OpenVZ. In an OpenVZ environment, you are sharing the host kernel directly with other customers on the same physical server. You cannot load your own kernel modules, and you rely entirely on the host's kernel version. Docker on OpenVZ is often a hacky, unstable experience requiring old kernel versions (like 2.6.32) which are missing modern cgroup features.

This is a security risk you cannot afford.

Pro Tip: Always ask your provider about their virtualization technology. If it's not hardware-virtualized (like KVM, Xen, or VMware), walk away.
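If you are already logged into a box and want a quick sanity check before you even ask support, a rough sketch works (these are heuristics, not an authoritative detection method):

```shell
# Rough guess at the virtualization layer from inside the guest.
# OpenVZ containers expose /proc/user_beancounters and share the host's
# (often ancient) kernel; hardware-virtualized guests report a
# "hypervisor" CPU flag and run their own kernel.
if [ -f /proc/user_beancounters ]; then
    echo "OpenVZ container (shared host kernel: $(uname -r))"
elif grep -q '^flags.*hypervisor' /proc/cpuinfo; then
    echo "Hardware-virtualized guest (own kernel: $(uname -r))"
else
    echo "Bare metal or undetected (kernel: $(uname -r))"
fi
```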

At CoolVDS, we exclusively use KVM (Kernel-based Virtual Machine) for our VPS Norway instances. This means your VPS has its own isolated kernel. If you run Docker on CoolVDS, you have two layers of defense:

  1. The Container isolation (Namespaces/Cgroups)
  2. The Hypervisor isolation (KVM)

This architecture is critical for compliance with the Norwegian Personal Data Act (Personopplysningsloven). You need to be able to prove to auditors that your data is segregated.

Data Sovereignty and Latency

Speaking of compliance, let's talk about where your bits actually live. With the Safe Harbor framework looking increasingly shaky (lots of chatter in legal circles right now regarding the Schrems case), keeping data within the EEA is smart. Keeping it in Norway is even better.

Hosting your container cluster on servers physically located in Oslo offers two advantages:

  • Legal Safety: Your data falls under Norwegian jurisdiction, which has some of the strictest privacy protections in the world.
  • Performance: Latency matters. Pinging a server in Frankfurt from Oslo takes ~25-30ms. Pinging a CoolVDS server in Oslo takes ~2ms. For database-heavy applications, that round-trip time accumulates.
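The latency point is simple arithmetic. A back-of-the-envelope sketch, using the round-trip times above and an assumed query count of 20 sequential database round trips per page view:

```shell
# Sequential round trips add up fast.
RTT_FRANKFURT=27   # ms, Oslo -> Frankfurt (midpoint of ~25-30ms)
RTT_OSLO=2         # ms, Oslo -> local datacenter
QUERIES=20         # assumed sequential DB round trips per page view

echo "Frankfurt network wait: $((RTT_FRANKFURT * QUERIES)) ms"  # 540 ms
echo "Oslo network wait:      $((RTT_OSLO * QUERIES)) ms"       # 40 ms
```

Half a second of pure network wait per page view, before your database has done any actual work.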

Immutable Infrastructure

One final technique we are using more in 2015 is the concept of Read-Only Containers. If your application is stateless (which it should be), why give it write access to the filesystem?

docker run --read-only -v /tmp --name stateless_app my_image

This forces the container's root filesystem to be read-only. An attacker cannot modify binaries, download script kiddie tools, or edit config files. It forces you to be disciplined with your data persistence, utilizing Volumes for the data that actually matters.
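A quick way to convince yourself it works, reusing the my_image name from above (the shell one-liner is illustrative and assumes the image ships a POSIX shell):

```shell
# Attempt a write to the root filesystem (should fail), then to the
# volume mounted at /tmp (should succeed).
docker run --rm --read-only -v /tmp my_image sh -c '
  touch /usr/local/bin/backdoor 2>/dev/null \
    && echo "rootfs WRITABLE - check your flags" \
    || echo "rootfs is read-only";
  touch /tmp/scratch && echo "/tmp volume still writable"
'
```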

Conclusion

Containers are the future, but don't let the convenience blind you to the risks. In 2015, the tooling is still young, and security is often an afterthought in the documentation.

Secure your users, drop your capabilities, and ensure your foundation is solid. Running secure containers on a shared-kernel VPS is like putting a steel door on a tent. You need the hardware isolation that KVM provides.

Ready to lock down your infrastructure? Deploy a KVM-based instance on CoolVDS today and get single-digit latency to your Norwegian customers.
