
Escaping the Jail: Hardening LXC and OpenVZ Environments in Production

The "Container" Trap: Why Your Jail Isn't As Secure As You Think

Let’s be honest with each other. The hosting industry is currently obsessed with density. Everyone is talking about this new "Docker" project (currently v0.6), and legacy providers are still pushing OpenVZ overselling schemes to cram as many customers as possible onto a single physical chassis. It’s great for their margins, but it’s a potential nightmare for your security posture.

I’ve spent the last week auditing a client's infrastructure—a major e-commerce setup here in Oslo—and what I found was terrifying. They were running critical payment gateways inside unpatched LXC (Linux Containers) with default privileges. In a shared kernel environment, if one container panics the kernel, everyone goes down. Worse, if an attacker breaks out of the container (root escalation), they own the host node.

If you are serious about hosting in Norway, dealing with strict Datatilsynet regulations and the Personal Data Act (Personopplysningsloven), you cannot rely on "default" container configurations. Today, we are going to look at how to actually lock down these environments, and why sometimes, you just need true hardware virtualization.

The Shared Kernel Problem

In a containerized environment like LXC or OpenVZ, you are not running a separate OS. You are running a userspace on top of the host's kernel. To demonstrate this, run a simple `uname` check inside your VPS.

# Inside a container
uname -r
2.6.32-042stab078.27

If you see a kernel version containing `stab` (the OpenVZ stable-branch suffix), or one that matches the host exactly while you have no ability to load modules, you are in a container. This means you share the kernel's memory space, CPU scheduler, and network stack vulnerabilities with every other tenant on that box. If a neighbor gets DDoSed, your I/O wait spikes. If there is a kernel exploit (like the recent CVE-2013-2094 privilege escalation), isolation is meaningless.
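
If `uname` alone is ambiguous, a couple of quick probes usually settle it. This is a minimal sketch that assumes stock OpenVZ and LXC templates; file locations can differ on customized hosts:

# OpenVZ exposes the beancounter interface inside every container
[ -f /proc/user_beancounters ] && echo "OpenVZ container"

# LXC containers typically show an lxc path in init's cgroup
grep -q lxc /proc/1/cgroup 2>/dev/null && echo "LXC container"

# In any container, loading a module should fail even as root
modprobe loop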

Pro Tip: This is why CoolVDS defaults to KVM (Kernel-based Virtual Machine). With KVM, you get your own kernel. If your neighbor crashes their OS, your instance keeps humming along. For mission-critical workloads, the overhead of KVM is a small price to pay for sleep.
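
If you are not sure what your current provider actually sold you, a few probes from inside the guest usually tell the story. Treat this as a rough sketch; vendor strings and boot messages vary by distro and hypervisor build:

# A KVM guest's virtual hardware gives itself away
dmesg | grep -i kvm-clock
dmidecode -s system-product-name   # strings like "KVM", "QEMU" or "Bochs" are common

# The hypervisor CPU flag indicates hardware virtualization, not a container
grep -c hypervisor /proc/cpuinfo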

Hardening LXC: Dropping Capabilities

If you must use containers—perhaps for their raw speed and lack of emulation overhead—you absolutely must drop capabilities. By default, many container templates give `root` inside the container too much power. We need to strip these down in your LXC config file (usually located at `/var/lib/lxc/container-name/config`).

You never want a container to be able to load kernel modules or manipulate the MAC address. Here is a battle-tested configuration snippet I use for production web nodes:

# /var/lib/lxc/web01/config

# Network configuration
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.ipv4 = 192.168.1.10/24

# DROPPING CAPABILITIES (CRITICAL)
# sys_module: prevents loading kernel modules
# mac_admin: prevents changing MAC settings
# sys_time: prevents changing system time
lxc.cap.drop = sys_module mac_admin sys_time sys_boot audit_control

# Cgroup limits to prevent DoS from inside
lxc.cgroup.memory.limit_in_bytes = 2048M
lxc.cgroup.memory.memsw.limit_in_bytes = 2048M
lxc.cgroup.cpu.shares = 512

By setting `lxc.cap.drop`, we effectively neuter the root user inside the container. Even if an attacker compromises the web service and escalates to root, they cannot insert a malicious kernel module to hide their tracks or intercept traffic from other containers.
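
The new capability set only applies after the container is restarted, and it is worth proving to yourself that the drop actually stuck. A rough verification pass, using the `web01` name from the config above (exact error messages vary by distro):

# On the host: restart the container so the new config is read
lxc-stop -n web01
lxc-start -n web01 -d

# Inside the container, as root: both of these should now fail
modprobe dummy          # sys_module dropped -> "Operation not permitted"
date -s "2013-01-01"    # sys_time dropped  -> cannot set the clock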

Filesystem Isolation and Read-Only Mounts

Another common mistake is mounting the entire `/proc` and `/sys` filesystems as read-write. This exposes kernel tunable parameters to the container. In 2013, we are seeing more sophisticated scripts that probe `/sys` to identify host vulnerabilities.

Ensure your fstab for the container forces read-only access where appropriate. This is often handled by the LXC templates in Ubuntu 12.04 LTS, but if you are rolling custom configs on CentOS 6, verify this manually:

# Cat the fstab inside the container rootfs
cat /var/lib/lxc/web01/rootfs/etc/fstab
proc    /proc    proc     nodev,noexec,nosuid  0 0
sysfs   /sys     sysfs    defaults,ro          0 0

Note the `ro` flag on sysfs. This is non-negotiable.
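
You can sanity-check the result from inside a running container. Assuming the mounts came up as configured, any write to `/sys` should bounce:

# Inside the container: confirm sysfs is mounted read-only
mount | grep sysfs
# expected: sysfs on /sys type sysfs (ro)

# Writing any existing tunable should fail with "Read-only file system"
# (transparent_hugepage is just one example path; pick any entry under /sys)
echo never > /sys/kernel/mm/transparent_hugepage/enabled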

Network Segregation with iptables

Containers often bridge directly to the main network interface, which exposes them to the public network. In a high-security setup, especially one adhering to European data privacy standards, you should place containers behind a NAT and only forward specific ports.

Do not rely on the hosting provider's firewall alone. Implement strict `iptables` rules on the host node (the hypervisor) to control traffic flow to the containers.

# On the Host Node (Dom0)
# 1. Flush existing rules
iptables -F

# 2. Default Drop Policy
iptables -P INPUT DROP
iptables -P FORWARD DROP

# 3. Allow established connections
iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT

# 4. Forward ONLY port 80 and 443 to the container
iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 192.168.1.10:80
iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT --to-destination 192.168.1.10:443

# 5. Allow the container subnet to talk to the world (outbound only)
iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o eth0 -j MASQUERADE

# 6. Explicitly Allow Forwarding for the Web Service
iptables -A FORWARD -p tcp -d 192.168.1.10 --dport 80 -j ACCEPT
iptables -A FORWARD -p tcp -d 192.168.1.10 --dport 443 -j ACCEPT

This configuration creates a strict perimeter. Even if the container has an open port 22 (SSH) with a weak password, the outside world cannot reach it because we only forwarded ports 80 and 443.
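
Two operational details are easy to forget here: the DNAT rules do nothing unless the host actually forwards packets, and hand-typed `iptables` rules vanish on reboot. A short follow-up, assuming a CentOS/RHEL 6 host node (Debian and Ubuntu admins would reach for `iptables-persistent` instead):

# Enable packet forwarding on the host node, now and after reboot
sysctl -w net.ipv4.ip_forward=1
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf

# Persist the firewall rules (CentOS/RHEL 6)
service iptables save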

The Trade-off: Performance vs. Isolation

Why do people still use OpenVZ or LXC given these security headaches? Performance. The lack of hardware emulation means your syscalls are native. For a startup in Oslo trying to serve static assets with minimal latency to NIX (Norwegian Internet Exchange), that raw speed is tempting.

However, compare the technologies:

Feature          | OpenVZ / LXC (Containers)             | KVM (CoolVDS Standard)
---------------- | ------------------------------------- | --------------------------------------------------
Kernel           | Shared (risk of panic affecting all)  | Dedicated (full isolation)
Performance      | Near native                           | 95-98% of native (negligible diff on modern CPUs)
Security         | Low (process isolation only)          | High (hardware virtualization)
OS choice        | Linux only                            | Linux, BSD, Windows
SELinux support  | Limited / difficult                   | Full support

The Verdict

If you are running a dev sandbox, LXC is fantastic. But if you are handling customer data, credit cards, or sensitive information subject to the Norwegian Personal Data Act, you need a blast radius of zero. That means real isolation at the hypervisor level, not just process separation.

At CoolVDS, we made the architectural decision to build our infrastructure entirely on KVM with high-performance SSD storage. We don't oversell, and we don't force you to share a kernel with a noisy neighbor running a Bitcoin miner. We believe that true privacy—critical for European businesses—starts at the hypervisor level.

Stop gambling with shared kernels. Get a dedicated environment that behaves exactly like a physical server.

Ready to lock it down?

Deploy a fully isolated, KVM-based Linux instance in Oslo today. Experience low latency, true root access, and the peace of mind that comes from knowing your kernel is yours alone.

Launch your KVM Instance on CoolVDS (deployment in as little as 55 seconds) >