Beyond the Bridge: High-Performance Container Networking with Open vSwitch and LXC

If I see one more iptables NAT script holding together a production environment like duct tape on a cracking dam, I might just quit this industry to farm sheep in Lofoten. We are in 2013. The days of manual port mapping for every single service should be behind us, yet I still see "Senior" System Administrators manually configuring Linux bridges (`br0`) for their LXC containers and wondering why network latency spikes when traffic hits 100Mbps.

Here is the hard truth: Linux Containers (LXC) are brilliant for density, but the default networking stack is primitive. If you are scaling a web cluster or a multi-tenant environment, standard Linux bridging is a bottleneck. It is dumb pipe plumbing when you actually need a managed switch.

Today, we are ripping out the legacy bridge and implementing Open vSwitch (OVS). We will look at how to create isolated virtual networks that can actually handle the I/O throughput of modern SSDs, and why you simply cannot do this on budget OpenVZ hosting.

The Problem: Spaghetti Routing and Latency

When you spin up an LXC container on a standard host, it usually sits behind a NAT. The host kernel acts as a router. Every packet has to traverse the host's connection tracking table (`nf_conntrack`). In high-traffic scenarios—like a Magento sale or a media streaming node—the CPU spends an absurd amount of cycles just context-switching to figure out where packets go.
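If you suspect conntrack is the choke point, the counters tell the story. A quick check on the host (assuming the nf_conntrack module is loaded; these are the standard sysctl names on a 3.x kernel):

# Current number of tracked connections vs. the configured ceiling
sysctl net.netfilter.nf_conntrack_count
sysctl net.netfilter.nf_conntrack_max

# Rough breakdown of tracked entries by protocol
awk '{print $3}' /proc/net/nf_conntrack | sort | uniq -c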

I recently audited a setup for a client in Oslo. They were complaining about "slow database queries." It wasn't the database. It was their virtualization layer choking on packet interrupts because they were running 50 containers on a single `brctl` bridge without VLAN tagging. Their softirq usage was through the roof.
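You can spot this pattern yourself in two minutes. Watch the NET_RX row in /proc/softirqs and the %soft column from mpstat (part of the sysstat package); if one core is pinned in softirq while the others idle, the bridge is your bottleneck, not the database:

# Per-CPU softirq counters; NET_RX and NET_TX are the rows to watch
watch -d -n 1 cat /proc/softirqs

# Per-CPU utilisation including %soft (requires sysstat)
mpstat -P ALL 1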

The Solution: Open vSwitch (OVS)

Open vSwitch brings the power of enterprise hardware switches into the Linux kernel. It supports VLANs, QoS (Quality of Service), and GRE tunneling out of the box. This allows us to create complex network topologies without touching a single physical cable.
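As a taste of what that buys you: tagging a container port onto VLAN 10 or building a GRE tunnel to a second hypervisor is a one-liner each. Both commands assume the ovs-br0 switch we build below; the port name and the remote IP are placeholders for your own values.

# Put an existing OVS port on VLAN 10 (access port)
ovs-vsctl set port veth_web01 tag=10

# GRE tunnel to a peer host (replace 10.0.0.2 with the other hypervisor's address)
ovs-vsctl add-port ovs-br0 gre0 -- set interface gre0 type=gre options:remote_ip=10.0.0.2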

Prerequisites

You need a kernel that supports the OVS module. This is where 90% of you will fail. If you are renting a cheap VPS from a provider that uses OpenVZ, stop reading now. You share the kernel with 500 other noisy neighbors. You cannot load custom kernel modules. You cannot modify the network stack deeply.

To follow this guide, you need a KVM (Kernel-based Virtual Machine) environment. KVM gives you a dedicated kernel. At CoolVDS, we use KVM exclusively for this reason. If you want to engineer a network, you need to own the kernel.
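Not sure what you are actually sitting on? A couple of quick checks usually settle it (virt-what is optional, but the DMI product name on a KVM guest normally gives the game away):

# An OpenVZ environment exposes a /proc/vz directory
[ -d /proc/vz ] && echo "OpenVZ detected: no custom kernel modules for you"

# On a KVM guest the DMI product name usually says KVM or QEMU
cat /sys/class/dmi/id/product_name

# If the virt-what package is installed, it answers directly
virt-what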

Step 1: Installing the Tools

Assuming you are on a KVM instance running Debian Wheezy or Ubuntu 12.04 LTS:

# Update your lists
apt-get update

# Install OVS and LXC
apt-get install openvswitch-switch lxc bridge-utils

Verify the module is loaded:

lsmod | grep openvswitch

If that returns nothing, run `modprobe openvswitch`. If that errors out, call your hosting provider and ask why they are restricting your kernel modules.
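With the module loaded, a quick sanity check confirms that the OVS daemons are up and answering:

# Prints the current (empty) switch configuration plus the OVS version
ovs-vsctl show

# The init script ships under the same name as the package on Debian/Ubuntu
service openvswitch-switch status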

Architecture: The Virtual Switch

Instead of a dumb bridge, we create a virtual switch `ovs-br0`. This will act as the master interface for our containers.

# Create the switch
ovs-vsctl add-br ovs-br0

# Add your physical interface (eth0) to the switch
# WARNING: This will drop your SSH connection if you don't script it properly.
# It's best to do this via console or a startup script.

ovs-vsctl add-port ovs-br0 eth0
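The reason it kills your session: once eth0 becomes a switch port it stops answering on its IP, so the address has to move to ovs-br0. Below is a minimal sketch of scripting the whole move in one shot, assuming a static address (the 192.0.2.x values are placeholders for your real IP and gateway). Run it from the provider console, and make it permanent in /etc/network/interfaces afterwards, or a reboot puts you back on plain eth0.

#!/bin/bash
# Move the host's address from eth0 to ovs-br0 in one uninterrupted sequence.
# 192.0.2.10/24 and 192.0.2.1 are placeholders: substitute your real IP and gateway.
ovs-vsctl --may-exist add-br ovs-br0
ovs-vsctl --may-exist add-port ovs-br0 eth0
ip addr flush dev eth0
ip addr add 192.0.2.10/24 dev ovs-br0
ip link set ovs-br0 up
ip route add default via 192.0.2.1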

Now, we need to configure the LXC container to plug into this switch instead of the default bridge. Edit your container config, typically located at /var/lib/lxc/container_name/config.

# Network configuration
lxc.network.type = veth
lxc.network.flags = up
# Do not set lxc.network.link here: LXC of this era attaches the veth to the
# named bridge with the Linux bridge ioctl, which fails against an OVS switch.
# The hook scripts below handle the attachment instead.
lxc.network.hwaddr = 00:16:3e:xx:xx:xx
lxc.network.ipv4 = 192.168.1.50/24
lxc.network.script.up = /etc/lxc/ovs-up
lxc.network.script.down = /etc/lxc/ovs-down

Note the hook scripts. LXC's built-in attachment logic expects a Linux bridge, so we need custom hooks to register the veth pair with OVS ourselves.

The OVS Hook Script

Create /etc/lxc/ovs-up:

#!/bin/bash
# Called by LXC as: <container name> net up veth <host-side veth name>
switch='ovs-br0'
if [ -n "$5" ]; then
    # $5 is the veth interface name on the host side of the pair
    ovs-vsctl add-port "${switch}" "$5"
fi

Make it executable: chmod +x /etc/lxc/ovs-up. This automatically plugs the container into our high-performance switch upon boot.
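The config also references /etc/lxc/ovs-down, so give it a matching teardown hook; otherwise the switch accumulates stale ports every time a container stops. This sketch assumes the down hook receives the same argument list as the up hook, with the host-side veth name in $5:

#!/bin/bash
switch='ovs-br0'
if [ -n "$5" ]; then
    # --if-exists keeps the script quiet if the veth is already gone
    ovs-vsctl --if-exists del-port "${switch}" "$5"
fi

Make that one executable too: chmod +x /etc/lxc/ovs-down.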

QoS: Protecting Your Bandwidth

Here is where OVS destroys standard bridging. Let's say you have a noisy log-aggregator container. You don't want it consuming all your bandwidth to NIX (Norwegian Internet Exchange). You can rate-limit that specific port instantly.

# Limit ingress to 10Mbps (10000kbps)
ovs-vsctl set interface veth_container_id ingress_policing_rate=10000
ovs-vsctl set interface veth_container_id ingress_policing_burst=1000

Pro Tip: Always set a burst buffer. TCP needs a little headroom to scale the window size efficiently. A hard cap without burst results in retransmissions and degraded throughput. We see this often with clients migrating from shared hosting environments where "unlimited bandwidth" is a marketing lie.
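Note the direction: ingress policing limits what the container can push into the switch. To shape what the switch delivers down to the container, hang a linux-htb QoS record on the port instead. A sketch using the same placeholder port name (max-rate here is in bits per second, unlike the kbps policing knobs above):

# Cap traffic towards the container at ~10Mbps with a linux-htb queue
ovs-vsctl -- set port veth_container_id qos=@newqos \
  -- --id=@newqos create qos type=linux-htb other-config:max-rate=10000000 queues:0=@q0 \
  -- --id=@q0 create queue other-config:max-rate=10000000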

Data Privacy & The "Cloud" Trap

We are seeing more US-based "cloud" providers pushing their APIs into Europe. While convenient, you need to be aware of the EU Data Protection Directive (95/46/EC). If you are handling Norwegian user data, you are accountable to Datatilsynet.

When you use proprietary public cloud networking, you often lose visibility on exactly where your packets are routing. By building your own OVS topology on top of a dedicated KVM VPS in Oslo, you ensure that traffic stays local. You maintain the audit trail. You control the routing table.

Storage I/O: The Silent Network Killer

Networking doesn't live in a vacuum. If your disk I/O stalls, applications stop draining their sockets, receive buffers fill up, latency climbs and eventually packets get dropped. It is the same queueing misery behind the "bufferbloat" phenomenon, only triggered from the storage side.

In 2013, running databases on spinning rust (HDD) is negligence. At CoolVDS, we utilize pure SSD storage arrays. When you combine the low latency of SSDs with the efficient packet switching of OVS, the difference is night and day.
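Before you blame the wire, check where the stall actually is. iostat (sysstat again) makes it obvious: high await and %util on the data disk while the NIC sits half idle means the problem is under the hood, not in the network.

# Extended per-device statistics every 2 seconds; watch await (ms) and %util
iostat -x 2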

Metric          | Standard Linux Bridge (brctl) | Open vSwitch (OVS)
----------------|-------------------------------|-----------------------
CPU Overhead    | High (Interrupt Heavy)        | Low (Kernel Optimized)
VLAN Support    | Manual Configuration          | Native / Dynamic
Traffic Shaping | Difficult (tc/iptables)       | Built-in QoS Policies
Isolation       | Layer 3 (IP)                  | Layer 2 (MAC/VLAN)

The Verdict

Containerization is the future of infrastructure, but only if you respect the network layer. Tools like Chef and Puppet can automate the deployment of your containers, but they cannot fix a broken network architecture.

If you are serious about performance, stop relying on the default settings. Switch to KVM, install Open vSwitch, and take control of your packets.

Ready to build a real network? Deploy a high-performance KVM instance with CoolVDS today. We give you the root access and kernel modules you need to architect solutions that actually scale.