Kubernetes Networking in 2021: Surviving the Packet Jungle and Schrems II

Let’s be honest: 90% of the time, it’s DNS. The other 10% is usually a misconfigured CNI plugin or a firewall rule you forgot about three months ago. Kubernetes networking is a beast. It abstracts away so much complexity that when something breaks, it breaks spectacularly. I’ve spent more nights than I care to admit staring at tcpdump output, trying to figure out why Service A can't talk to Service B even though the YAML looks perfect.

With Kubernetes 1.20 freshly released, and the sheer panic over the dockershim deprecation, we have enough on our plates. But if you ignore your networking stack, you are building on quicksand. In this deep dive, we are cutting through the marketing fluff. We’re talking about real performance, CNI choices, and why the recent Schrems II ruling makes your choice of hosting provider in Norway more critical than ever.

The CNI Battlefield: Flannel, Calico, or Cilium?

In 2021, sticking with the default networking provider is often a mistake. I recently audited a cluster for a fintech client in Oslo where latency was spiking unpredictably. They were using a basic VXLAN overlay on a provider with noisy neighbors. The CPU overhead for encapsulation was killing their throughput.

Here is the reality of your choices right now:

  • Flannel: Simple. Great for homelabs. Don't run it in production if you care about security or advanced policies.
  • Calico: The industry standard. It uses BGP (Border Gateway Protocol) to route packets without encapsulation if your network supports it. It’s battle-tested.
  • Cilium: The new hotness using eBPF. Since version 1.9 dropped late last year, the performance gains by bypassing iptables are hard to ignore.

Pro Tip: If you are deploying on bare metal or a high-performance VPS like CoolVDS, try running Calico in pure Layer 3 mode. The performance difference compared to IPIP encapsulation is noticeable, especially for high-throughput applications.
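To sketch what "pure Layer 3 mode" looks like in practice, here is an IPPool resource with encapsulation switched off entirely. This assumes Calico v3.x managed via calicoctl, and that your underlay network can actually route pod CIDRs between nodes (true for most flat VPS networks, not for all clouds):

```yaml
# IPPool with IP-in-IP and VXLAN disabled: packets are routed natively via BGP.
# Only works if the network between your nodes routes the pod CIDR.
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4-ippool
spec:
  cidr: 10.244.0.0/16
  ipipMode: Never      # set to CrossSubnet if only same-subnet nodes can route natively
  vxlanMode: Never
  natOutgoing: true
  nodeSelector: all()
```

Apply it with calicoctl apply -f pool.yaml. The CrossSubnet mode is a useful middle ground: native routing within a subnet, encapsulation only across subnet boundaries.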

Deploying Calico with a Custom Pod CIDR

Don't just blindly apply the manifest. Ensure your Pod CIDR doesn't overlap with your host network.

# Download the manifest
curl https://docs.projectcalico.org/v3.17/manifests/calico.yaml -O

# Edit the CALICO_IPV4POOL_CIDR variable to match your cluster init
# - name: CALICO_IPV4POOL_CIDR
#   value: "10.244.0.0/16"

kubectl apply -f calico.yaml
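Before you schedule real workloads, confirm the CNI actually came up. A quick sanity check, assuming the stock v3.17 manifest (which installs a DaemonSet named calico-node in kube-system):

```shell
# Wait for calico-node to roll out on every node
kubectl -n kube-system rollout status daemonset/calico-node

# Nodes should flip to Ready once the CNI is wired up,
# and new pods should get addresses from your configured pool
kubectl get nodes -o wide
kubectl get pods -A -o wide | grep calico
```

If nodes stay NotReady, check the calico-node pod logs first; a CIDR overlap with the host network is the most common culprit.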

Optimizing NGINX Ingress for High Load

Most people install the NGINX Ingress Controller and walk away. Then they wonder why they get 502s under load. The defaults are conservative. If you are running a high-traffic e-commerce site targeting the Nordic market, you need to tune the kernel and the controller.

We need to modify the ConfigMap to handle higher concurrency. Specifically, we want to tweak keepalives and buffer sizes.

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
data:
  keep-alive: "75"
  keep-alive-requests: "1000"
  worker-processes: "auto"
  max-worker-open-files: "65535"
  upstream-keepalive-connections: "100"
  compute-full-forwarded-for: "true"

Without upstream-keepalive-connections, NGINX opens a new connection to your backend pods for every single request. That is a massive waste of TCP handshakes and adds latency. On a platform like CoolVDS, where we give you NVMe storage and dedicated CPU cycles, you want the software to be as fast as the hardware.
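After editing the ConfigMap, the controller reloads NGINX automatically, but trust nothing you haven't verified. A sketch of checking the rendered config, assuming the controller runs as a Deployment named nginx-ingress-controller in the ingress-nginx namespace (names vary by install method, so adjust to yours):

```shell
# Dump the live nginx.conf from the controller and confirm the tuned
# keepalive and worker settings actually made it into the render
kubectl -n ingress-nginx exec deploy/nginx-ingress-controller -- \
  nginx -T 2>/dev/null | grep -E 'keepalive|worker_processes'
```

If your values don't show up, check the controller logs for ConfigMap parse warnings; a single malformed key silently falls back to the default.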

The "Schrems II" Elephant in the Room

Since the CJEU struck down the Privacy Shield last July (Schrems II), moving personal data to US-controlled clouds (AWS, GCP, Azure) has become a legal minefield for Norwegian companies. The Datatilsynet (Norwegian Data Protection Authority) is watching.

If you are architecting a Kubernetes cluster for a European client today, data sovereignty isn't a feature; it's a requirement. This is where the infrastructure layer matters.

Latency & Legal Compliance:
Running your K8s nodes on CoolVDS in our Oslo datacenter solves two problems:

  1. Compliance: Your data stays in Norway/EEA, under strict privacy laws, shielded from the US CLOUD Act.
  2. Speed: If your users are in Scandinavia, why route packets through Frankfurt or Dublin? Direct peering at NIX (Norwegian Internet Exchange) drops latency to single-digit milliseconds.

Kernel Tuning: The Forgotten Layer

Kubernetes nodes are just Linux servers. If your sysctl settings are default, your fancy container orchestration is being throttled by the kernel. I've seen connection timeouts occur simply because the node ran out of ephemeral ports.

Here is a snippet of a sysctl tuning init container I use for high-performance nodes. This increases the ephemeral port range and lets outbound connections reuse sockets stuck in TIME_WAIT.

securityContext:
  privileged: true
command:
- /bin/sh
- -c
- |
  sysctl -w net.core.somaxconn=65535
  sysctl -w net.ipv4.ip_local_port_range="1024 65535"
  sysctl -w net.ipv4.tcp_tw_reuse=1
  sysctl -w fs.file-max=2097152
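Since that snippet is only the container spec, here is a minimal DaemonSet wrapper so every node in the cluster gets tuned automatically. The image choices and namespace are placeholders you would adapt:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: sysctl-tuner
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: sysctl-tuner
  template:
    metadata:
      labels:
        app: sysctl-tuner
    spec:
      hostNetwork: true
      initContainers:
      - name: tune-sysctls
        image: busybox:1.32        # any image with /bin/sh works
        securityContext:
          privileged: true         # required to write host sysctls
        command:
        - /bin/sh
        - -c
        - |
          sysctl -w net.core.somaxconn=65535
          sysctl -w net.ipv4.ip_local_port_range="1024 65535"
          sysctl -w net.ipv4.tcp_tw_reuse=1
          sysctl -w fs.file-max=2097152
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.2   # keeps the pod alive after tuning
```

The init container does the work, and the pause container just keeps the pod resident so the DaemonSet stays healthy. Re-tuning happens automatically whenever a node is rebooted or replaced.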

Security: Network Policies are Mandatory

By default, Kubernetes allows all pods to talk to all other pods. In a multi-tenant environment, this is terrifying. If one pod gets compromised, the attacker can scan your entire internal network.

Adopting a "Zero Trust" model starts with a default-deny policy. You force developers to explicitly whitelist traffic.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress

Once you apply this, everything stops working. Good. Now, open only what you need. For example, allowing the frontend to talk to the backend on port 8080:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
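One gotcha with default-deny on Egress: pods can no longer resolve DNS, so even correctly whitelisted traffic fails by name. A hedged example allowing lookups to kube-dns; the namespace label shown here is only set automatically on newer clusters, so on older ones label kube-system yourself or match on a label you control:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system   # auto-set on newer clusters; add manually on older ones
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
```

Don't forget TCP 53: large responses and zone transfers fall back to TCP, and a UDP-only rule produces maddeningly intermittent failures.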

Infrastructure Matters

You can have the best YAML in the world, but if your underlying VPS has high I/O wait or "noisy neighbor" CPU stealing, your Kubernetes networking performance will suffer. Network packet processing is CPU intensive.

At CoolVDS, we don’t oversubscribe our cores. When you spin up a node, you get the raw power you need to push packets at line speed. Combined with local Norwegian hosting, you are looking at the sweet spot of low latency, GDPR compliance, and raw throughput.

Don't let network latency kill your application's reputation. Verify your conntrack tables, enforce your Network Policies, and host where your users are.
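Verifying conntrack is a thirty-second job on the node itself. A quick check, assuming the nf_conntrack module is loaded (it is on any node running kube-proxy in iptables mode):

```shell
# Compare tracked connections against the table limit;
# when count approaches max, new connections are dropped silently
cat /proc/sys/net/netfilter/nf_conntrack_count
cat /proc/sys/net/netfilter/nf_conntrack_max
```

If the count sits near the max under load, raise net.netfilter.nf_conntrack_max in the same sysctl init container shown earlier.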

Ready to test your cluster's true potential? Deploy a high-performance KVM instance on CoolVDS today and see the difference dedicated resources make.