The Perimeter is a Lie: Implementing True Zero-Trust Architecture on Linux VPS
Stop me if you’ve heard this one before: "It's okay, that port is only open to the internal LAN," or "We don't need encryption there, it's behind the VPN."
These are the famous last words of a System Administrator about to spend their weekend mitigating a ransomware attack. In May 2020, with teams distributed across Europe and endpoints connecting from god-knows-where, the concept of a "trusted network" is obsolete. The perimeter isn't just porous; it's a hallucination.
I’ve cleaned up enough messes to know that reliance on a single firewall boundary is negligence. If an attacker compromises one developer's laptop, they shouldn't have free rein over your entire database cluster just because they hold a valid VPN ticket. This is where Zero Trust comes in. It’s not a product you buy; it’s a mindset: never trust, always verify.
The "Trusted Network" Fallacy
I recently audited a setup for a client in Oslo using a traditional "Castle and Moat" topology. They had a rock-solid perimeter firewall, but their internal VLANs were a free-for-all. A compromised Jenkins plugin allowed an attacker to execute a reverse shell. From there, they scanned the local /24 subnet, found an unprotected Redis instance (because "it's internal, right?"), and exfiltrated customer data. The firewall never blinked.
Zero Trust dictates that we treat the internal network with the same hostility as the public internet. Every packet, every request, and every user must be authenticated and authorized, regardless of origin.
Step 1: The Death of iptables, The Rise of nftables
With the release of Ubuntu 20.04 LTS last month, it is time to stop clinging to legacy `iptables` scripts. `nftables` provides a unified, programmable interface for packet filtering that is far more performant for the kind of micro-segmentation Zero Trust requires.
We need to lock down traffic between nodes. If Web-Node-A talks to DB-Node-B, it should only happen on port 3306, and strictly between those IPs. Everything else drops.
```
#!/usr/sbin/nft -f

flush ruleset

table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;

        # Allow established/related connections
        ct state established,related accept

        # Loopback interface
        iifname "lo" accept

        # SSH (hardened; a non-standard port is recommended)
        tcp dport 2222 accept

        # ICMP is necessary for path MTU discovery, don't block it blindly.
        # Note: in an inet table, "ip protocol icmp" only matches IPv4,
        # so allow ICMPv6 explicitly as well.
        ip protocol icmp accept
        meta l4proto ipv6-icmp accept

        # Specific rule: Web to DB (micro-segmentation)
        ip saddr 10.10.5.20 tcp dport 3306 accept
    }
    chain forward {
        type filter hook forward priority 0; policy drop;
    }
    chain output {
        type filter hook output priority 0; policy accept;
    }
}
```
This default-drop policy ensures that lateral movement is strictly contained. On CoolVDS NVMe instances, the kernel processing overhead for `nftables` is negligible, ensuring that security doesn't eat into your I/O performance.
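Before applying a default-drop policy, syntax-check it: a typo in a ruleset like this is an excellent way to lock yourself out of your own server. Assuming you saved the ruleset as `/etc/nftables.conf` (the default path on Ubuntu 20.04), the workflow looks like this:

```shell
# Dry-run: parse and validate the ruleset without applying it
sudo nft -c -f /etc/nftables.conf

# Apply it, then make it survive reboots
sudo nft -f /etc/nftables.conf
sudo systemctl enable nftables
```

If you manage the box over SSH, keep a second session open while you apply the rules, so a mistake in the SSH rule doesn't cost you access.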
Step 2: WireGuard – The Encryption Layer
In a Zero Trust model, you cannot trust the physical wire. If you are routing traffic between a VPS in Oslo and a backup server in Bergen, plain HTTP or unencrypted TCP is reckless.
Historically, setting up IPsec was a nightmare of XML config files and obscure errors. OpenVPN is slow and runs in user-space. Enter WireGuard. It was merged into the Linux 5.6 kernel recently (March 2020), making it a first-class citizen. It is fast, modern, and perfectly suited for creating an encrypted mesh network between your servers.
Here is how you set up a secure tunnel between two nodes on Ubuntu 20.04:
```
# Install WireGuard
sudo apt update && sudo apt install wireguard

# Generate keys (umask 077 keeps the private key readable only by root)
umask 077
wg genkey | tee privatekey | wg pubkey > publickey
```
Now, configure `/etc/wireguard/wg0.conf` on your database server to only accept traffic from the web server's WireGuard IP:
```
[Interface]
# Paste the contents of this server's "privatekey" file here
PrivateKey =
Address = 10.0.0.1/24
ListenPort = 51820

[Peer]
# The web server's public key
PublicKey =
AllowedIPs = 10.0.0.2/32
```
Pro Tip: Unlike OpenVPN, WireGuard is connection-less. It fails silently. If your keys don't match, you won't get an error log; the packets just vanish into the void. Always triple-check your Base64 keys.
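Because of that silent-failure behavior, `wg show` is your primary debugging tool once the tunnel is up. Bringing the interface up and checking for a handshake looks like this:

```shell
# Bring up the interface and enable it at boot
sudo wg-quick up wg0
sudo systemctl enable wg-quick@wg0

# Verify: each peer should show a "latest handshake" line.
# No handshake after sending traffic = wrong keys or endpoint.
sudo wg show wg0
```

Note that the handshake only happens on demand: ping the peer's WireGuard IP first, then check `wg show` again.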
Step 3: Identity-Aware Proxies and mTLS
Network segmentation is layer 3/4. We also need layer 7 verification. Just because a packet arrives at Nginx doesn't mean it should be processed. Mutual TLS (mTLS) ensures that the client (your web app) presents a valid certificate to the server (your API).
On the server side, the Nginx configuration looks like this:
```
server {
    listen 443 ssl;
    server_name api.coolvds-internal.no;

    ssl_certificate     /etc/nginx/certs/server.crt;
    ssl_certificate_key /etc/nginx/certs/server.key;

    # Enable mTLS
    ssl_client_certificate /etc/nginx/certs/ca.crt;
    ssl_verify_client on;

    location / {
        proxy_pass http://backend;
    }
}
```
With `ssl_verify_client on;`, Nginx rejects any connection that doesn't present a certificate signed by your internal CA. Attackers can still see the open port, but brute-forcing your API endpoints becomes pointless: without a valid client certificate, they never get past the TLS handshake.
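The client side has to hold up its end of the handshake by presenting its certificate. As a minimal sketch using only Python's standard library (the function name and file paths are mine, not part of any framework), the TLS context for talking to an mTLS-protected API looks like this:

```python
import ssl

def make_mtls_context(ca_file=None, cert_file=None, key_file=None):
    """Build a client-side TLS context for an mTLS-protected API.

    ca_file:   internal CA bundle (so the client also verifies the server)
    cert_file: client certificate signed by that CA
    key_file:  the matching private key
    """
    # Purpose.SERVER_AUTH: we are a client authenticating a server
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile=ca_file)
    if cert_file:
        # Presenting this chain is what satisfies Nginx's
        # `ssl_verify_client on;` on the other side.
        ctx.load_cert_chain(certfile=cert_file, keyfile=key_file)
    return ctx
```

Usage (hypothetical paths): pass the context to `http.client.HTTPSConnection("api.coolvds-internal.no", context=ctx)` or your HTTP library of choice; most of them accept a pre-built `SSLContext`.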
The Hardware Trust Anchor
Software configuration is only half the battle. The underlying infrastructure is the other. In the virtualization world, there is a massive difference between Containers (LXC, OpenVZ) and Virtual Machines (KVM).
Container-based VPS hosting shares the host kernel. If there is a kernel vulnerability (and there are always kernel vulnerabilities), a container breakout can compromise the entire node. For a true Zero-Trust model, you need hard isolation.
This is why at CoolVDS, we strictly use KVM (Kernel-based Virtual Machine) for our instances. Each VPS runs its own isolated kernel. Even if a neighbor on the physical host gets compromised, your memory space and CPU instructions remain isolated. Combined with our Norway-based data centers, this aids compliance with GDPR by ensuring data sovereignty—a critical factor given the current uncertainty regarding US-EU data transfers.
Implementing the Change
Migrating to Zero Trust isn't an overnight switch. Start small:
- Audit your ports: Use `nmap` from inside your network to see what your servers are actually exposing to each other.
- Deploy WireGuard: Start by encrypting the link between your database and your application server.
- Hard Isolation: Move critical workloads off shared-kernel containers and onto dedicated-kernel KVM instances like those we offer.
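For the port audit in the first step, `nmap` gives you the full picture, but checking a single host/port pair from inside your network needs nothing beyond the standard library. A minimal sketch (the helper name and the example IP are mine, for illustration):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds.

    Run this from *inside* your network: under Zero Trust, anything
    unexpectedly reachable here is a finding, not a convenience.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: is that "internal" Redis really reachable from the web tier?
# port_open("10.10.5.30", 6379)
```

Loop it over your subnet and your service ports, and compare the result against what your segmentation rules say should be open; anything extra goes straight onto the remediation list.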
Security is a process, not a destination. But in 2020, relying on the perimeter is a process that leads to disaster.
Ready to harden your infrastructure? Deploy a KVM-based, low-latency instance on CoolVDS today and start building on a foundation you can verify.