Zero-Trust Architecture in 2023: Killing the VPN and Hardening Linux Infrastructure
Stop trusting your local network. It is a lie. If you are still relying on a VPN to grant broad access to your internal subnets, you aren't securing your infrastructure; you are just wrapping a brittle candy shell around a soft, gooey center. In the current threat landscape, where the fallout from supply chain attacks like SolarWinds is still fresh in our collective memory and ransomware gangs target unpatched internal services, the only viable strategy is paranoia by design. We call this Zero-Trust.
I have spent the last decade cleaning up after "secure" VPCs were compromised because a single developer's laptop got hit. The attacker pivoted laterally because port 22 was open to the whole subnet. In 2023, implicit trust is a vulnerability. This guide details how to dismantle the perimeter and enforce identity-based access control at the packet level, focusing on high-performance implementation on KVM-based infrastructure.
The Norwegian Context: Data Sovereignty is Not Optional
For those of us operating out of Oslo or serving EU clients, the legal landscape is as treacherous as the technical one. Since the Schrems II ruling, relying on US-controlled cloud overlays for your security layer is a compliance minefield. Datatilsynet (The Norwegian Data Protection Authority) has been clear: you need to control the encryption keys and the data residency.
This is why "outsourcing" Zero-Trust to a SaaS provider often fails the sovereignty test. Building it yourself on bare-metal or high-performance KVM VPS instances (like those we provision at CoolVDS) ensures that the encryption termination points happen on soil you legally inhabit.
Step 1: Death to Static SSH Keys
Static SSH keys are hard to manage and impossible to expire effectively. If an employee leaves, do you rotate every key on every server? Unlikely. The 2023 standard is SSH Certificates. We use a Certificate Authority (CA) to sign short-lived keys. If a key is stolen, it expires in 60 minutes anyway.
Here is how you configure a Linux host to trust a CA. This requires no external internet access to validate, maintaining robustness even during network segmentation tests.
Configuring the Host (The Server)
First, place your CA's public key on the server. Then, edit /etc/ssh/sshd_config to trust it.
# /etc/ssh/sshd_config
# Disable password auth; if you are brave, also set "AuthorizedKeysFile none" to ban static keys outright
PasswordAuthentication no
PubkeyAuthentication yes
# The Critical Line
TrustedUserCAKeys /etc/ssh/user_ca.pub
# Optional: Enforce Principals (Roles)
AuthorizedPrincipalsFile /etc/ssh/auth_principals/%u
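If you do not already have a signing CA, bootstrapping one is a handful of commands. A minimal sketch follows; the file names and the "admin" principal are illustrative, and the CA private key belongs on a hardened or offline machine, never on the hosts themselves:
# On the signing machine: generate the CA keypair (keep the private half offline)
ssh-keygen -t ed25519 -f user_ca -C "ssh-user-ca"
# On each host: install only the public half where sshd_config expects it
scp user_ca.pub root@host:/etc/ssh/user_ca.pub
# Map the local account to the role(s) allowed to log in as it
mkdir -p /etc/ssh/auth_principals
echo "admin" > /etc/ssh/auth_principals/root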
Signing a User Key (The Client)
Instead of copying a public key to the server, the user sends their public key to your secure signing server (Vault or a secure offline machine). You sign it with an expiration:
ssh-keygen -s /path/to/ca_key -I key_id -n root,admin -V +1h user_key.pub
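The signed certificate lands next to the key as user_key-cert.pub; the user keeps it alongside their private key and the ssh client picks it up automatically. Before handing it back, you can inspect the validity window and principals:
ssh-keygen -L -f user_key-cert.pub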
Pro Tip: On CoolVDS instances, we recommend utilizing the high-speed NVMe I/O to log every single SSH certificate login attempt to a local immutable journal before shipping it to your SIEM. Auditability is half the battle.
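As a rough sketch of that audit trail (the unit name and exact log format vary by distro and OpenSSH version), sshd records the certificate ID and the signing CA on every accepted login, so certificate logins can be pulled straight out of the local journal:
# Certificate logins appear as "Accepted publickey ... ED25519-CERT ... ID <key_id> ... CA ..."
journalctl -u ssh -u sshd --since "1 hour ago" | grep -i "cert"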
Step 2: mTLS for Service-to-Service Communication
If your database listens on a private IP, it should not accept connections just because the source IP looks friendly. IP spoofing is trivial within compromised L2 segments. Mutual TLS (mTLS) ensures that both the client and the server present valid certificates.
While Service Meshes like Istio handle this in Kubernetes, doing it manually on Nginx for standalone VPS clusters is far more lightweight and performant. This setup creates a cryptographic identity for your applications.
server {
    listen 443 ssl http2;
    server_name internal-api.coolvds.local;

    ssl_certificate     /etc/nginx/certs/server.crt;
    ssl_certificate_key /etc/nginx/certs/server.key;

    # Enforce Client Certificates
    ssl_client_certificate /etc/nginx/certs/internal_ca.crt;
    ssl_verify_client on;

    location / {
        proxy_pass http://localhost:8080;
        # Pass the client's subject DN so the backend knows who called
        proxy_set_header X-Client-DN $ssl_client_s_dn;
    }
}
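For completeness, here is a rough sketch of minting the internal CA and a client certificate with plain openssl. The CNs, lifetimes, and file names are illustrative, and the curl check at the end assumes the server certificate was issued by the same internal CA (otherwise point --cacert at whatever CA signed it):
# Create the internal CA (keep the key off the web nodes)
openssl genrsa -out internal_ca.key 4096
openssl req -x509 -new -key internal_ca.key -days 365 -subj "/CN=internal-ca" -out internal_ca.crt
# Issue a short-lived client certificate for a calling service
openssl genrsa -out client.key 2048
openssl req -new -key client.key -subj "/CN=billing-service" -out client.csr
openssl x509 -req -in client.csr -CA internal_ca.crt -CAkey internal_ca.key -CAcreateserial -days 90 -out client.crt
# Smoke test: without --cert/--key nginx rejects the request; with them it reaches the backend
curl --cacert internal_ca.crt --cert client.crt --key client.key https://internal-api.coolvds.local/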
Step 3: WireGuard as the Overlay Network
IPsec is bloated. OpenVPN is slow. In late 2023, WireGuard is the undisputed king of encrypted overlays. It lives in the Linux kernel (5.6+), meaning context switching is minimal. This is crucial for latency-sensitive applications.
We use WireGuard to create a mesh where every node can talk to every other node securely, regardless of the underlying physical network. Unlike traditional VPNs, WireGuard uses cryptokey routing—packets are only accepted if they come from a peer with a known public key.
| Feature | OpenVPN | WireGuard |
|---|---|---|
| Codebase Size | 100,000+ lines | ~4,000 lines (Auditable) |
| Handshake | Multiple round trips (TLS) | 1-RTT (Noise protocol) |
| Architecture | Userspace | Kernel Space (High Performance) |
Here is a standard configuration for a peer node. Note the simplicity:
# /etc/wireguard/wg0.conf
[Interface]
Address = 10.100.0.2/24
PrivateKey = <this-node-private-key>
ListenPort = 51820
[Peer]
# The Gateway / Hub Peer
PublicKey = <gateway-public-key>
Endpoint = 185.xxx.xxx.xxx:51820
AllowedIPs = 10.100.0.0/24
PersistentKeepalive = 25
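Generating the keys and bringing the interface up is equally terse (this assumes wireguard-tools is installed and the config above lives at /etc/wireguard/wg0.conf):
# Generate this node's keypair; paste the public key into the hub's [Peer] section
wg genkey | tee /etc/wireguard/privatekey | wg pubkey > /etc/wireguard/publickey
# Bring the tunnel up and confirm the handshake completes
wg-quick up wg0
wg show wg0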
This setup works flawlessly on CoolVDS because we provide KVM virtualization. Many budget providers use LXC or OpenVZ containers, which share the host's kernel and frequently lack the WireGuard module, so you cannot run it natively. If you want kernel-level security performance, you need a proper hypervisor.
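A quick sanity check before you commit to a provider or an image: confirm the kernel actually ships the module.
# Present on mainline kernels 5.6+; absent on most container-based "VPS" offerings
modinfo wireguard | head -n 3
sudo modprobe wireguard && lsmod | grep wireguard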
Step 4: Micro-Segmentation with nftables
iptables is being phased out in favor of nftables. It provides a more unified and faster packet classification framework. In a Zero-Trust environment, the default policy is DROP. You explicitly allow only what is necessary.
A simple, hardened nftables ruleset for a web node:
flush ruleset

table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;

        # Allow localhost
        iifname "lo" accept

        # Allow established/related connections (also covers ICMP errors needed for path MTU discovery)
        ct state established,related accept

        # WireGuard endpoint (needed when peers initiate connections toward this node)
        udp dport 51820 accept

        # Allow SSH only over the WireGuard interface
        iifname "wg0" tcp dport 22 accept

        # Public web traffic
        tcp dport { 80, 443 } accept

        # Allow ping, rate-limited
        ip protocol icmp icmp type echo-request limit rate 1/second accept

        # ICMPv6 is mandatory on IPv6 hosts (neighbour discovery); drop it and IPv6 breaks
        icmpv6 type { nd-neighbor-solicit, nd-neighbor-advert, echo-request } accept
    }

    chain forward {
        type filter hook forward priority 0; policy drop;
    }

    chain output {
        type filter hook output priority 0; policy accept;
    }
}
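Rulesets like this are applied atomically, so a syntax check followed by a reload cannot leave you half-firewalled. Persisting it across reboots depends on the distro; a minimal sketch, assuming the ruleset lives in /etc/nftables.conf and the stock nftables service unit is available:
nft -c -f /etc/nftables.conf     # dry-run: parse and validate without loading
nft -f /etc/nftables.conf        # apply the ruleset atomically
systemctl enable --now nftables  # reload it at boot (Debian/RHEL-style units)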
Infrastructure Matters: The CoolVDS Reality
Implementing this stack requires more than just software; it requires a host that respects the boundaries of your system. Zero-Trust relies on entropy, encryption speed, and network stability.
When you run mTLS and WireGuard simultaneously, you are encrypting packets twice: once for TLS at the application layer and again inside the WireGuard tunnel at the network layer. That taxes the CPU. On shared platforms where "vCPUs" are heavily oversold, your latency spikes will be unpredictable. CoolVDS instances are tuned for high-performance computing. We don't steal cycles. When your nginx worker needs to perform a TLS handshake, the CPU is there instantly. For businesses in Norway, the combination of low-latency local connectivity and compliant, robust hardware makes this the reference architecture for 2023.
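If you want numbers rather than marketing claims, two quick checks tell you whether a host can keep up; figures vary wildly between oversold and dedicated vCPUs:
openssl speed -evp aes-256-gcm        # bulk TLS record throughput on this vCPU
openssl speed -evp chacha20-poly1305  # WireGuard's cipher of choice
vmstat 1 5                            # watch the "st" column: CPU steal from noisy neighbours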
Zero-Trust is not a product you buy; it is a discipline you practice. Start by rotating your keys, closing your ports, and migrating to infrastructure that supports the tools you need.
Ready to harden your perimeter? Deploy a KVM instance on CoolVDS today and get full kernel control for your WireGuard mesh.