The Castle Is Dead: Why Your VPN Concentrator Is a Liability
If the last 14 months have taught us anything, it is that the traditional "Castle and Moat" network architecture is obsolete. We spent decades building firewalls around our offices, only to send everyone home in 2020. Now, you have developers SSH-ing into production from insecure home Wi-Fi networks, and your VPN concentrator is a single point of failure (SPOF) choking on latency.
I recently audited a setup for a logistics firm in Oslo. They were routing all traffic—even SaaS requests—back through a physical appliance in Fornebu just to apply security policies. The latency was destroying their agility, and the cost of the MPLS circuits was bleeding them dry. The solution wasn't a bigger firewall; it was Zero Trust.
Zero Trust isn't a product you buy from a vendor slide deck. It is an architectural mindset: Never trust, always verify. Every request, whether it comes from the open internet or the server next to it in the rack, must be authenticated, authorized, and encrypted.
In this guide, I will show you how to build the pillars of a Zero Trust architecture using tools available in May 2021: WireGuard for transport, Nginx for mTLS, and SSH Certificate Authorities for access. We will deploy this on CoolVDS infrastructure to ensure we have the raw I/O and KVM isolation necessary to handle the encryption overhead without latency penalties.
1. The Transport Layer: Replacing IPsec with WireGuard
Old-school VPNs (IPsec/IKEv2) are bloated, slow to handshake, and notoriously difficult to configure correctly. In the Linux 5.6 kernel released last year, WireGuard became mainline. It is lean (under 4,000 lines of code), cryptographically opinionated (Noise protocol framework), and offers roaming capability by design.
Instead of a hub-and-spoke model where a central VPN server bottlenecks everything, we use WireGuard to create a mesh. Each server talks directly to the other peers it needs to.
Here is a production-ready WireGuard configuration for a CoolVDS instance acting as a secure gateway. Note the MTU adjustments; on virtualized networks, avoiding fragmentation is critical for performance.
# /etc/wireguard/wg0.conf
[Interface]
Address = 10.100.0.1/24
ListenPort = 51820
PrivateKey = <Server_Private_Key>
# Optimize MTU for VPS tunneling overhead
MTU = 1360
PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
[Peer]
# Remote Developer Laptop
PublicKey = <Client_Public_Key>
AllowedIPs = 10.100.0.2/32
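The MASQUERADE rule in PostUp only works if the kernel forwards packets between wg0 and eth0. If forwarding is not already enabled, turn it on persistently (shown here assuming a standard sysctl.d layout):
# Enable IPv4 forwarding so the gateway can NAT traffic for its peers
echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/99-wireguard.conf
sysctl --system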
Enable the interface immediately:
systemctl enable --now wg-quick@wg0
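On the developer's laptop, the peer configuration is a mirror image of the server's. This is a minimal sketch; the keys and the endpoint address are placeholders you replace with your own values:
# /etc/wireguard/wg0.conf (client side)
[Interface]
Address = 10.100.0.2/32
PrivateKey = <Client_Private_Key>
MTU = 1360

[Peer]
# The CoolVDS gateway
PublicKey = <Server_Public_Key>
Endpoint = <Gateway_Public_IP>:51820
# Route only the internal range through the tunnel
AllowedIPs = 10.100.0.0/24
# Keep the NAT mapping alive for laptops behind home routers
PersistentKeepalive = 25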
Why run this on CoolVDS? Because WireGuard lives in the kernel space. Many container-based hosting providers (LXC/OpenVZ) share the kernel with the host, making it impossible or dangerous to load custom kernel modules. CoolVDS provides true KVM virtualization, giving you your own kernel and the freedom to use modern networking stacks without asking for permission.
2. Identity-Aware Proxying with Nginx and mTLS
In a Zero Trust model, IP allow-listing is insufficient. IPs change. Identity does not. We need to move authentication to the application layer. Mutual TLS (mTLS) ensures that the server validates the client's certificate before serving a single byte of application data.
This offloads authentication from your backend application. Your app doesn't need to know how to authenticate a user; it just needs to know that Nginx has already done it. This is highly effective for protecting internal dashboards (Kibana, Prometheus, Adminer).
First, generate a private key and certificate signing request (CSR) for the client:
openssl req -new -newkey rsa:4096 -keyout client.key -out client.csr -nodes
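That CSR still has to be signed by your internal CA. If you do not have one yet, a minimal sketch looks like this; the filenames (ca.key, internal_ca.crt, client.crt) and validity periods are placeholders chosen to line up with the Nginx config below:
# One-off: create the internal CA (keep ca.key offline!)
openssl req -x509 -new -newkey rsa:4096 -keyout ca.key -out internal_ca.crt -days 1825 -nodes -subj "/CN=Internal CA"
# Sign the client's CSR, producing the certificate the client will present
openssl x509 -req -in client.csr -CA internal_ca.crt -CAkey ca.key -CAcreateserial -out client.crt -days 90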
Then, configure Nginx to require a valid certificate signed by your internal CA. This configuration also enables HTTP/2, which multiplexes requests over a single connection and cuts down on repeated TCP and TLS handshakes.
# /etc/nginx/conf.d/secure_gateway.conf
server {
    listen 443 ssl http2;
    server_name internal.your-org.no;

    ssl_certificate     /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;

    # mTLS Configuration
    ssl_client_certificate /etc/nginx/ssl/internal_ca.crt;
    ssl_verify_client on;

    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header X-Real-IP $remote_addr;
        # Pass the common name (identity) to the backend
        proxy_set_header X-Client-DN $ssl_client_s_dn;
    }
}
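With `ssl_verify_client on`, Nginx rejects any connection that does not present a certificate signed by your CA before the request ever reaches the backend. A quick smoke test with curl, assuming the client.crt and client.key generated above:
# Presenting the signed certificate: the proxied app responds normally
curl --cert client.crt --key client.key https://internal.your-org.no/
# Without a certificate: Nginx answers "400 No required SSL certificate was sent"
curl https://internal.your-org.no/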
Pro Tip: TLS handshakes are CPU-bound rather than I/O-bound, so even on a CoolVDS instance with NVMe storage, ensure you set `ssl_session_cache shared:SSL:10m;` to avoid repeated handshakes killing your CPU during traffic spikes.
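For reference, the cache directives live at the http (or server) level; a minimal sketch with a matching session timeout:
# /etc/nginx/nginx.conf (inside the http block)
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 1h;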
3. Killing Static SSH Keys: The Certificate Authority Approach
Managing static `authorized_keys` files across 50 servers is a nightmare. It creates "key sprawl." When an engineer leaves the company, you have to scrub every server. If you are still doing this in 2021, stop.
OpenSSH has supported certificate authorities since version 5.4, released back in 2010. You sign a user's public key with a validity period (TTL). When the TTL expires, access is revoked automatically. No cleanup required.
Step 1: Generate a CA Key (Keep this offline/secure!)
ssh-keygen -t ed25519 -f ssh_ca -C "ca@your-org.no"
Step 2: Sign a User's Key (Valid for 8 hours only)
ssh-keygen -s ssh_ca -I user_ident -n root,dev -V +8h user_key.pub
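Signing produces user_key-cert.pub alongside the original public key. Before handing it out, you can inspect the identity, principals, and validity window:
# Print the certificate's key ID, principals and validity
ssh-keygen -L -f user_key-cert.pub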
Step 3: Configure the Server (The CoolVDS Instance)
Edit your sshd config to trust the CA key.
# /etc/ssh/sshd_config
# Trust the CA public key
TrustedUserCAKeys /etc/ssh/ssh_ca.pub
# Revocation list (optional but recommended)
RevokedKeys /etc/ssh/revoked_keys
# Strictly disallow password auth
PasswordAuthentication no
ChallengeResponseAuthentication no
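When you need to pull someone's access before the TTL runs out, generate a key revocation list (KRL) with the same tool and let the RevokedKeys directive above pick it up. A sketch, assuming you still have the signed certificate:
# Revoke a specific certificate ahead of its expiry
ssh-keygen -k -f /etc/ssh/revoked_keys user_key-cert.pub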
Reload the daemon:
systemctl reload sshd
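On the engineer's side there is nothing extra to configure: OpenSSH automatically loads user_key-cert.pub when it sits next to the private key. The gateway address below is a placeholder:
# The matching certificate (user_key-cert.pub) is picked up automatically
ssh -i user_key dev@10.100.0.1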
Now, your team accesses servers using ephemeral credentials. This satisfies strict compliance requirements often found in GDPR audits by Datatilsynet, as you have a clear audit trail of certificate issuance.