The Perimeter is Dead: Implementing Zero-Trust in 2019
Stop pretending your LAN is safe. The moment you assume traffic inside your private network is trusted, you have already lost. In the traditional "castle and moat" strategy, we spent years building massive firewalls at the edge, only to let anyone who slipped past the gate roam freely between the database and the app server. It’s a disaster waiting to happen.
I recently audited a setup for a mid-sized Norwegian fintech company. They had a fortress of a perimeter firewall, but their internal database port 3306 was open to the entire subnet. One compromised web shell on a frontend server, and the attacker had unrestricted access to the customer SQL database. They didn't even need a password exploit; the internal trust policy allowed the connection.
This is why we need Zero-Trust. Google’s BeyondCorp paper started this conversation years ago, but in 2019, it is no longer optional—especially here in Europe where GDPR (and Datatilsynet) will hammer you for negligence.
1. The Foundation: Isolation is Non-Negotiable
Zero-trust starts at the metal. If you are running on shared hosting or container-based virtualization where the kernel is shared (like older OpenVZ setups), you are exposed to kernel-level exploits and cross-tenant side-channel attacks. You cannot build a secure house on a swamp.
Pro Tip: Always insist on KVM (Kernel-based Virtual Machine) virtualization. This provides hardware-assisted isolation. At CoolVDS, we enforce KVM by default because we refuse to gamble with shared kernels. If one tenant crashes, your instance stays up. True isolation is the first step of Zero-Trust.
2. Hardening SSH: Keys are Not Enough
Passwords are dead. If you are still allowing password authentication via SSH, fix it immediately. But for Zero-Trust, we go further. We don't just want encryption; we want to limit who can even attempt the handshake.
Edit your /etc/ssh/sshd_config. We are moving to Ed25519 keys—they are smaller, faster, and more secure than RSA-2048.
# /etc/ssh/sshd_config
# Drop dead connections: probe every 300 seconds, disconnect after
# two missed keepalives (idle-but-responsive sessions stay up)
ClientAliveInterval 300
ClientAliveCountMax 2
# Ban root login utterly
PermitRootLogin no
# Disable password auth
PasswordAuthentication no
ChallengeResponseAuthentication no
# Whitelist specific users
AllowUsers deploy_admin
# Restrict algorithms (Legacy removal)
KexAlgorithms curve25519-sha256@libssh.org
Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com
MACs hmac-sha2-512-etm@openssh.com
After applying this, validate the file with sshd -t, then reload the daemon (systemctl reload sshd). Keep your current session open and confirm a fresh login works before you disconnect. If you do lock yourself out, you had better hope your provider has a working out-of-band console (CoolVDS offers an HTML5 console for exactly this nightmare scenario).
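On the client side, the Ed25519 keys this config expects are generated with ssh-keygen. A minimal sketch; the key path and comment are illustrative, and the empty passphrase (-N '') is for demonstration only:

```shell
# Generate an Ed25519 keypair (run on your workstation, not the server).
# -a 100 raises the KDF rounds protecting the key file on disk.
# -N '' means no passphrase -- use a real passphrase in practice.
key_dir=$(mktemp -d)
ssh-keygen -t ed25519 -a 100 -N '' -C "deploy_admin" -f "$key_dir/id_ed25519"
cat "$key_dir/id_ed25519.pub"   # the public key you append to authorized_keys
```

Append the .pub line to ~/.ssh/authorized_keys for the deploy_admin account before you disable password authentication.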
3. Mutual TLS (mTLS): Authenticating Services, Not Just Users
Here is the core of Zero-Trust: Services must authenticate each other. Your web server shouldn't talk to your backend API just because it's on the same IP range. It should talk because it holds a cryptographic certificate signed by your internal CA.
We can implement this using Nginx (v1.14+). This ensures that even if an attacker gets inside your network, they cannot query your API without the client certificate.
Step A: Generate the Certificates
# Create a private CA key
openssl genrsa -des3 -out ca.key 4096
openssl req -new -x509 -days 365 -key ca.key -out ca.crt
# Create the Server Key and CSR
openssl genrsa -out server.key 4096
openssl req -new -key server.key -out server.csr
# Sign the Server CSR with the CA
openssl x509 -req -days 365 -in server.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out server.crt
# Create the Client Key (for the connecting service)
openssl genrsa -out client.key 4096
openssl req -new -key client.key -out client.csr
openssl x509 -req -days 365 -in client.csr -CA ca.crt -CAkey ca.key -set_serial 02 -out client.crt
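Before wiring the certificates into Nginx, it is worth sanity-checking that the chain actually verifies. The sketch below signs a throwaway client certificate against a throwaway CA; it uses -subj and -nodes so it runs without interactive prompts, and the names and key sizes are illustrative:

```shell
set -e
dir=$(mktemp -d); cd "$dir"
# Throwaway self-signed CA, valid 1 day
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -days 1 -subj "/CN=Internal Test CA"
# Client key + CSR, then sign the CSR with the CA
openssl req -newkey rsa:2048 -nodes -keyout client.key -out client.csr \
  -subj "/CN=frontend-service"
openssl x509 -req -days 1 -in client.csr -CA ca.crt -CAkey ca.key \
  -set_serial 02 -out client.crt
# Verify the chain -- prints "client.crt: OK"
openssl verify -CAfile ca.crt client.crt
```

If verify does not print OK here, Nginx will reject the client later with a far less helpful error, so catch it now.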
Step B: Configure Nginx to Verify Clients
In your Nginx config, we enable ssl_verify_client. Nginx then rejects (with HTTP 400) any request that does not present a valid certificate signed by your CA.
server {
    listen 443 ssl http2;
    server_name api.internal.coolvds.net;

    ssl_certificate     /etc/nginx/certs/server.crt;
    ssl_certificate_key /etc/nginx/certs/server.key;

    # The Magic of Zero Trust
    ssl_client_certificate /etc/nginx/certs/ca.crt;
    ssl_verify_client on;

    location / {
        proxy_pass http://localhost:8080;
        # Pass SSL info to backend for auditing
        proxy_set_header X-Client-DN $ssl_client_s_dn;
    }
}
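To confirm the policy holds, probe the endpoint with and without the client certificate. The hostname and certificate paths are illustrative; run this from another host on the private network:

```shell
# No client cert: the TLS handshake completes, but nginx answers
# "400 Bad Request (No required SSL certificate was sent)"
curl -k https://api.internal.coolvds.net/

# With the client cert and key: the request reaches the backend
curl --cacert ca.crt --cert client.crt --key client.key \
  https://api.internal.coolvds.net/
```

The first probe failing is the success condition: it proves that network reachability alone no longer grants access.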
4. Network Micro-Segmentation with iptables
Security groups are nice, but OS-level firewalls are better. You want a default-drop policy. In a Zero-Trust model, we explicitly whitelist flows. If you are running a database, only the specific private IP of the web server should be allowed. Everything else is dropped.
# NOTE: run this from a console session (or load the full ruleset
# atomically with iptables-restore). Applying rules one by one over
# SSH will freeze your session the moment the DROP policy lands.
# Flush existing rules
iptables -F
# Set default chain policies to DROP
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT ACCEPT
# Allow loopback
iptables -A INPUT -i lo -j ACCEPT
# Allow established connections (keep your SSH alive!)
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# Allow SSH from specific Management IP only
iptables -A INPUT -p tcp -s 89.xxx.xxx.xxx --dport 22 -j ACCEPT
# Allow Web Traffic
iptables -A INPUT -p tcp --dport 80 -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j ACCEPT
# Allow MySQL only from the web server's private IP
# (10.0.0.5 is illustrative -- substitute your frontend's address)
iptables -A INPUT -p tcp -s 10.0.0.5 --dport 3306 -j ACCEPT
# Log dropped packets (Crucial for auditing)
iptables -A INPUT -m limit --limit 5/min -j LOG --log-prefix "IPTables-Dropped: "
Install iptables-persistent so the rules survive a reboot (on Debian/Ubuntu, netfilter-persistent save writes the current ruleset out). Without this, a reboot silently wipes your security posture.
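A safer way to apply and persist this policy is the iptables-save format, which iptables-restore loads atomically (no window where the DROP policy exists without the ACCEPT rules) and which iptables-persistent reads from /etc/iptables/rules.v4 at boot. The core rules above, as a sketch:

```text
# /etc/iptables/rules.v4 -- loaded atomically by iptables-restore
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p tcp -s 89.xxx.xxx.xxx --dport 22 -j ACCEPT
-A INPUT -p tcp --dport 80 -j ACCEPT
-A INPUT -p tcp --dport 443 -j ACCEPT
-A INPUT -m limit --limit 5/min -j LOG --log-prefix "IPTables-Dropped: "
COMMIT
```

Load it with iptables-restore < /etc/iptables/rules.v4; the whole ruleset takes effect in one step.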
5. The Hardware Reality
Software controls add overhead. Encrypting every packet inside the datacenter (mTLS) and inspecting every header consumes CPU cycles. This is where the underlying hardware matters.
| Feature | Standard HDD VPS | CoolVDS NVMe |
|---|---|---|
| I/O Wait | High (Bottlenecks logging) | Near Zero (NVMe handles audit logs instantly) |
| Encryption Speed | Variable (Noisy neighbors) | Consistent (Dedicated CPU cycles) |
| Latency (Oslo) | 15-30ms | <2ms (Local peering) |
When you enable full logging for GDPR compliance and rigorous packet inspection, standard HDDs choke. I’ve seen audit logs saturate disk I/O, causing the application to hang. On CoolVDS, the NVMe arrays eat these write operations for breakfast, ensuring your security doesn't kill your performance.
Conclusion: Trust Code, Not Networks
The days of assuming the internal network is safe are over. Between GDPR requirements and the rising sophistication of botnets in 2019, you must verify every packet.
Start small. Migrate your most critical internal API to use mTLS. Harden your SSH configs today. And ensure your infrastructure is built on KVM with fast storage to handle the encryption overhead.
Ready to lock it down? Deploy a high-performance KVM instance on CoolVDS in Oslo. You bring the iptables rules; we bring the stability.