Kill Your VPN: Implementing Zero-Trust Security on Linux Infrastructure (2019 Edition)

Let’s be honest: the "castle and moat" security model is a relic. If you are still relying on a single perimeter firewall or a clunky OpenVPN server to protect your backend infrastructure, you are one phished credential away from a data breach notification letter to the Datatilsynet (Norwegian Data Protection Authority).

I’ve spent the last decade cleaning up after breaches where a developer’s laptop was compromised, and because the VPN gave them flat access to the "trusted" network, the attacker had free rein over the production database. The concept of a "trusted internal network" is a lie. In 2019, inside the data center is just as dangerous as the public internet.

This is where Zero Trust comes in. It’s not a product you buy from a vendor; it’s an architectural mindset: Never Trust, Always Verify. Every request, packet, and SSH connection must be authenticated and authorized, regardless of its origin. Here is how we build this architecture on Linux, specifically tailored for the high-compliance environment we face here in Norway and Europe.

The Three Pillars of Linux Zero Trust

Forget marketing buzzwords. For a System Administrator or DevOps engineer, Zero Trust boils down to three technical implementations:

  1. Micro-segmentation: Host-level firewalls (no flat networks).
  2. Mutual TLS (mTLS): Service-to-service identity verification.
  3. Ephemeral Access: SSH Certificate Authorities (no static keys).

1. Micro-segmentation with IPTables

Many hosting providers drop you into a shared VLAN where you can see your neighbors' ARP traffic. That is unacceptable. At CoolVDS, we isolate instances at the KVM layer, but you must still secure the OS. We don't rely on cloud security groups alone; we use iptables (or the newer nftables framework) to ensure the server protects itself.

A Zero Trust policy defaults to DROP. Explicitly allow only what is needed.

# Order matters: allow loopback and established traffic BEFORE
# flipping the default policy to DROP, or a remote SSH session
# will cut itself off mid-change.

# Flush existing rules
iptables -F

# Allow loopback (critical for local services)
iptables -A INPUT -i lo -j ACCEPT

# Allow established connections
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# Allow SSH (We will harden this later)
iptables -A INPUT -p tcp --dport 22 -j ACCEPT

# Allow HTTP/HTTPS only if this is a web server
iptables -A INPUT -p tcp --dport 80 -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j ACCEPT

# Log dropped packets (for auditing)
iptables -A INPUT -j LOG --log-prefix "IPTables-Dropped: "

# Set default policies to DROP last
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT ACCEPT

Pro Tip: If you are managing a cluster, avoid manual iptables commands. Use configuration management tools like Ansible or SaltStack to enforce these rules idempotently across your fleet. Drift in firewall rules is a security hole waiting to open.
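The same default-deny policy translates directly to the newer nftables framework mentioned above. A sketch of an equivalent ruleset (table and chain names are arbitrary; verify against your distribution's nftables version):

```
# /etc/nftables.conf -- default-deny input policy (sketch)
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;

        iif "lo" accept
        ct state established,related accept

        tcp dport 22 accept
        tcp dport { 80, 443 } accept

        log prefix "nft-dropped: "
    }
}
```

Load it with `nft -f /etc/nftables.conf`, and use `nft -c -f` first to syntax-check without applying.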

2. Mutual TLS (mTLS) for Application Identity

In a traditional setup, the web server trusts the database because the database IP is whitelisted. In Zero Trust, IP addresses are not identity. What if an attacker spoofs an IP?

We use Mutual TLS. The client must present a valid certificate to the server, and the server must present one to the client. This encrypts traffic and validates identity.
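Before any server can verify clients, you need an internal CA and at least one CA-signed client certificate. A minimal way to mint these with openssl (file names, CNs, and validity periods here are illustrative; in production use a managed PKI workflow with protected CA keys):

```shell
# Create an internal CA (self-signed, 10-year validity)
openssl req -x509 -newkey rsa:4096 -nodes \
  -keyout ca.key -out ca.crt -days 3650 \
  -subj "/CN=Internal-CA"

# Generate a client key and certificate signing request
openssl req -newkey rsa:2048 -nodes \
  -keyout client.key -out client.csr \
  -subj "/CN=loadbalancer-01"

# Sign the client CSR with the internal CA (90-day validity)
openssl x509 -req -in client.csr \
  -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out client.crt -days 90

# Confirm the chain validates
openssl verify -CAfile ca.crt client.crt
```

The `ca.crt` goes on the server side for verification; `client.crt` and `client.key` go to the service that needs to prove its identity. Keep `ca.key` off the serving infrastructure entirely.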

Here is a working configuration for Nginx (v1.14+) to enforce client certificate verification. This ensures that only your load balancer or specific microservices can talk to your backend API.

server {
    listen 443 ssl;
    server_name api.internal.coolvds.com;

    ssl_certificate /etc/nginx/certs/server.crt;
    ssl_certificate_key /etc/nginx/certs/server.key;

    # The CA that signed your client certificates
    ssl_client_certificate /etc/nginx/certs/ca.crt;
    
    # ENABLE MUTUAL TLS
    ssl_verify_client on;

    location / {
        # Pass the client's Subject DN to the backend application
        proxy_set_header X-Client-DN $ssl_client_s_dn;
        proxy_pass http://localhost:8080;
    }
}

With ssl_verify_client on;, Nginx rejects any connection that doesn't present a certificate signed by your internal CA. It doesn't matter whether the request comes from the local network or a compromised server; without the private key behind a CA-signed certificate, it's rejected.
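You can confirm the policy from a client that holds a signed certificate. A quick check with curl (file names match the Nginx config above; the /health path is an illustrative endpoint, not something the config defines):

```shell
# Present the client certificate and key; pin the server to our CA
curl --cert client.crt --key client.key \
     --cacert ca.crt \
     https://api.internal.coolvds.com/health

# Without a client certificate the handshake completes but Nginx
# answers "400 Bad Request: No required SSL certificate was sent"
curl --cacert ca.crt https://api.internal.coolvds.com/health
```

Running both forms is a cheap smoke test after every certificate rotation: the first must succeed, the second must be rejected.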

3. Killing Static SSH Keys with SSH Certificate Authorities

Static SSH keys (id_rsa) are a nightmare. They get copied to laptops, forgotten in backups, and rarely rotated. If an engineer leaves the company, do you rotate every key on every server?

The solution is an SSH Certificate Authority. You sign a user's public key with a validity period (e.g., 4 hours). OpenSSH has supported this natively since version 5.4.

Step A: Create the CA on a secure machine

ssh-keygen -t rsa -b 4096 -f /etc/ssh/user_ca -C "CA"

Step B: Configure the Target Server (CoolVDS Instance)

Edit /etc/ssh/sshd_config to trust the CA public key:

TrustedUserCAKeys /etc/ssh/user_ca.pub
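Once the CA is trusted, you can go further and stop honoring static credentials entirely. The additional sshd_config lines below are a sketch; keep a console session open while testing, because `AuthorizedKeysFile none` locks out anyone without a signed certificate:

```
TrustedUserCAKeys /etc/ssh/user_ca.pub
PasswordAuthentication no
# Optional hardening: ignore per-user static keys entirely
AuthorizedKeysFile none
```

Validate with `sshd -t` before restarting the daemon.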

Step C: Sign a User Key

When a developer needs access, you sign their key with an expiration. This implies you have a secure process (or bastion) to run this command:

ssh-keygen -s /etc/ssh/user_ca -I user_id -n root,deploy -V +4h id_rsa.pub

The resulting id_rsa-cert.pub grants access for exactly 4 hours. After that, access is automatically revoked. No cleanup required.
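It is worth inspecting what the CA actually issued before handing the certificate back to the user. A self-contained sketch using throwaway keys in a temp directory (key IDs and principals here are illustrative):

```shell
cd "$(mktemp -d)"

# Throwaway CA and user key, for demonstration only
ssh-keygen -q -t rsa -b 2048 -N "" -f user_ca -C "demo-CA"
ssh-keygen -q -t rsa -b 2048 -N "" -f id_rsa

# Sign for the "deploy" principal, valid for 4 hours
ssh-keygen -q -s user_ca -I alice@example -n deploy -V +4h id_rsa.pub

# Show key ID, principals, serial, and the validity window
ssh-keygen -L -f id_rsa-cert.pub
```

The `-L` output is also what you want in your audit trail: who was issued access, as which principals, and until when.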

The Infrastructure Layer: Why Virtualization Matters

You can configure all the software security you want, but if your underlying infrastructure is leaky, it’s pointless. This is particularly relevant with the "noisy neighbor" problem in container-based or shared-kernel virtualization (like OpenVZ).

This is why CoolVDS exclusively uses KVM (Kernel-based Virtual Machine). KVM provides hardware-assisted virtualization. Each VPS has its own kernel, its own memory space, and rigorous isolation from other tenants. In a Zero Trust model, you treat the network as hostile, but you must be able to trust your own kernel.

Feature             | Container VPS (LXC/OpenVZ)                 | CoolVDS (KVM)
--------------------|--------------------------------------------|--------------------------------------
Kernel Isolation    | Shared (risk of kernel panics/exploits)    | Isolated (private kernel)
Firewalling         | Limited (often no ipset/nftables support)  | Full control (iptables, ipset, eBPF)
Resource Guarantees | Soft limits (overselling common)           | Hard limits (dedicated RAM/CPU)

Data Sovereignty and GDPR

We are operating in a post-2018 GDPR world. Privacy Shield is under constant scrutiny, and many European legal experts are already advising against relying solely on US-based cloud giants for sensitive data. Data residency is not just a checkbox; it's a legal shield.

Hosting your Zero Trust infrastructure on CoolVDS in our Norway data centers ensures your data stays within the EEA (or strictly Norway), governed by Norwegian law and the oversight of Datatilsynet. We offer low-latency connectivity to the NIX (Norwegian Internet Exchange), meaning your encrypted mTLS traffic doesn't have to hair-pin through Frankfurt or London to reach a user in Oslo.

Performance Considerations: NVMe is a Must

Encryption costs CPU cycles. TLS handshakes add latency. If you layer mTLS, SSH tunnels, and firewall inspection on top of slow spinning HDDs, your application will crawl.

Zero Trust architectures require high IOPS for logging and state management. We standardized on NVMe storage across our fleet specifically to handle the I/O overhead of modern security stacks without penalizing application performance. Don't let security kill your user experience.

Summary: Start Small

Moving to Zero Trust is a journey, not a switch flip. Start by isolating your database. Enable mTLS between your web server and your app server. Then, move to SSH certificates.

Security is about control. And to control your environment, you need infrastructure that respects your commands. Stop fighting with limited shared hosting environments.

Secure your perimeter today. Deploy a KVM-based, NVMe-powered instance on CoolVDS in under 60 seconds and start building a network you can actually trust.