
Building a Zero-Trust Network on Linux: Why Your Perimeter Firewall is Obsolete (2019 Guide)

Zero-Trust Implementation Guide for Nordic Infrastructure

Let’s be honest: the traditional "castle and moat" security strategy has failed. If you are still relying on a single edge firewall to protect a soft, unencrypted internal network, you are one phishing email away from a total breach. I have cleaned up after enough rootkits to know that once an attacker breaches the perimeter, they usually have free rein to jump from your web server to your database without resistance.

It is late 2019. We are seeing a massive shift in how we architect infrastructure in Europe, driven partly by paranoia and partly by the Datatilsynet (Norwegian Data Protection Authority) cracking down on GDPR violations. The solution is Zero-Trust. The principle is simple: never trust, always verify. It doesn't matter if the request comes from the internet or from 127.0.0.1. Treat every packet as hostile.

This guide walks through hardening a Linux VPS environment (Ubuntu 18.04 LTS) to align with Zero-Trust principles. We will cover identity-aware SSH, micro-segmentation, and mutual TLS.

1. Identity is the New Perimeter: Hardening SSH

The first step in Zero-Trust is ensuring that access is tied to strict identity, not network location. Default SSH configurations are often too permissive. We need to disable password authentication entirely and enforce key-based auth, preferably with a hardware token or MFA if you are managing critical PII (Personally Identifiable Information).
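Before disabling passwords, every operator needs a key. A minimal sketch using a modern Ed25519 key follows; the empty passphrase and the target address are purely illustrative, and in practice you should use a passphrase or a hardware-backed key:

```shell
# Generate an Ed25519 keypair with extra KDF rounds
# (-N "" skips the passphrase prompt for illustration only)
ssh-keygen -t ed25519 -a 100 -C "deployer@workstation" -f ./id_ed25519 -N ""

# Install the public key on the server (hypothetical address)
# ssh-copy-id -i ./id_ed25519.pub deployer@203.0.113.10
```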

On your CoolVDS instance, open your SSH daemon config:

nano /etc/ssh/sshd_config

You need to enforce the following baseline. This prevents brute-force attacks and ensures that stolen credentials alone aren't enough to compromise the server.

# /etc/ssh/sshd_config - Hardened Configuration 2019

# Disallow root login remotely. Su to root only when needed.
PermitRootLogin no

# Disable password auth. Keys are mandatory.
PasswordAuthentication no
ChallengeResponseAuthentication no

# Limit authentication retries to prevent log spamming
MaxAuthTries 3

# Whitelist specific users only
AllowUsers deployer sysadmin

# Use modern crypto algorithms (removes weak curves)
KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256
Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com

After saving, validate the syntax with `sshd -t`, then restart the service. Do not close your current session until you have verified the new config works from a separate terminal window.

systemctl restart sshd

Pro Tip: For high-compliance environments (like fintech applications hosted in Oslo), consider the Google Authenticator PAM module for 2FA on SSH. It adds a few seconds to each login, but the security ROI is massive.
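A rough outline of that PAM setup on Ubuntu 18.04 follows. Note that it deliberately flips ChallengeResponseAuthentication back to yes, which the hardened baseline above set to no, so the TOTP prompt can run after key auth. Treat this as a sketch to adapt, not a drop-in config:

```shell
# Install the PAM module (Ubuntu 18.04 package)
apt-get install libpam-google-authenticator

# Run as each login user to generate the TOTP secret and scratch codes
google-authenticator

# /etc/pam.d/sshd -- add near the top:
#   auth required pam_google_authenticator.so

# /etc/ssh/sshd_config -- require key AND code, then restart sshd:
#   ChallengeResponseAuthentication yes
#   AuthenticationMethods publickey,keyboard-interactive
```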

2. Micro-segmentation: Starving the Lateral Movement

In a traditional setup, if your web server is compromised, the attacker can port scan your entire private network. Zero-Trust dictates that the web server should only be able to talk to the specific database port it needs, and nothing else. Not even ICMP ping if it's not required.

While `iptables` is the raw standard, `ufw` (Uncomplicated Firewall) on Ubuntu 18.04 is sufficient for host-based micro-segmentation if configured with a "default deny" policy.

First, set the defaults to reject everything:

ufw default deny incoming
ufw default deny outgoing

This breaks everything. Now, we selectively punch holes. If this is a web node, it needs HTTP/HTTPS in, and it needs to talk to the database node (let's say 10.0.0.5) on port 3306.

# Allow SSH from your specific management IP only (Recommended)
ufw allow from 192.168.1.50 to any port 22

# Allow Web Traffic
ufw allow 80/tcp
ufw allow 443/tcp

# ALLOW OUTGOING specific connections only (Micro-segmentation)
# Allow connection to the database server on private LAN
ufw allow out to 10.0.0.5 port 3306 proto tcp

# Allow DNS lookups (crucial, or apt-get fails)
ufw allow out 53

# Allow NTP for time sync
ufw allow out 123

Enable the firewall:

ufw enable

By restricting outgoing traffic, you prevent a reverse shell from easily connecting back to an attacker's command-and-control server. Most VPS providers ignore egress filtering. At CoolVDS, we encourage it. It stops your infrastructure from becoming part of a botnet if you do get breached.
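Once enabled, it is worth probing the posture from the web node itself. The sketch below reuses the example addresses from the rules above and assumes netcat (`nc`) is installed; the last address is just a placeholder for "somewhere on the internet":

```shell
# Confirm default-deny plus the explicit holes
ufw status verbose

# The database port should be reachable...
nc -zv -w 3 10.0.0.5 3306

# ...while an arbitrary outbound port should time out (egress filtered)
nc -zv -w 3 203.0.113.99 4444
```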

3. Encrypting East-West Traffic with Mutual TLS (mTLS)

The days of terminating SSL at the load balancer and sending plaintext HTTP to the backend are over. If an attacker is on your network sniffing packets with tcpdump, they shouldn't see customer data. We use Mutual TLS so services authenticate each other via certificates.

Here is how to configure Nginx (v1.14+) to verify client certificates. This ensures your app server only accepts requests from your authorized load balancer, not from a rogue script on the network.

First, generate your internal CA and certificates using `openssl`. Then, configure your backend Nginx block:

# /etc/nginx/sites-available/backend-service

server {
    listen 443 ssl;
    server_name api.internal.coolvds.net;

    # Server Certificate
    ssl_certificate /etc/nginx/certs/server.crt;
    ssl_certificate_key /etc/nginx/certs/server.key;

    # Mutual TLS Configuration
    # This forces the CLIENT (Load Balancer) to present a valid cert signed by our Internal CA
    ssl_client_certificate /etc/nginx/certs/internal-ca.crt;
    ssl_verify_client on;

    location / {
        proxy_pass http://localhost:8080;
        # Pass SSL details to the application layer if needed
        proxy_set_header X-SSL-Client-Serial $ssl_client_serial;
        proxy_set_header X-SSL-Client-Verify $ssl_client_verify;
    }
}

Test the configuration syntax:

nginx -t

If the client fails to present a valid certificate, Nginx rejects the TLS handshake outright. This renders port scanning virtually useless against your internal API endpoints.
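The internal CA step mentioned above can be sketched with stock `openssl`. Subject names, lifetimes, and filenames are illustrative, not a production PKI, and the final `curl` assumes the hypothetical endpoint from the config:

```shell
# Create a throwaway internal CA
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout internal-ca.key -out internal-ca.crt \
  -subj "/CN=Internal CA"

# Issue a client certificate for the load balancer
openssl req -newkey rsa:2048 -nodes \
  -keyout lb-client.key -out lb-client.csr -subj "/CN=lb.internal"
openssl x509 -req -in lb-client.csr -CA internal-ca.crt \
  -CAkey internal-ca.key -CAcreateserial -days 365 -out lb-client.crt

# Sanity-check the chain before deploying
openssl verify -CAfile internal-ca.crt lb-client.crt

# With the cert, the handshake succeeds; without it, Nginx rejects you:
# curl --cacert internal-ca.crt --cert lb-client.crt --key lb-client.key \
#     https://api.internal.coolvds.net/
```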

The Hardware Foundation Matters

Software hardening is futile if the underlying virtualization layer is leaky. In 2019, we still see providers overselling OpenVZ containers where kernel exploits can bleed across tenants. This is unacceptable for a Zero-Trust model.

We built CoolVDS on KVM (Kernel-based Virtual Machine), which provides strict hardware virtualization: your RAM is yours, and your kernel is yours. We map NVMe storage directly to instances to avoid the noisy-neighbor I/O wait common in shared hosting environments. When you are encrypting every packet (mTLS) and evaluating strict firewall rules on every connection, CPU and I/O overhead increases, and you need raw performance to offset the security cost.

Data Sovereignty in Norway

With GDPR fines reaching millions of Euros, where your data sits physically is a legal compliance issue. Hosting on US-owned clouds subjects your data to the CLOUD Act. CoolVDS infrastructure is located in Oslo. Your data stays in Norway. For European dev teams, this simplifies your compliance audit trail significantly.

Next Steps

Zero-Trust is not a product; it is a mindset. Start by locking down SSH, then move to network segmentation. Do not wait for a breach to force your hand.

If you need a testbed to simulate these rigorous security policies without risking your production environment, spin up a KVM instance. Our NVMe-backed nodes are designed to handle the encryption overhead without latency spikes.

Deploy a Secure KVM Instance on CoolVDS (Oslo Region) Now