The Firewall Lie We All Believed
For the last decade, we built infrastructure like medieval castles. We dug a moat (the firewall), pulled up the drawbridge (VPN), and assumed everyone inside the walls was a friend. We were wrong. In 2018, the threat isn't just outside; it's already inside. Maybe it's a compromised developer laptop, a rogue container, or a lateral movement attack that bypassed your edge router.
With GDPR fully enforceable as of May this year, and the Norwegian Data Protection Authority (Datatilsynet) sharpening its teeth, relying on a perimeter firewall alone is negligence. If you are running high-performance workloads in Norway, you need to shift to a Zero-Trust model.
I've spent the last month auditing a client's infrastructure after a breach. They had a strong firewall. But once the attacker found an open Redis port on the internal network, it was game over. Here is how you lock down your Linux environment so that never happens to you.
Principle 1: Identity is the New Perimeter (Hardening SSH)
The first step in Zero-Trust is assuming your local network is hostile. We do not trust IP addresses; we trust cryptographic identities. If you are still logging into your servers with a password, you are failing.
On our CoolVDS KVM instances, we recommend completely disabling password authentication immediately after provisioning. But let's go further. In 2018, SSH keys are the baseline, but Multi-Factor Authentication (MFA) is the standard for critical systems.
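Before you change anything, check what your daemon actually enforces right now. Stock OpenSSH can dump its effective configuration, which makes a quick audit trivial:
sudo sshd -T | grep -Ei 'permitrootlogin|passwordauthentication|challengeresponseauthentication'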
The Configuration
First, install the Google Authenticator PAM module. On Ubuntu 18.04 LTS it is a single package (on CentOS 7 the equivalent package, google-authenticator, comes from EPEL):
sudo apt-get install libpam-google-authenticator
google-authenticator
Follow the prompts. Then, edit your /etc/ssh/sshd_config. We are going to force a specific authentication chain: PubKey + TOTP code.
# /etc/ssh/sshd_config
# Disallow root login
PermitRootLogin no
# Disable password auth completely
PasswordAuthentication no
# The TOTP prompt is delivered through PAM via keyboard-interactive
ChallengeResponseAuthentication yes
UsePAM yes
# The magic sauce: Key AND Code required
AuthenticationMethods publickey,keyboard-interactive
Finally, configure PAM in /etc/pam.d/sshd:
# /etc/pam.d/sshd
# On Debian/Ubuntu, comment out the "@include common-auth" line in this file,
# otherwise PAM will still prompt for the Unix password on top of the code
auth required pam_google_authenticator.so
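Before restarting SSH, check the syntax and keep your current session open until a fresh login succeeds. A minimal sketch (the systemd unit is called ssh on some Debian/Ubuntu setups, sshd on CentOS):
# Validate the new configuration before applying it
sudo sshd -t
# Apply it, then test a NEW login from a second terminal before logging out
sudo systemctl restart sshd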
Once SSH is restarted, even if an attacker steals your private key, they cannot access your server without the rotating code on your phone. For you this is a negligible delay at login; for an attacker it is massive friction.
Principle 2: Mutual TLS (mTLS) for Internal Services
In a traditional setup, your web server trusts the database because it's on 10.0.0.5. In Zero-Trust, we don't care about the IP. We authenticate the service itself.
For web applications communicating internally (e.g., a microservice architecture or a monitoring dashboard), use Nginx with Mutual TLS. This requires the client to present a valid certificate to the server. It is far more robust than IP whitelisting.
Pro Tip: Generating your own CA (Certificate Authority) sounds scary, but in 2018 it's a few OpenSSL commands. Keep the CA key offline (air-gapped USB drive). You only need it when issuing new client certs.
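To give you an idea of the scale, the whole process is roughly the following. Treat it as a sketch only: the file names, key sizes and the 10-year CA lifetime are my own choices, not requirements.
# Create the CA key and self-signed CA certificate (keep internal-ca.key offline!)
openssl genrsa -out internal-ca.key 4096
openssl req -x509 -new -key internal-ca.key -sha256 -days 3650 \
  -subj "/CN=Internal CA" -out internal-ca.crt
# Create a client key and a certificate signing request
openssl genrsa -out client.key 2048
openssl req -new -key client.key -subj "/CN=web-frontend" -out client.csr
# Sign the client certificate with the CA (valid for one year)
openssl x509 -req -in client.csr -CA internal-ca.crt -CAkey internal-ca.key \
  -CAcreateserial -sha256 -days 365 -out client.crt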
Nginx mTLS Configuration
Here is a snippet for your Nginx server block. It ensures that only clients presenting a certificate signed by your internal CA get past Nginx; everything else is rejected before a single byte reaches your application.
server {
    listen 443 ssl;
    server_name internal-api.coolvds-hosted.com;

    ssl_certificate     /etc/nginx/certs/server.crt;
    ssl_certificate_key /etc/nginx/certs/server.key;

    # The CA that signed the client certificates
    ssl_client_certificate /etc/nginx/certs/internal-ca.crt;

    # This is the kill switch. If verification fails, connection drops.
    ssl_verify_client on;

    location / {
        proxy_pass http://localhost:8080;
    }
}
If a rogue process tries to `curl` this endpoint without the cert, Nginx drops the connection before the application logic is even touched. Efficient and brutal.
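A legitimate client, by contrast, presents its certificate explicitly. With curl, and reusing the file names from the CA sketch above, it looks something like this:
# Rejected: no client certificate, and the internal-CA-signed server cert is not trusted anyway
curl https://internal-api.coolvds-hosted.com/
# Accepted: client cert signed by the internal CA, which is also used to verify the server
curl --cert client.crt --key client.key --cacert internal-ca.crt \
  https://internal-api.coolvds-hosted.com/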
Principle 3: Micro-Segmentation with iptables
CoolVDS provides a high-speed private network between your VPS instances. However, do not treat this private network as a "safe zone." Use iptables to explicitly allow only necessary traffic.
We see too many developers flushing iptables entirely because "it's too hard to debug." Don't. While debugging, add a LOG target so you can see exactly what is being dropped instead of guessing.
# Apply these from the console or a single script, not line by line over SSH:
# the DROP policy takes effect before the SSH rule exists.
# Default policy: DROP everything
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT ACCEPT
# Allow loopback
iptables -A INPUT -i lo -j ACCEPT
# Allow established connections (crucial!)
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# Allow SSH (restrict to a specific source IP if you can)
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
# Allow specific internal traffic (e.g. MySQL port 3306 only from the web server at 10.8.0.5)
iptables -A INPUT -p tcp -s 10.8.0.5 --dport 3306 -j ACCEPT
# Log everything else, rate-limited to avoid flooding the journal
iptables -A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7
This configuration ensures that even if a neighbor on the private subnet is compromised, they cannot probe your database ports.
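One last gotcha: iptables rules do not survive a reboot on their own. On Ubuntu 18.04 the usual answer is the iptables-persistent package (CentOS 7 uses iptables-services and "service iptables save" instead):
# Ubuntu: save the live ruleset so it is restored at boot
sudo apt-get install iptables-persistent
sudo netfilter-persistent save
# The rules end up in /etc/iptables/rules.v4 for inspection or version control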
Why Infrastructure Choice Matters in 2018
Software configuration can only go so far. Earlier this year, the Spectre and Meltdown vulnerabilities shook the industry. They proved that hardware isolation matters. Shared hosting and container-based VPS (like OpenVZ) often share the kernel and memory space too aggressively.
This is why we strictly use KVM virtualization at CoolVDS. Each KVM guest runs its own kernel behind hardware-enforced isolation. Your memory is yours. Your CPU instructions are yours. When you are fighting for GDPR compliance and protecting user data in Norway, you cannot afford the "noisy neighbor" security risks inherent in cheaper virtualization technologies.
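Not sure what your current provider actually runs underneath? On any systemd-based distro you can check from inside the guest:
# Typically prints "kvm" on a KVM guest, "openvz" or "lxc" on container-based platforms
systemd-detect-virt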
Performance vs. Security?
The common argument is that encryption kills performance. "I can't run TLS inside my LAN; the latency will be too high."
| Metric | Plain HTTP (Internal) | mTLS (Internal) | Impact |
|---|---|---|---|
| Handshake (First connect) | ~1ms | ~15ms | Noticeable |
| Throughput (Keep-Alive) | 940 Mbps | 925 Mbps | Negligible |
| CPU Load (AES-NI) | 2% | 4% | Low |
With modern CPUs supporting AES-NI instructions (standard on our NVMe nodes), the overhead of internal encryption is negligible for long-lived connections. The security trade-off is absolutely worth it.
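You do not have to take those numbers on faith; they come from our own internal tests, and yours will differ. Two commands will tell you what your node can do:
# Confirm the CPU exposes the AES-NI flag to your guest
grep -m1 -o aes /proc/cpuinfo
# Rough throughput test of the cipher your TLS connections will actually use
openssl speed -evp aes-128-gcm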
The Norwegian Context: Data Sovereignty
Zero-Trust is not just about hackers; it's about legal trust. Under GDPR, you are the Data Controller. You are responsible for where that data lives.
Hosting on major US cloud providers often introduces legal grey areas regarding the CLOUD Act. By keeping your data on CoolVDS servers physically located in Oslo, connected directly to NIX (Norwegian Internet Exchange), you simplify your compliance map. You know exactly where the bits are.
Next Steps
Stop trusting your network. Start verifying every packet. Security is not a product you buy; it is a process you adhere to.
If you are ready to build a hardened infrastructure with true KVM isolation and low latency to the Nordic market, deploy a CoolVDS instance today. Test the IOPS, configure your mTLS, and sleep better knowing your architecture is solid.