Kill the DMZ: Why Your Internal Network is a Liability
Let’s be honest: the "soft gooey center" of your infrastructure is a ticking time bomb. For the last decade, sysadmins have built fortresses with heavy firewalls at the perimeter, assuming that once a packet is inside the LAN, it's friendly. This is the "Castle and Moat" mentality, and frankly, it is failing. In 2013, with APTs (Advanced Persistent Threats) on the rise, if one server in your cluster gets popped, the attacker has free rein over your database, your backups, and your internal APIs.
It is time to adopt a Zero-Trust model. John Kindervag at Forrester has been shouting this for a few years, but few are listening. The concept is simple: verify everything, trust nothing. Treat the private network hanging off eth1 exactly like the hostile public internet.
I’m going to show you how to lock down a Linux environment hosted in Oslo so tightly that even if someone gets root on your web node, they can’t touch the database. We will use tools available right now: iptables, OpenVPN, and strict SSH key management.
The Fallacy of the "Trusted LAN"
Most VPS providers in Norway hand you a private IP range and tell you it's secure. It isn't. On shared layer-2 networks, ARP spoofing and packet sniffing are real threats if the isolation isn't perfect. Furthermore, if you are running a standard LAMP stack, a SQL injection on the web server shouldn't grant network-level access to the database admin port.
Here is the architecture we are building on CoolVDS (which uses KVM, ensuring real kernel isolation unlike the leaky containment of OpenVZ):
- Default Policy: DROP everything, everywhere.
- Encryption: SSL/TLS for internal traffic, not just external.
- Authentication: 2FA for SSH, even inside the LAN.
Step 1: Host-Based Firewalls (iptables)
Forget the edge firewall for a second. Every single server needs its own firewall. We aren't just blocking port 80; we are locking down the private interface.
Here is a battle-tested iptables configuration for a Database node (MySQL) that only accepts connections from a specific Web node IP (e.g., 10.10.0.5) and drops everything else. Note the logging—essential for auditing compliance with the Norwegian Personal Data Act (Personopplysningsloven). One warning before you paste: run this from the provider's console, not over an SSH session to the same box, because between the flush and the SSH accept rule a DROP policy will cut you off mid-script.
# Flush existing rules
iptables -F
# Default policies: Block everything
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT ACCEPT
# Allow loopback
iptables -A INPUT -i lo -j ACCEPT
# Allow established connections (so yum update works)
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# Allow SSH only from the VPN Gateway IP (Management Node)
iptables -A INPUT -p tcp -s 10.10.0.100 --dport 22 -j ACCEPT
# Allow MySQL ONLY from the specific Web Node
iptables -A INPUT -p tcp -s 10.10.0.5 --dport 3306 -j ACCEPT
# Log denied attempts, rate-limited so a flood cannot fill the disk
# (crucial for Datatilsynet audits)
iptables -A INPUT -m limit --limit 5/min -j LOG --log-prefix "IPTables-Dropped: " --log-level 4
# Save rules
/sbin/service iptables save
Pro Tip: On CoolVDS KVM instances, you have full control over conntrack modules. If you are pushing high traffic, tune /proc/sys/net/netfilter/nf_conntrack_max to avoid dropping packets during traffic spikes. The default value in CentOS 6 is often too low for high-performance apps.
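To make the tip concrete, here is a hedged sketch of checking and sizing the conntrack table. The RAM/16384 divisor is a common rule of thumb, not an official formula; tune it against your real connection counts.

```shell
# Current ceiling (the nf_conntrack module must be loaded for this key to exist)
sysctl net.netfilter.nf_conntrack_max 2>/dev/null

# Rule-of-thumb sizing: RAM in bytes / 16384 (an assumption, adjust to your load)
suggested=$(awk '/MemTotal/ {print int($2 * 1024 / 16384)}' /proc/meminfo)
echo "suggested nf_conntrack_max: ${suggested}"

# Apply at runtime and persist across reboots (both require root):
# sysctl -w net.netfilter.nf_conntrack_max="${suggested}"
# echo "net.netfilter.nf_conntrack_max = ${suggested}" >> /etc/sysctl.conf
```

If the table fills up, the kernel logs "nf_conntrack: table full, dropping packet" and new connections silently die, which looks exactly like a flaky network.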
Step 2: Encrypting the "Safe" Traffic
In a Zero-Trust environment, cleartext is forbidden. Yes, even between your app server and your database. If an intruder manages to sniff the VLAN, they should see garbage.
For MySQL 5.5 (standard in 2013), you must explicitly enable SSL. It adds some CPU overhead, but on modern Xeon processors (which we use exclusively), the latency impact is negligible compared to the security gain.
In your my.cnf:
[mysqld]
ssl-ca=/etc/mysql/certs/ca-cert.pem
ssl-cert=/etc/mysql/certs/server-cert.pem
ssl-key=/etc/mysql/certs/server-key.pem
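The config above assumes the certificates already exist. Here is a hedged sketch of minting a private CA and server certificate with openssl. The CN values are placeholders and the demo writes into a temp directory; in production you would use /etc/mysql/certs (owned by mysql, mode 600) to match the my.cnf.

```shell
# Throwaway workspace for the demo (production: /etc/mysql/certs)
CERTDIR=$(mktemp -d)
cd "$CERTDIR"

# 1. Self-signed CA, valid ten years (CN is a placeholder)
openssl genrsa 2048 > ca-key.pem
openssl req -new -x509 -nodes -days 3650 -key ca-key.pem \
    -subj "/CN=internal-mysql-ca" -out ca-cert.pem

# 2. Server key + signing request, then sign it with the CA
openssl req -newkey rsa:2048 -nodes -subj "/CN=db01.internal" \
    -keyout server-key.pem -out server-req.pem
openssl x509 -req -in server-req.pem -days 3650 \
    -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 -out server-cert.pem

# MySQL 5.5 is built against yaSSL, which wants the key in plain RSA format
openssl rsa -in server-key.pem -out server-key.pem

# Sanity check: does the server cert chain back to our CA?
openssl verify -CAfile ca-cert.pem server-cert.pem
```

Repeat step 2 with a different CN for the client certificate the web node will present.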
And in your PHP connection script (PDO):
$pdo = new PDO(
    'mysql:host=10.10.0.6;dbname=production',
    'db_user',
    'db_pass',
    array(
        PDO::MYSQL_ATTR_SSL_KEY  => '/etc/certs/client-key.pem',
        PDO::MYSQL_ATTR_SSL_CERT => '/etc/certs/client-cert.pem',
        PDO::MYSQL_ATTR_SSL_CA   => '/etc/certs/ca-cert.pem'
    )
);
Step 3: The Management Gateway (OpenVPN)
Never expose SSH (port 22) to the public internet on your database or backend servers. Instead, use a "Bastion Host" or Jump Box combined with OpenVPN. This provides a single choke point for entry.
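As a complement to the VPN, plain SSH can also hop through the bastion without backend port 22 ever touching the public internet. A sketch for the admin workstation; the host names and addresses are placeholders, and ssh -W needs OpenSSH 5.4 or newer:

```shell
mkdir -p ~/.ssh && chmod 700 ~/.ssh

# Append jump-host stanzas to the local SSH client config
cat >> ~/.ssh/config <<'EOF'
Host bastion
    HostName 203.0.113.10       # public IP of the jump box (documentation range)
    User admin

Host db01
    HostName 10.10.0.6          # private IP, unreachable except via bastion
    User admin
    ProxyCommand ssh -W %h:%p bastion
EOF

# One command now tunnels transparently through the choke point:
# ssh db01
```

Every hop is authenticated with its own key, and the bastion's auth log becomes a single audit trail of who touched the backend.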
We configure OpenVPN 2.3 to require both certificates and user credentials. This ensures that even if a developer's laptop is stolen, the thief cannot access the infrastructure without the cert and the password.
Server Config (server.conf):
port 1194
proto udp
dev tun
ca ca.crt
cert server.crt
key server.key
dh dh2048.pem
server 10.8.0.0 255.255.255.0
push "route 10.10.0.0 255.255.255.0" # Push route to internal private network
cipher AES-256-CBC
auth SHA512
# Require a PAM username/password on top of the client certificate
# (plugin path varies by distro; this is the CentOS 6 EPEL location)
plugin /usr/lib64/openvpn/plugin/lib/openvpn-auth-pam.so login
user nobody
group nobody
persist-key
persist-tun
status openvpn-status.log
verb 3
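For completeness, a matching client config might look like the sketch below. The gateway address and certificate names are placeholders; remote-cert-tls server makes the client refuse any peer that presents a client-type certificate, so a stolen client cert cannot be used to impersonate the gateway.

```
client
dev tun
proto udp
remote vpn.example.com 1194
ca ca.crt
cert alice.crt
key alice.key
# Prompt for the PAM username/password the server now requires
auth-user-pass
# Only connect to a certificate marked as a server (MITM protection)
remote-cert-tls server
cipher AES-256-CBC
auth SHA512
nobind
persist-key
persist-tun
verb 3
```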
Step 4: Two-Factor Authentication (2FA) for SSH
Passwords are dead. SSH Keys are good, but SSH Keys + Time-Based One-Time Passwords (TOTP) are better. We use the google-authenticator PAM module.
First, install the PAM module on your bastion host (on CentOS 6 the package lives in the EPEL repository, so enable EPEL first):
yum install google-authenticator
Each user then runs google-authenticator once to generate a secret key and scan the QR code into the app on their phone.
Then, edit /etc/pam.d/sshd and add this line at the top:
auth required pam_google_authenticator.so
Finally, modify /etc/ssh/sshd_config to force the challenge:
ChallengeResponseAuthentication yes
UsePAM yes
One caveat: on stock CentOS 6 (OpenSSH 5.3), a successful public-key login skips PAM entirely, so the token is only enforced for keyboard-interactive logins. Requiring the key AND the token in the same session takes OpenSSH 6.2 or newer, which adds the AuthenticationMethods publickey,keyboard-interactive directive. With that in place, SSHing into your CoolVDS management node demands both your RSA key and the current code from the Google Authenticator app on your smartphone. Even if a keylogger captures your keystrokes, the attacker cannot reuse the token.
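For the curious, the six-digit code is nothing magical: it is RFC 6238 TOTP, an HMAC-SHA1 over a 30-second time counter, dynamically truncated. A bash sketch (a demonstration of the math, not an admin tool), checked against the RFC 6238 test vectors:

```shell
#!/bin/bash
# Compute a 6-digit TOTP the same way the Google Authenticator app does.
totp() {
    local key_hex=$1 unix_time=$2
    local counter hmac offset dbc
    counter=$(( unix_time / 30 ))                      # 30-second time step
    # HMAC-SHA1 over the 8-byte big-endian counter
    hmac=$(printf "$(printf '%016x' "$counter" | sed 's/../\\x&/g')" \
        | openssl dgst -sha1 -mac HMAC -macopt "hexkey:${key_hex}" \
        | awk '{print $NF}')
    offset=$(( 0x${hmac:38:2} & 0xf ))                 # dynamic truncation
    dbc=$(( 0x${hmac:$(( offset * 2 )):8} & 0x7fffffff ))
    printf '%06d\n' $(( dbc % 1000000 ))
}

# RFC 6238 Appendix B vector: secret "12345678901234567890" (hex), time 59
totp 3132333435363738393031323334353637383930 59      # -> 287082
```

Because the counter changes every 30 seconds, a captured code is worthless moments later, which is exactly why the keylogger scenario above fails.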
Why Infrastructure Matters
Implementing this level of packet filtering and encryption requires CPU cycles. If you try this on a cheap, oversold VPS where the host is stealing 50% of your CPU time (check your %st in top), your latency will skyrocket. This is where the hardware underneath matters.
At CoolVDS, we don't oversell. We use dedicated KVM slices. When you implement full AES-256 encryption on your internal network, our hardware-assisted virtualization (VT-x) ensures that the encryption overhead doesn't slow down your application. Plus, with our datacenters located directly in Oslo connected to NIX, your latency to Norwegian users is already 15-20ms lower than hosting in Germany or the UK.
Security is not a product; it is a process. But it starts with architecture. Stop trusting your LAN. Start verifying every packet.
Ready to harden your stack? Deploy a KVM instance on CoolVDS today and get root access in under 60 seconds.