Automating Server Hardening: Compliance Without the Headaches in a Post-Snowden World
Let’s be honest: if you are still manually editing /etc/ssh/sshd_config on every new server deployment, you aren't just wasting time—you are creating a liability. In the wake of the recent Snowden revelations, the definition of "secure" has shifted fundamentally. It is no longer just about keeping hackers out; it is about proving to auditors (and your customers) that your infrastructure is consistent, verifiable, and legally sound.
For CTOs operating in Norway and the broader EEA, the pressure is two-fold. You have the technical challenge of hardening the stack, and the legal nightmare of the EU Data Protection Directive (95/46/EC). The "Safe Harbor" agreement is looking shakier by the day. If your data resides on US-controlled clouds, you are exposing yourself to legal limbo. This is where the intersection of automation and data sovereignty becomes the only viable strategy.
The Fallacy of the "Golden Image"
In the past, we relied on "golden images"—snapshots of a perfectly configured server. But the moment you boot that image, it starts to drift. A quick patch here, a hotfix there, and suddenly your production environment is a snowflake. Unique, fragile, and impossible to audit.
The solution in 2014 is Infrastructure as Code. Whether you are using Puppet, Chef, or the rising star, Ansible, the goal is the same: the state of your server should be defined in a repo, not in the mind of your sysadmin. At CoolVDS, we see too many clients migrate to us after a security breach caused by a "forgotten" development server that missed a critical OpenSSL patch.
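In practice, "defined in a repo" can be as mundane as a small Git repository holding your inventory and playbooks. A hypothetical layout (the names are ours, not a convention):

infrastructure/
    inventory/production        # the [webservers] host group
    playbooks/hardening.yml     # the baseline policy shown in the next section
    roles/                      # reusable building blocks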
Practical Automation: Hardening with Ansible
Let's look at a practical example. We want every new CentOS 7 instance (which has just reached its first stable release) to adhere to a baseline security policy immediately upon provisioning. We will use Ansible 1.7 because of its agentless architecture: no daemon to install on the target node.
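Because the only requirements on the managed node are SSH access and Python, you can confirm a freshly provisioned instance is reachable with an ad-hoc ping before applying anything. A minimal sketch, using the hypothetical inventory layout above:

ansible webservers -i inventory/production -m ping -u root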
Here is a playbook that disables root login, enforces SSH key authentication, and sets up basic iptables rules. This covers the low-hanging fruit for most Datatilsynet audits.
---
- hosts: webservers
  user: root
  vars:
    ssh_port: 22
    allowed_users: [ "deploy", "admin" ]
  tasks:
    - name: Ensure wheel group exists
      group: name=wheel state=present

    - name: Create deploy user with sudo privileges
      user: name=deploy groups=wheel append=yes state=present

    - name: Harden SSH configuration
      lineinfile:
        dest: /etc/ssh/sshd_config
        regexp: "{{ item.regexp }}"
        line: "{{ item.line }}"
        state: present
      with_items:
        - { regexp: '^PermitRootLogin', line: 'PermitRootLogin no' }
        - { regexp: '^PasswordAuthentication', line: 'PasswordAuthentication no' }
        - { regexp: '^AllowUsers', line: 'AllowUsers {{ allowed_users | join(" ") }}' }
      notify: restart ssh

    # CentOS 7 defaults to firewalld; this playbook manages plain iptables,
    # so the classic service scripts need to be present.
    - name: Install iptables-services
      yum: name=iptables-services state=present

    - name: Flush iptables rules
      command: iptables -F

    - name: Allow loopback traffic
      command: iptables -A INPUT -i lo -j ACCEPT

    - name: Allow related and established connections
      command: iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

    - name: Allow SSH
      command: iptables -A INPUT -p tcp --dport {{ ssh_port }} -j ACCEPT

    # Only flip the default policy to DROP after the ACCEPT rules exist,
    # otherwise the Ansible SSH session cuts itself off mid-run.
    - name: Default policy to DROP
      command: iptables -P INPUT DROP

    - name: Save iptables
      command: service iptables save

  handlers:
    - name: restart ssh
      service: name=sshd state=restarted
Running this playbook means that within seconds of spinning up a VPS, root logins are refused, password authentication is off, and every inbound port except SSH is dropped. No manual typing errors. No forgotten flags.
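Assuming the playbook is saved as playbooks/hardening.yml (our naming, not a requirement), a cautious rollout is a dry run followed by the real thing; note that tasks using the command module are skipped in check mode:

# Preview the changes
ansible-playbook -i inventory/production playbooks/hardening.yml --check

# Apply the baseline
ansible-playbook -i inventory/production playbooks/hardening.yml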
Virtualization Matters: OpenVZ vs. KVM
Automation manages the software layer, but the hypervisor dictates your isolation. This is critical for compliance.
Many budget providers in Europe still push OpenVZ containers. While cheap, OpenVZ shares the host kernel across all instances. If a kernel-level exploit is discovered (and 2014 has been a bad year for those), an attacker could theoretically escape their container and access your memory. For high-compliance industries—finance, healthcare, or anyone handling Norwegian citizen data—this is an unacceptable risk.
Pro Tip: Always check `uname -a` on your VPS. If you see a kernel version that looks like `2.6.32-042stab...`, you are likely on OpenVZ. Demand KVM.
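For illustration, this is the kind of difference to expect (exact version strings will vary):

# OpenVZ container (note the "stab" suffix):
$ uname -r
2.6.32-042stab093.4

# KVM guest running CentOS 7:
$ uname -r
3.10.0-123.el7.x86_64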
At CoolVDS, we standardized on KVM (Kernel-based Virtual Machine) years ago. KVM provides full hardware virtualization. Your memory is your memory. Your kernel is your kernel. You can load your own SELinux modules and configure `sysctl.conf` parameters that would be impossible in a containerized environment. This isolation is often a requirement for strict adherence to PCI-DSS 3.0 standards.
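As a concrete example of that freedom, here is a minimal network-hardening sketch for `/etc/sysctl.conf` on a KVM guest; the values are a common conservative baseline, not a CoolVDS mandate:

# Reject source-routed packets and ICMP redirects
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.all.accept_redirects = 0
# Reverse-path filtering and SYN cookies against spoofing and SYN floods
net.ipv4.conf.all.rp_filter = 1
net.ipv4.tcp_syncookies = 1
# Ignore broadcast pings (smurf mitigation)
net.ipv4.icmp_echo_ignore_broadcasts = 1
# Full address space layout randomization
kernel.randomize_va_space = 2

Apply the settings without a reboot via sysctl -p. On an OpenVZ container, many of these knobs are simply read-only inside the guest.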
Optimizing for the Audit Trail
Compliance isn't just about being secure; it's about proving it. You need centralized logging. If a server is compromised, the local logs are untrustworthy. We recommend shipping logs instantly to a separate, secured logging instance.
Using rsyslog (standard on most distros), you can forward logs over TCP with disk-assisted queues to ensure no data is lost during network blips.
Client-Side Rsyslog Config (/etc/rsyslog.d/audit.conf)
$WorkDirectory /var/lib/rsyslog # where to place spool files
$ActionQueueFileName fwdRule1 # unique name prefix for spool files
$ActionQueueMaxDiskSpace 1g # 1gb space limit (use as needed)
$ActionQueueSaveOnShutdown on # save messages to disk on shutdown
$ActionQueueType LinkedList # run asynchronously
$ActionResumeRetryCount -1 # infinite retries if host is down
# Send everything to the log server
*.* @@log.internal.coolvds.net:514
This setup ensures that even if your web server "melts" under a DDoS or intrusion attempt, the forensic evidence is safe on a separate node.
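On the receiving end, the collector needs rsyslog's TCP input enabled. A minimal sketch for the log host (the file path and template name are ours):

# /etc/rsyslog.d/collector.conf on the log server
$ModLoad imtcp                 # load the TCP listener
$InputTCPServerRun 514         # listen on port 514

# Write each client's messages to its own directory
$template PerHost,"/var/log/remote/%HOSTNAME%/syslog.log"
*.* ?PerHost

Restart rsyslog on both nodes for the changes to take effect.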
The Norwegian Advantage
Finally, we have to talk about latency and law. Physics is stubborn. If your user base is in Oslo, Bergen, or Trondheim, hosting in Frankfurt or London introduces measurable latency. We are talking 30-50ms vs 2-5ms.
But more importantly, the legal landscape is shifting. The Norwegian Personopplysningsloven (Personal Data Act) places strict requirements on how data is handled. By keeping your data in a Norwegian datacenter, you simplify your legal footing significantly. You aren't subject to the Patriot Act. You are dealing with known, local entities.
CoolVDS infrastructure is built on high-performance Enterprise SSDs located physically in Oslo. We combine the raw I/O performance required for modern database workloads (MySQL 5.6 benefits massively from random write speeds) with the legal certainty of domestic hosting.
Summary Checklist for the Pragmatic Admin
| Feature | Budget VPS (OpenVZ/HDD) | CoolVDS (KVM/SSD) |
|---|---|---|
| Kernel Isolation | Shared (Risky) | Dedicated (Secure) |
| Disk I/O | ~80 IOPS | ~50,000+ IOPS |
| Compliance Ready | Difficult | Native |
| Data Location | Unknown/Europe | Norway |
Security automation is not a luxury; it is the baseline for doing business in 2014. By combining Ansible for configuration management with the strict isolation of KVM, you build a fortress that is easy to manage and easy to audit.
Ready to harden your infrastructure? Stop fighting with noisy neighbors and slow disks. Deploy a KVM instance on CoolVDS today and experience the stability of true hardware isolation.