The Era of Manual Hardening is Over
If you are still securing your servers by SSH-ing in and pasting commands from a text file, you are already compromised. It is 2015. The complexity of modern infrastructure means that human error is the single biggest threat to your uptime and your legal standing.
For those of us operating in Norway, the stakes are higher. The Norwegian Personal Data Act (Personopplysningsloven) and the watchful eye of Datatilsynet do not care that you "meant" to close port 23. They care about documented, reproducible control. With the looming uncertainty regarding the EU-US Safe Harbor framework (thanks to the Snowden revelations), keeping data on sovereign Norwegian soil isn't just patriotism—it's risk management.
This is not about installing a firewall. This is about Infrastructure as Code (IaC). We need to define compliance state in code, apply it automatically, and sleep at night knowing our servers in Oslo aren't leaking customer data to a botnet.
The Toolchain: Ansible and SCAP
In the past, we relied on heavy agents like Puppet or Chef. While powerful, they require infrastructure to manage infrastructure. For the pragmatic architect in 2015, Ansible has emerged as the superior choice for security hardening. It is agentless, pushes changes over plain OpenSSH, and needs nothing on the targets beyond Python and a login; there is no master server to build, patch, and secure before you can secure anything else.
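If you want proof of how little scaffolding that requires, a connectivity check is already a complete playbook. A minimal sketch, assuming Ansible 1.9 on the control machine, an SSH key on the targets, and an inventory group called webservers:

---
# ping.yml: verify agentless connectivity over plain SSH.
# Nothing runs on the targets except Python, invoked over OpenSSH.
- hosts: webservers
  gather_facts: no
  tasks:
    - name: Confirm Ansible can reach the host and execute modules
      ping:

Run it with ansible-playbook -i hosts ping.yml (pointing -i at whatever inventory file you keep). If that comes back green, every playbook in this article will run against the same inventory.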
To prove compliance, we combine this with the Security Content Automation Protocol (SCAP). Specifically, the OpenSCAP scanner on CentOS 7 allows us to validate our servers against the PCI-DSS or Draft STIG profiles automatically.
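The scan itself belongs in the same automation. Here is a sketch of an Ansible play that installs the scanner and evaluates a CentOS 7 host against the PCI-DSS profile; the package names come from the base repos, but the data-stream path and profile ID vary between scap-security-guide releases, so confirm them with oscap info before trusting the result:

---
- hosts: webservers
  sudo: yes
  tasks:
    - name: Install the OpenSCAP scanner and SCAP Security Guide content
      yum: name={{ item }} state=present
      with_items:
        - openscap-scanner
        - scap-security-guide

    - name: Evaluate the host against the PCI-DSS profile
      # The profile ID and content path below are typical for CentOS 7,
      # but they differ between scap-security-guide versions; verify
      # them with 'oscap info' before relying on this task.
      command: >
        oscap xccdf eval
        --profile xccdf_org.ssgproject.content_profile_pci-dss
        --results /root/scap-results.xml
        --report /root/scap-report.html
        /usr/share/xml/scap/ssg/content/ssg-centos7-ds.xml
      register: scap_scan
      # oscap exits with 2 when rules fail: that is a finding, not an error.
      failed_when: scap_scan.rc != 0 and scap_scan.rc != 2

Pull /root/scap-report.html back with the fetch module and you have a timestamped audit artifact for every host, generated the same way every time. That is the documented, reproducible control Datatilsynet wants to see.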
Practical Implementation: Securing SSH
Let's look at a concrete example. The default SSH configuration on most VPS images is too permissive. We need to disable root login, enforce Protocol 2, and ban empty passwords. Doing this manually on 50 servers is madness. Doing it via Ansible is governance.
Here is a snippet from a standard hardening playbook (compatible with Ansible 1.9, released last month):
---
- hosts: webservers
  sudo: yes
  vars:
    ssh_port: 22
  tasks:
    - name: Ensure SSH Protocol 2 is enforced
      lineinfile:
        dest: /etc/ssh/sshd_config
        regexp: '^Protocol'
        line: 'Protocol 2'
        state: present
      notify: restart ssh

    - name: Disable Root Login
      lineinfile:
        dest: /etc/ssh/sshd_config
        regexp: '^PermitRootLogin'
        line: 'PermitRootLogin no'
        state: present
      notify: restart ssh

  handlers:
    - name: restart ssh
      service: name=sshd state=restarted

When you run this, you aren't just changing settings. You are enforcing a policy. If a junior dev changes PermitRootLogin to yes for debugging, the next Ansible run reverts it immediately. That is compliance.
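The playbook enforces two of the three items from our checklist. The third, banning empty passwords, is one more task in exactly the same pattern (a sketch to drop into the tasks list above):

    - name: Ban empty passwords
      lineinfile:
        dest: /etc/ssh/sshd_config
        regexp: '^PermitEmptyPasswords'
        line: 'PermitEmptyPasswords no'
        state: present
      notify: restart ssh

Because every task notifies the same handler, sshd is restarted once at the end of the run, no matter how many rules were out of line.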
The Infrastructure Layer: Why KVM Matters
Automation manages the OS, but the hypervisor manages the reality. In the budget hosting market, OpenVZ is popular because it allows providers to oversell resources. However, from a security and compliance standpoint, OpenVZ is a shared kernel environment. If the host kernel panics, or if there is a kernel-level exploit, container isolation can fail.
For strict data segregation—essential for handling Norwegian healthcare or financial data—you need full hardware virtualization. This is why CoolVDS exclusively utilizes KVM (Kernel-based Virtual Machine). With KVM, your memory and kernel are truly yours. No noisy neighbors stealing CPU cycles, and significantly higher barriers against cross-VM side-channel attacks.
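That requirement can be codified too. Ansible's fact gathering reports the virtualization layer it detects, so a play can simply refuse to configure hosts that are not KVM guests. A sketch, with the caveat that the exact fact values (kvm, guest) are what the setup module typically reports and are worth confirming against your own hosts:

---
- hosts: webservers
  tasks:
    - name: Refuse to manage hosts that are not KVM guests
      # ansible_virtualization_type / ansible_virtualization_role come
      # from fact gathering; 'kvm' and 'guest' are the values typically
      # seen on a KVM instance.
      fail:
        msg: "Expected a KVM guest, found {{ ansible_virtualization_type }}/{{ ansible_virtualization_role }}"
      when: ansible_virtualization_type != 'kvm' or ansible_virtualization_role != 'guest'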
Pro Tip: When auditing your provider by hand, don't trust /proc/cpuinfo alone; an OpenVZ container simply shows you the host's CPU. On CentOS 7, systemd-detect-virt reports kvm under full virtualization, while the presence of /proc/user_beancounters or a /proc/vz directory gives an OpenVZ container away. Don't risk compliance on shared kernels.

The Performance Penalties of Encryption
Compliance often requires disk encryption (LUKS) or heavy SSL/TLS termination. These operations are mathematically expensive. On legacy spinning rust (HDD), the I/O latency combined with encryption overhead can kill your application's response time.
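Before blaming the disks, check whether the CPU is helping. Modern Xeons offload AES via the AES-NI instruction set, and a KVM host can expose that flag to the guest; if it is missing, LUKS and TLS fall back to much slower software AES. A sketch of a check, grepping the flag out of /proc/cpuinfo:

---
- hosts: webservers
  tasks:
    - name: Check whether AES-NI is exposed to this guest
      shell: grep -m1 -o '\baes\b' /proc/cpuinfo
      register: aesni
      changed_when: false
      failed_when: false

    - name: Report the result
      # rc 0 means the 'aes' flag is present; anything else means
      # hardware AES is not visible from inside the guest.
      debug:
        msg: "AES-NI {{ 'available' if aesni.rc == 0 else 'NOT available, expect slower LUKS/TLS' }}"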
In 2015, SSDs are shifting from luxury to requirement. However, not all flash storage is equal. To maintain low latency while running real-time encryption, we need high IOPS. CoolVDS implements enterprise-grade SSD arrays with optimized I/O schedulers (Deadline or Noop) to ensure that security doesn't come at the cost of speed.
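You can verify and pin the scheduler from inside the guest as well, rather than take it on faith. A sketch, assuming a virtio disk that appears as vda (adjust the device name to match your instance); writing to sysfs only lasts until reboot, so a permanent setting belongs on the kernel command line (elevator=deadline) or in a udev rule:

---
- hosts: webservers
  sudo: yes
  tasks:
    - name: Show the current I/O scheduler for the virtio disk
      command: cat /sys/block/vda/queue/scheduler
      register: io_sched
      changed_when: false

    - name: Switch to the deadline scheduler for this boot
      # Runtime-only change; make it permanent via 'elevator=deadline'
      # on the kernel command line or a udev rule once verified.
      shell: echo deadline > /sys/block/vda/queue/scheduler
      when: "'[deadline]' not in io_sched.stdout"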
Latency to Oslo
Finally, consider the physics. If your users are in Bergen, Trondheim, or Oslo, hosting in Frankfurt adds 20-30ms of round-trip latency. Hosting in the US adds 100ms+. For a static site, this is fine. For a database-driven application, where every page view involves TLS handshakes and a chain of query round-trips, that latency compounds.
By hosting on CoolVDS infrastructure located directly in Norway, you achieve two goals:
- Data Sovereignty: Your bytes stay within Norwegian borders, satisfying strict interpretations of the Personal Data Act.
- Performance: You get single-digit millisecond latency to your local user base.
Next Steps
Compliance is a moving target, but your infrastructure shouldn't be. Don't rely on "secure by default" promises. Script your security, audit your code, and deploy on isolation-first hardware.
Ready to harden your stack? Spin up a KVM instance on CoolVDS today and test your Ansible playbooks on true, isolated hardware with low latency to the Norwegian market.