
Compliance as Code: Automating Security for the Pending GDPR Shift

Let’s be honest: looking at logs isn't security. It's archeology. If you are still manually hardening servers by editing /etc/ssh/sshd_config with vi, you are already non-compliant. The recent invalidation of Safe Harbor and the adoption of the General Data Protection Regulation (GDPR) in April have fundamentally shifted the landscape for Norwegian infrastructure. We have less than two years until the 2018 enforcement deadline, and Datatilsynet (The Norwegian Data Protection Authority) is not known for its leniency.

The days of "security through obscurity" are over. We have entered the era of Compliance as Code. If your infrastructure documentation doesn't match the actual state of your servers, you are exposed. I've spent the last month migrating a healthcare client from a non-compliant US-based cloud back to Norwegian soil, because the new Privacy Shield framework (barely a month old) feels shaky at best.

In this guide, I will walk you through automating security baselines using Ansible 2.1 on CentOS 7. This isn't theoretical. This is the exact playbook structure we use to ensure every CoolVDS instance meets CIS Benchmarks before it ever sees production traffic.

The "Drift" Problem

I once audited a financial services setup in Oslo. They had a pristine "Gold Image" for their VPS deployment. Security was tight. Root login disabled. SSH keys only. IPTables locked down. But six months later, a junior dev had enabled password auth on three nodes to debug a database connectivity issue and forgotten to revert it.

That configuration drift left a hole wide enough to drive a truck through. Automated configuration management isn't just about deploying apps; it's about enforcing state. If a file changes, the automation changes it back. That is compliance.
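Enforcement in practice is nothing more exotic than a scheduled re-run of your playbook. A minimal sketch as a cron drop-in (the playbook path, inventory, and log file are placeholders of mine, not a fixed convention):

```shell
# /etc/cron.d/compliance -- hypothetical paths; re-apply the baseline hourly
# so any manual change (like that password-auth "fix") is reverted on the next run
0 * * * * root ansible-playbook -i /opt/compliance/hosts.ini /opt/compliance/harden.yml >> /var/log/compliance.log 2>&1
```

With this in place, a hand-edited config survives at most an hour before the automation puts it back.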

Step 1: The Base Hardening Playbook

We are going to use Ansible. It’s agentless, which makes it perfect for managing a fleet of VPS instances without the overhead of a Chef client or Puppet agent eating up RAM. We want to target the basics: SSH, Sysctl networking, and package updates.

Here is a foundational playbook structure for a CentOS 7 system:

---
- hosts: all
  become: yes
  vars:
    ssh_port: 22
    allowed_users: [ "deploy", "admin" ]

  tasks:
    - name: Ensure latest security patches are installed
      # Ansible 2.1's yum module has no 'security' option (it arrived in a
      # later release), so we call yum directly here
      command: yum -y update --security

    - name: Secure SSH Configuration
      lineinfile:
        dest: /etc/ssh/sshd_config
        regexp: "{{ item.regexp }}"
        line: "{{ item.line }}"
        state: present
        validate: 'sshd -t -f %s'
      with_items:
        - { regexp: '^Port', line: "Port {{ ssh_port }}" }
        - { regexp: '^PasswordAuthentication', line: 'PasswordAuthentication no' }
        - { regexp: '^PermitRootLogin', line: 'PermitRootLogin no' }
        - { regexp: '^Protocol', line: 'Protocol 2' }
        - { regexp: '^X11Forwarding', line: 'X11Forwarding no' }
        - { regexp: '^AllowUsers', line: "AllowUsers {{ allowed_users | join(' ') }}" }
      notify: restart ssh

  handlers:
    - name: restart ssh
      service:
        name: sshd
        state: restarted

Notice the validate argument. This prevents you from deploying a broken config that locks you out of your own server—a mistake I've seen take down production environments more times than I care to admit. On CoolVDS, you have VNC console access to save you, but let’s try not to need it.
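Assuming the playbook above is saved as harden.yml, a typical workflow is to dry-run first and apply second (the inventory hostnames here are placeholders):

```shell
# hosts.ini -- hypothetical inventory
[web]
web01.example.no
web02.example.no

# Show what would change without touching anything, then enforce for real
ansible-playbook -i hosts.ini harden.yml --check --diff
ansible-playbook -i hosts.ini harden.yml
```

The --check --diff run doubles as your drift detector: a clean run means the fleet still matches the baseline.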

Step 2: Network Stack Hardening

Linux defaults are designed for compatibility, not security. For a server exposed to the public internet, we need to mitigate common attacks like SYN floods and ICMP redirects. We modify the kernel parameters via sysctl.

Add this to your Ansible tasks:

    - name: Harden Network Stack via Sysctl
      sysctl:
        name: "{{ item.key }}"
        value: "{{ item.value }}"
        state: present
        reload: yes
      with_dict:
        net.ipv4.conf.all.accept_redirects: 0
        net.ipv4.conf.all.send_redirects: 0
        net.ipv4.conf.all.accept_source_route: 0
        net.ipv4.tcp_syncookies: 1
        net.ipv4.tcp_max_syn_backlog: 2048
        net.ipv4.conf.all.log_martians: 1

Pro Tip: Enabling log_martians is incredibly useful for spotting spoofed IP packets in your logs. If you see your logs filling up with Martian sources, your firewall is doing its job, but your upstream provider might be leaking garbage traffic. We filter this at the edge on our network, but host-level verification is mandatory for banking-grade setups.
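To verify on the host that the values actually took effect, read them back from /proc; each file holds the live kernel value for the matching net.ipv4 key:

```shell
# Once the playbook has run, expect 1 for syncookies and log_martians,
# and 0 for the redirect/source-route settings
cat /proc/sys/net/ipv4/tcp_syncookies
cat /proc/sys/net/ipv4/conf/all/log_martians
cat /proc/sys/net/ipv4/conf/all/accept_redirects
```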

Step 3: Firewalld vs. Iptables

In CentOS 7, firewalld is the default abstraction over netfilter. While some purists stick to raw iptables scripts, firewalld supports dynamic zones which are excellent for complex setups. However, for a standard web server, we want to explicitly whitelist services.

# Tighten the default zone: drop unneeded services, allow only web traffic
firewall-cmd --permanent --remove-service=dhcpv6-client
firewall-cmd --permanent --add-service=http
firewall-cmd --permanent --add-service=https
firewall-cmd --reload

Translating this to Ansible ensures that if a package installation opens a new port (like a database default port), your automation closes it immediately on the next run.
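Expressed as Ansible tasks, the same intent looks like this using the firewalld module (a sketch; it assumes your interfaces sit in firewalld's default public zone):

```yaml
    - name: Allow web traffic, permanently and immediately
      firewalld:
        service: "{{ item }}"
        permanent: true
        immediate: true
        state: enabled
      with_items:
        - http
        - https

    - name: Drop the dhcpv6-client default service
      firewalld:
        service: dhcpv6-client
        permanent: true
        immediate: true
        state: disabled
```

Because these tasks are idempotent, a port opened behind your back is closed again on the very next scheduled run.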

Step 4: Audit with OpenSCAP

Automation is great, but how do you prove compliance to an auditor? OpenSCAP is the standard here. It scans your system against the XCCDF (Extensible Configuration Checklist Description Format) profiles.

Install the scanner:

yum install -y openscap-scanner scap-security-guide

Run a scan against the standard profile:

oscap xccdf eval --profile xccdf_org.ssgproject.content_profile_standard \
 --results /var/www/html/scan-report.xml \
 --report /var/www/html/scan-report.html \
 /usr/share/xml/scap/ssg/content/ssg-centos7-ds.xml

This generates an HTML report showing exactly where you pass and fail. Green is good. Red is a liability. You can script this to run weekly and email the results to your CISO.
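A sketch of that weekly job as a self-contained script (the script path, report location, and recipient address are placeholders; CentOS 7's heirloom mailx supports -a for attachments):

```shell
#!/bin/bash
# /usr/local/bin/weekly-scap.sh -- hypothetical path and recipient
set -u
REPORT=/var/www/html/scan-report.html
# oscap exits non-zero when rules fail; we want the report mailed either way,
# so no 'set -e' here
oscap xccdf eval \
  --profile xccdf_org.ssgproject.content_profile_standard \
  --report "$REPORT" \
  /usr/share/xml/scap/ssg/content/ssg-centos7-ds.xml
echo "Weekly OpenSCAP report for $(hostname)" \
  | mailx -s "SCAP scan: $(hostname)" -a "$REPORT" ciso@example.no
# Schedule it with a cron.d entry, e.g.:
#   0 4 * * 1 root /usr/local/bin/weekly-scap.sh
```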

The Infrastructure Reality Check

You can have the most hardened OS in the world, but if the underlying virtualization is weak, you are building a fortress on sand. This is where the choice of hosting provider becomes a compliance issue.

Many budget providers use container-based virtualization (like older OpenVZ implementations) where the kernel is shared among all tenants. If a vulnerability exists in that shared kernel, a neighbor could theoretically escape their container and access your memory space. In the context of GDPR and sensitive personal data, this is an unacceptable risk.

Why KVM is the Standard

We strictly use KVM (Kernel-based Virtual Machine) for all CoolVDS instances. KVM provides hardware-assisted virtualization. Your OS kernel is yours alone. It is isolated. The memory pages are distinct.

Feature           | Container (LXC/OpenVZ) | CoolVDS (KVM)
------------------|------------------------|-------------------------------
Kernel Isolation  | Shared (High Risk)     | Dedicated (Secure)
SELinux Support   | Limited/Disabled       | Full Control
Custom TCP Stacks | No                     | Yes (Required for VPNs/Docker)

Furthermore, data sovereignty is now critical. With the legal limbo of US transfers, keeping data physically in Norway eliminates a massive layer of legal complexity. Latency from Oslo to the rest of the country is effectively negligible—often under 10ms—which keeps your application snappy while keeping your legal team happy.

Final Thoughts

The transition to GDPR compliance won't happen overnight, but waiting until 2018 is a strategy for failure. Start automating your security baselines now. Treat your infrastructure as code. And ensure your code runs on hardware that respects the isolation your data requires.

Don't let legacy infrastructure compromise your compliance posture. Deploy a fully isolated, KVM-based instance on CoolVDS today and start building your Ansible inventory on solid ground.