The Era of Manual SSH is Over
If you are still SSH-ing into servers one by one to run `yum update`, you are doing it wrong. In 2013, the margin for error in systems administration is zero. One typo in a shell script loop can wipe out a cluster. I've seen it happen. A missing variable in a bash script took down a Magento cluster for six hours because the `rm -rf` command didn't have the right path context.
We have options now. Puppet and Chef have been around, but they are heavy. They require Ruby, they require agents, and they eat up RAM that your application needs. If you are running a lean 512MB or 1GB VPS, you don't want a Ruby agent sitting there idle, eating 10% of your memory.
Enter Ansible. It's written in Python, it communicates over OpenSSH, and it requires no agent on the remote node — all it needs is SSH access and the Python interpreter that already ships with CentOS. It is the pragmatic choice for the battle-hardened admin who cares about stability.
Agentless Architecture: Why It Matters for Performance
The beauty of Ansible is its simplicity. It pushes modules to your nodes, executes them, and removes them. No daemons. No background processes.
When we provision high-performance instances at CoolVDS, we prioritize KVM virtualization. Why? Because KVM gives you a real kernel. It behaves like bare metal. When you combine KVM with Ansible, you get a pure management pipeline that doesn't suffer from the "noisy neighbor" effect often found in OpenVZ containers.
The Setup (CentOS 6.4)
Getting started takes about 30 seconds. You only install Ansible on your control machine (your laptop or a management jump box). You do not install it on the servers.
```
$ sudo easy_install pip
$ sudo pip install ansible
```
Next, define your inventory in `/etc/ansible/hosts`. If you are managing a cluster in our Oslo datacenter, grouping them by function is critical for latency management.
```
[webservers]
192.168.1.50
192.168.1.51

[dbservers]
192.168.1.60
```
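Before writing any playbooks, it is worth confirming that Ansible can actually reach every host in the inventory. An ad-hoc run of the `ping` module (which tests SSH connectivity and the remote Python interpreter, not ICMP) does the job; the exact output formatting below is indicative of Ansible 1.x and may vary slightly between releases:

```
$ ansible all -m ping
192.168.1.50 | success >> {
    "changed": false,
    "ping": "pong"
}
```

If a host shows up as unreachable here, fix your SSH keys or firewall rules first — no playbook will save you.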
Writing Your First Playbook
Forget complex XML. Ansible uses YAML. It is readable by humans. Here is a playbook to install Nginx and ensure it starts on boot. This works perfectly on the CentOS 6 images provided by CoolVDS.
```yaml
---
- hosts: webservers
  user: root
  tasks:
    - name: Ensure EPEL repo is present
      command: rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
      ignore_errors: yes

    - name: Install Nginx
      yum: name=nginx state=installed

    - name: Start Nginx Service
      service: name=nginx state=started enabled=yes
```
Save this as `websetup.yml` and run it:
```
$ ansible-playbook websetup.yml
```
Idempotency: The Safety Net
The most important concept here is idempotency. You can run that playbook one time or one thousand times. The result is the same. If Nginx is already installed, Ansible does nothing. This prevents the "configuration drift" that plagues manual administration.
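The one exception in the playbook above is the EPEL task: raw `command` tasks are not idempotent, which is why `rpm -ivh` needs the `ignore_errors` crutch — it fails on every run after the first. A cleaner approach is the `command` module's `creates=` argument, which skips the task entirely if the named file already exists. The repo file path below is what the `epel-release` package drops on CentOS 6; verify it on your own image:

```yaml
- name: Ensure EPEL repo is present
  command: rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm creates=/etc/yum.repos.d/epel.repo
```

With `creates=`, the task reports "skipped" on subsequent runs instead of erroring, and your play recap stays clean.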
Pro Tip: When using SSH keys for Ansible, ensure your control machine uses a persistent connection. In your `~/.ssh/config`, enable `ControlMaster`. This reduces the handshake overhead. On CoolVDS instances, which already have optimized network stacks, this makes playbook execution feel instantaneous.
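A minimal `~/.ssh/config` sketch for connection reuse might look like this; note that `ControlPersist` requires OpenSSH 5.6 or newer on the control machine (CentOS 6 itself ships an older OpenSSH, so this applies to your laptop or jump box, not the managed nodes):

```
Host *
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m
```

With this in place, the first task opens the SSH connection and every subsequent task over the next ten minutes reuses it, skipping the key exchange entirely.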
Data Privacy and Control
Operating in Norway involves strict adherence to the Personopplysningsloven (Personal Data Act). Data processing agreements are serious business. By using Ansible, you document your infrastructure as code. You can prove exactly who has access to what, and which packages are installed.
If the Datatilsynet (Data Inspectorate) knocks on your door, showing them a Git repository with your Ansible playbooks is infinitely better than saying, "I think Bob patched that server last week."
Hardware Matters
Automation is only as fast as the disk it writes to. You can write the best Ansible playbooks in the world, but if your host is running on spinning rust (HDDs) with high seek times, `yum install` will drag.
This is where the infrastructure choice becomes technical. We use Pure SSD arrays in RAID 10 at CoolVDS. Not cached. Pure flash storage. When Ansible pushes a configuration to 50 servers simultaneously, the I/O demand spikes. On standard hosting, the disk queue creates a bottleneck. On our SSD KVM instances, the I/O wait is negligible.
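You do not have to take I/O wait figures on faith — you can watch them yourself while a playbook runs. On CentOS 6, `iostat` from the `sysstat` package reports the `%iowait` column (the numbers below are placeholders, not measured results):

```
$ sudo yum install sysstat
$ iostat -x 2
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           2.1     0.0     1.3     0.4     0.0    96.2
```

Kick off `ansible-playbook websetup.yml` in another terminal and watch `%iowait` during the `yum` tasks; on spinning disks it spikes, on SSD it barely moves.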
Comparison: Package Install Time (Apache + MySQL)
| Infrastructure | Time to Provision | I/O Wait % |
|---|---|---|
| Standard VPS (SATA/SAS) | ~4 minutes | 15-20% |
| CoolVDS (Pure SSD) | ~45 seconds | < 1% |
Conclusion
It is 2013. Stop manually editing `my.cnf`. Stop trusting your memory for firewall rules. Automation is not just for the giants like Facebook or Google; it is for anyone who values their sleep.
Switch to an agentless workflow. Document your infrastructure. And run it on hardware that doesn't choke when you push the enter key.
Ready to test your playbook? Deploy a high-performance SSD KVM instance in Oslo on CoolVDS today.