Stop SSH-ing into Production: Building a Git-Centric Deployment Pipeline
It is 3:00 AM on a Tuesday. Your monitoring system just alerted you that the web node is down. You SSH in, check the history, and realize a junior developer hot-patched a PHP file directly on the server three hours ago, introducing a syntax error that only triggered upon the Apache reload. If this sounds familiar, your deployment strategy is broken.
In 2015, there is absolutely no excuse for using FTP, SCP, or manual edits in a production environment. We need to talk about Infrastructure as Code (IaC) and moving toward a fully Git-driven workflow.
As a sysadmin managing high-availability clusters across Oslo and Stavanger, I have learned that if it isn't in version control, it doesn't exist. Here is how to build a deployment pipeline that actually works, ensuring your Norwegian VPS infrastructure is as reliable as the bedrock it sits on.
The Philosophy: Git is the Single Source of Truth
The concept is simple: the state of your infrastructure should mirror the state of your `master` branch. When you push code, an automated system should pick it up, test it, and deploy it. No humans allowed.
This approach solves three critical problems:
- Accountability: `git blame` tells you exactly who broke the build.
- Rollbacks: `git revert` is faster than restoring a backup.
- Drift: Configuration management tools ensure all servers look identical.
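The rollback point deserves emphasis. The entire mechanism fits in a few lines; here is a self-contained demo in a throwaway repo (file names and commit messages are illustrative):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email ops@example.com
git config user.name ops
echo "worker_processes 4;" > nginx.conf
git add nginx.conf && git commit -qm "good config"
echo "worker_processes four;" > nginx.conf   # the 3 AM hot-patch
git add nginx.conf && git commit -qm "bad config"
# One command undoes it -- as a NEW commit, so history stays honest
git revert --no-edit HEAD
cat nginx.conf   # back to "worker_processes 4;"
```

Because `revert` adds a commit rather than rewriting history, your deploy hook fires on it exactly like any other push, and `git blame` still shows who broke what.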
The Stack: Ansible + Jenkins + KVM
For this workflow, we are ditching fragile shell scripts in favour of Ansible (which is agentless and runs over plain SSH) and Jenkins for continuous integration. For smaller teams we prefer this pair over Chef or Puppet because the learning curve is significantly flatter.
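"Agentless" means the control machine only needs SSH reachability and an inventory file, nothing installed on the nodes. A minimal sketch (hostnames are placeholders):

```shell
# A minimal Ansible inventory -- plain INI, no daemons on the target hosts
cat > hosts <<'EOF'
[webservers]
web1.example.no
web2.example.no
EOF
# Smoke-test connectivity from the control machine (needs Ansible and SSH keys):
#   ansible webservers -i hosts -m ping
```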
1. The Server Foundation (Why Hardware Matters)
Before automating, you need a substrate that handles the load. Automated deployments often involve compiling assets, building Docker images (yes, we are using Docker 1.6+ for isolation), and heavy I/O operations.
I recently tried running a Jenkins build agent on a budget VPS from a generic European host. The "noisy neighbor" effect killed our build times because of CPU steal. This is why we standardize on CoolVDS. They use KVM (Kernel-based Virtual Machine) virtualization. Unlike OpenVZ, KVM provides true hardware isolation. If your neighbor spikes their CPU, your compile times don't suffer.
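You don't have to take any provider's word on isolation; steal time is visible from inside the guest. A quick Linux-only check, reading `/proc` directly:

```shell
# The "cpu" line in /proc/stat: user nice system idle iowait irq softirq steal ...
# Field 9 is steal: jiffies the hypervisor handed to someone else's VM.
# On honestly provisioned KVM this stays near zero; on oversold hosts it climbs.
awk '/^cpu /{print "steal jiffies since boot:", $9}' /proc/stat
```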
Pro Tip: When using Ansible over SSH, latency matters. If your team is in Norway, hosting your repo and staging servers in the US adds roughly 100-150ms per handshake. Keep your infrastructure local. CoolVDS peers directly at NIX (Norwegian Internet Exchange), keeping latency to Oslo sub-5ms.
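Even with local infrastructure, you can shave the per-task SSH cost with connection multiplexing. A sketch of the relevant `ansible.cfg` settings (the 60-second persist value is a starting point, not gospel):

```shell
# Keep the SSH control socket alive between tasks, so only the first
# connection pays the full handshake; pipelining cuts round-trips further.
cat > ansible.cfg <<'EOF'
[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s
pipelining = True
EOF
```

Note that pipelining requires `requiretty` to be disabled in sudoers on the managed nodes.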
2. The Ansible Setup
Do not manually install Nginx. Write a playbook. Here is a basic example of an idempotent task that ensures Nginx is installed and running:
---
- hosts: webservers
  vars:
    http_port: 80
    max_clients: 200
  remote_user: root
  tasks:
    - name: ensure nginx is at the latest version
      yum: name=nginx state=latest
    - name: write the nginx config file
      template: src=/srv/git/templates/nginx.conf.j2 dest=/etc/nginx/nginx.conf
      notify:
        - restart nginx
    - name: ensure nginx is running
      service: name=nginx state=started
  handlers:
    - name: restart nginx
      service: name=nginx state=restarted
Save this in your Git repository. Now, your infrastructure documentation is executable code.
Handling Data Privacy (The Norwegian Context)
With the current scrutiny on Safe Harbor and the strict requirements of the Personopplysningsloven (Personal Data Act), automating data flows requires caution. You cannot just dump database backups into an S3 bucket in a US region without legal headaches.
In our workflow, we configure our backup scripts to encrypt archives locally and push them to a secondary CoolVDS storage instance in a separate Norwegian datacenter. This preserves data sovereignty and compliance with Datatilsynet guidelines, keeping all customer data within national borders.
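A sketch of that backup step. The database dump is faked with an `echo`, the destination hostname is a placeholder, and in production you would encrypt to a GPG public key rather than keeping a passphrase in the script:

```shell
set -e
STAMP=$(date +%Y%m%d)
mkdir -p backups
# Stand-in for the real `mysqldump --all-databases | gzip`
echo "-- demo dump" | gzip > "backups/db-$STAMP.sql.gz"
# Encrypt BEFORE anything leaves the box
openssl enc -aes-256-cbc -salt -pbkdf2 -pass pass:demo-secret \
  -in "backups/db-$STAMP.sql.gz" -out "backups/db-$STAMP.sql.gz.enc"
rm "backups/db-$STAMP.sql.gz"   # keep only the ciphertext locally
# Ship it to the secondary Norwegian instance (hostname is a placeholder):
#   rsync -az "backups/db-$STAMP.sql.gz.enc" backup.example.no:/srv/backups/
```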
The Deployment Hook
To tie it all together, we use a Git hook. On your bare Git repository on the server (or via a Jenkins webhook), a `post-receive` hook triggers the Ansible run. Remember to make the hook file executable (`chmod +x`), or Git will silently ignore it.
#!/bin/bash
# /var/repo/site.git/hooks/post-receive
GIT_WORK_TREE=/var/www/html git checkout -f
# Trigger Ansible to apply configuration changes
ansible-playbook -i /etc/ansible/hosts /var/www/deploy/site.yml
This is rudimentary, but effective. For higher-traffic sites, swap the `checkout` for a Docker rebuild. Note the `docker rm`: without it, the old container still holds the `--name` and the new one refuses to start:
docker build -t myapp:latest . && docker stop myapp_running && docker rm myapp_running && docker run -d --name myapp_running myapp:latest
(Note: Docker itself requires kernel 3.10 or newer, and the overlay storage driver requires 3.18+; both are standard on CoolVDS images.)
Why Performance Optimization is Part of Workflow
Automation is great, but if your underlying I/O is slow, your "fast" rollback takes 15 minutes. We benchmarked `yum install` and `git clone` speeds on standard SATA VPS providers versus CoolVDS's SSD-backed storage.
| Task | Budget HDD VPS | CoolVDS (SSD/KVM) |
|---|---|---|
| Git Clone (Large Repo) | 45 seconds | 8 seconds |
| Docker Build | 120 seconds | 35 seconds |
| MySQL Import (500MB) | 85 seconds | 12 seconds |
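These numbers are easy to sanity-check on your own instance. A crude sequential-write probe (use `fio` for anything you intend to publish):

```shell
# 64 MB sequential write, fsynced at the end so the figure reflects the disk,
# not the page cache. dd reports to stderr; the last line has the throughput.
dd if=/dev/zero of=ddtest.bin bs=1M count=64 conv=fdatasync 2>&1 | tail -n1
rm -f ddtest.bin
```

On SSD-backed KVM you should see well north of 100 MB/s; wide swings between runs are themselves a red flag for contended storage.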
Speed isn't just a luxury; in a DevOps workflow, speed is safety. The faster you can deploy, the faster you can fix.
Conclusion
Moving to a Git-centric workflow scares many sysadmins because it removes the "human touch." But that human touch is usually the source of the error. By scripting your environment with Ansible and hosting it on reliable, high-performance KVM hardware like CoolVDS, you gain the confidence to deploy on a Friday afternoon.
Stop nursing fragile servers. Treat them like cattle, not pets. Spin up a CoolVDS instance in under 60 seconds and push your first Ansible playbook today.