Stop SSH-ing into Production: The Git-Driven Infrastructure Workflow
It is 3:00 AM on a Tuesday. Your monitoring system is screaming because the load balancer just decided to reject connections. Why? Because three weeks ago, a junior developer SSH'd into the live server and manually tweaked /etc/nginx/nginx.conf to fix a "temporary" issue. That change was never documented, never versioned, and just got overwritten by your latest automated deployment script.
If this scenario sounds familiar, your workflow is broken. In 2015, there is absolutely no excuse for maintaining servers by hand. The era of the "pet" server is over; we are managing cattle now.
At CoolVDS, we see this daily. Customers migrate to our high-performance KVM instances but bring their fragile, manual workflows with them. Today, we are discussing the Git-Centric Workflow (what some industry leaders are starting to call "Operations by Pull Request"). This is how you stabilize your infrastructure, reduce Total Cost of Ownership (TCO), and sleep through the night.
The Single Source of Truth
The core philosophy is simple: If it is not in Git, it does not exist.
Your server configuration, your firewall rules, and your application binaries must be defined in code. When you need to increase the worker_connections in Nginx or change a MySQL buffer pool size, you don't use vim on the server. You edit a file in your repository, commit it, and let a machine apply the change.
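As a concrete (and purely illustrative) sketch, the tunables you would otherwise edit by hand can live in a vars file such as group_vars/webservers.yml. The variable names below are our own examples, not Ansible built-ins; they get rendered into your configuration templates by the playbook shown later:

# group_vars/webservers.yml -- illustrative variable names, not Ansible defaults
nginx_worker_connections: 4096        # rendered into templates/nginx.conf.j2
mysql_innodb_buffer_pool_size: "2G"   # rendered into a my.cnf template

Raising worker_connections is now a one-line commit that goes through review, not a live edit that nobody remembers three weeks later.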
The Stack: Git, Jenkins, and Ansible
While Docker is gaining massive traction (version 1.7 just dropped), many production environments in Europe still rely on robust configuration management for the host OS. Here is the battle-tested setup we recommend for deploying to a VPS in Norway:
- Version Control: Git (hosted on GitLab or GitHub).
- CI Server: Jenkins (the workhorse).
- Configuration Management: Ansible (agentless and simple).
- Infrastructure: CoolVDS KVM instances (Pure SSD).
The Workflow in Action
Let's look at a practical example. You need to deploy a PHP application and ensure Nginx is configured correctly. Instead of running commands manually, you define an Ansible Playbook.
Here is a snippet of what your `site.yml` should look like:
---
# site.yml -- 'webservers' is whatever host group your inventory defines
- hosts: webservers
  become: yes
  tasks:
    - name: Ensure Nginx is installed and bleeding edge
      apt:
        name: nginx
        state: latest
        update_cache: yes

    - name: Push Nginx Configuration
      template:
        src: templates/nginx.conf.j2
        dest: /etc/nginx/nginx.conf
        mode: 0644
      notify: restart nginx

    - name: Ensure Application Directory Exists
      file:
        path: /var/www/html/myapp
        state: directory
        owner: www-data
        group: www-data
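One detail trips people up: `notify: restart nginx` does nothing unless a matching handler exists. A minimal sketch of the handlers block that would sit alongside `tasks:` in the same play:

  handlers:
    - name: restart nginx
      service:
        name: nginx
        state: restarted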
When you push this code to your `master` branch, a webhook triggers Jenkins. Jenkins spins up a worker, connects to your CoolVDS instance via SSH keys, and runs `ansible-playbook`. The result? Identical configuration, every single time.
Pro Tip: Network latency kills deployment speed. If your dev team is in Oslo or Bergen, hosting your Git repositories and CI servers in US-East will slow down your pipelines. CoolVDS peers directly at NIX (Norwegian Internet Exchange), ensuring your push-to-deploy latency is measured in single-digit milliseconds.
Why Hardware Matters for Automation
Many developers ignore the underlying hardware when designing these workflows. They shouldn't. Automated deployments often involve high I/O operations: compiling assets, unzipping artifacts, and restarting services.
On a budget VPS with spinning rust (HDDs) or oversold shared storage, a simple `apt-get upgrade` can hang on I/O wait (iowait). This causes CI pipelines to time out and fail.
This is where NVMe storage enters the conversation. NVMe is still a premium technology in 2015, but CoolVDS is aggressively rolling out NVMe-backed storage tiers. The difference in random read/write speeds compared to standard SATA SSDs is drastic. When your Ansible run copies 10,000 small PHP files, NVMe ensures the operation finishes in seconds, not minutes.
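Module choice matters here too. As a rough sketch (the paths are placeholders), Ansible's synchronize module wraps rsync and only transfers the files that actually changed, which pairs nicely with fast storage when you ship thousands of small files:

- name: Deploy application files (rsync only copies what changed)
  synchronize:
    src: build/myapp/          # placeholder: artifact directory on the Jenkins worker
    dest: /var/www/html/myapp
    delete: yes                # drop files that no longer exist in the repository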
Security and Compliance: The Norwegian Context
Automating deployment isn't just about speed; it's about governance. With strict data privacy regulations in the EU and specific Norwegian mandates (Personopplysningsloven), you need to know exactly who changed what and when.
A Git-centric workflow provides an automatic audit trail. Every change to your infrastructure is logged in the commit history. "Who opened port 22 to the world?" Check `git blame`.
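Of course, `git blame` can only answer that question if the firewall itself lives in the repository. A minimal sketch using Ansible's ufw module, with placeholder addresses rather than a recommendation:

- name: Allow SSH only from the office VPN range (placeholder CIDR)
  ufw:
    rule: allow
    port: 22
    proto: tcp
    from_ip: 10.8.0.0/24

- name: Enable UFW with a default-deny inbound policy
  ufw:
    state: enabled
    policy: deny

Once this task is the only way ports get opened, the commit history is your firewall changelog.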
Furthermore, by hosting on CoolVDS, you ensure data residency remains within Norway. This is critical for compliance with the Data Protection Directive. We combine this with hardware-level DDoS protection to ensure that your automated pipelines aren't disrupted by malicious traffic.
Transitioning from Manual to Automated
You don't need to rewrite your entire infrastructure overnight. Start small:
- Audit: Document every manual step you take to set up a server.
- Script: Turn those steps into a simple Bash script or Ansible playbook (a starter sketch follows this list).
- Verify: Spin up a fresh CoolVDS instance and run the script. Does it work?
- Automate: Hook it up to Jenkins.
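To make the "Script" step concrete, here is a sketch of what a first bootstrap.yml might look like; the package list is a stand-in for whatever your audit turned up:

---
- hosts: webservers
  become: yes
  tasks:
    - name: Install the packages you used to install by hand
      apt:
        name: "{{ item }}"
        state: present
        update_cache: yes
      with_items:
        - nginx
        - php5-fpm
        - git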
Stop treating your servers like pets. If a server acts up, you should be able to terminate it and let your code provision a new one automatically. That is the power of modern managed hosting and code-driven infrastructure.
Ready to build a pipeline that actually works? Deploy a high-performance KVM instance on CoolVDS today and stop fearing the 3:00 AM pager.