Stop SSH-ing into Production: The Git-Driven Infrastructure Workflow

It is 3:00 AM on a Tuesday. Your monitoring system is screaming because the load balancer just decided to reject connections. Why? Because three weeks ago, a junior developer SSH'd into the live server and manually tweaked /etc/nginx/nginx.conf to fix a "temporary" issue. That change was never documented, never versioned, and just got overwritten by your latest automated deployment script.

If this scenario sounds familiar, your workflow is broken. In 2015, there is absolutely no excuse for maintaining servers by hand. The era of the "Pet" server is over; we are managing Cattle now.

At CoolVDS, we see this daily. Customers migrate to our high-performance KVM instances but bring their fragile, manual workflows with them. Today, we are discussing the Git-Centric Workflow (what some industry leaders are starting to call "Operations by Pull Request"). This is how you stabilize your infrastructure, reduce Total Cost of Ownership (TCO), and sleep through the night.

The Single Source of Truth

The core philosophy is simple: If it is not in Git, it does not exist.

Your server configuration, your firewall rules, and your application binaries must be defined in code. When you need to increase the worker_connections in Nginx or change a MySQL buffer pool size, you don't use vim on the server. You edit a file in your repository, commit it, and let a machine apply the change.
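
For example, that `worker_connections` value can live as a variable in your repository, with a Jinja2 template reading it at deploy time. A minimal sketch, using a hypothetical group_vars file and illustrative variable names:

# group_vars/webservers.yml -- tuning values live in Git, not on the server
# (the file path and variable names here are illustrative)
nginx_worker_connections: 4096
mysql_innodb_buffer_pool_size: 2G

# templates/nginx.conf.j2 would then contain a line such as:
#   worker_connections {{ nginx_worker_connections }};

Raise the number, commit, push, and let the pipeline roll it out to every node.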

The Stack: Git, Jenkins, and Ansible

While Docker is gaining massive traction (version 1.7 just dropped), many production environments in Europe still rely on robust configuration management for the host OS. Here is the battle-tested setup we recommend for deploying to a VPS in Norway:

  1. Version Control: Git (hosted on GitLab or GitHub).
  2. CI Server: Jenkins (the workhorse).
  3. Configuration Management: Ansible (agentless and simple).
  4. Infrastructure: CoolVDS KVM instances (Pure SSD).

The Workflow in Action

Let's look at a practical example. You need to deploy a PHP application and ensure Nginx is configured correctly. Instead of running commands manually, you define an Ansible Playbook.

Here is a snippet of what your site.yml should look like:

---
- hosts: webservers   # the inventory group name is up to you
  become: yes
  tasks:
    - name: Ensure Nginx is installed and bleeding edge
      apt:
        name: nginx
        state: latest
        update_cache: yes

    - name: Push Nginx Configuration
      template:
        src: templates/nginx.conf.j2
        dest: /etc/nginx/nginx.conf
        mode: 0644
      notify: restart nginx

    - name: Ensure Application Directory Exists
      file:
        path: /var/www/html/myapp
        state: directory
        owner: www-data
        group: www-data
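
Note that `notify: restart nginx` assumes a matching handler exists in the same play. A minimal sketch, sitting at the same indentation level as `tasks:`:

  handlers:
    - name: restart nginx
      service:
        name: nginx
        state: restarted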

When you push this code to your `master` branch, a webhook triggers Jenkins. Jenkins spins up a worker, connects to your CoolVDS instance via SSH keys, and runs ansible-playbook. The result? Identical configuration, every single time.
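
If you want the Jenkins job itself to live in Git as well, Jenkins Job Builder lets you describe it in YAML and push it to the server. A rough sketch, assuming a polling trigger as a fallback for the webhook; the job name, repository URL, and inventory file are placeholders:

- job:
    name: deploy-infrastructure
    scm:
      - git:
          url: git@gitlab.example.com:ops/infrastructure.git
          branches:
            - master
    triggers:
      - pollscm: "H/5 * * * *"    # a webhook-based trigger plugin can replace polling
    builders:
      - shell: ansible-playbook -i production site.yml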

Pro Tip: Network latency kills deployment speed. If your dev team is in Oslo or Bergen, hosting your Git repositories and CI servers in US-East will slow down your pipelines. CoolVDS peers directly at NIX (Norwegian Internet Exchange), ensuring your push-to-deploy latency is measured in single-digit milliseconds.

Why Hardware Matters for Automation

Many developers ignore the underlying hardware when designing these workflows. They shouldn't. Automated deployments often involve high I/O operations: compiling assets, unzipping artifacts, and restarting services.

On a budget VPS with spinning rust (HDDs) or oversold shared storage, a simple `apt-get upgrade` can hang on I/O wait (iowait). This causes CI pipelines to time out and fail.

This is where NVMe storage enters the conversation. Although still a premium technology in 2015, CoolVDS is aggressively rolling out NVMe-backed storage tiers. The difference in random read/write speeds compared to standard SATA SSDs is drastic. When your Ansible script tries to copy 10,000 small PHP files, NVMe ensures the operation finishes in seconds, not minutes.

Security and Compliance: The Norwegian Context

Automating deployment isn't just about speed; it's about governance. With strict data privacy regulations in the EU and specific Norwegian mandates (Personopplysningsloven), you need to know exactly who changed what and when.

A Git-centric workflow provides an automatic audit trail: every change to your infrastructure is logged in the commit history. "Who opened port 22 to the world?" Check `git blame`.

Furthermore, by hosting on CoolVDS, you ensure data residency remains within Norway. This is critical for compliance with the Data Protection Directive. We combine this with hardware-level DDoS protection to ensure that your automated pipelines aren't disrupted by malicious traffic.

Transitioning from Manual to Automated

You don't need to rewrite your entire infrastructure overnight. Start small:

  1. Audit: Document every manual step you take to set up a server.
  2. Script: Turn those steps into a simple Bash script or Ansible playbook (see the sketch below this list).
  3. Verify: Spin up a fresh CoolVDS instance and run the script. Does it work?
  4. Automate: Hook it up to Jenkins.
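
For step 2, a first playbook does not need to be clever. Here is a minimal bootstrap sketch, assuming a hypothetical `newbox` inventory group, a `deploy` user, and a CI public key at files/jenkins_id_rsa.pub (swap in whatever your manual checklist actually does):

---
- hosts: newbox
  become: yes
  tasks:
    - name: Install the packages you used to install by hand
      apt:
        name: "{{ item }}"
        state: present
        update_cache: yes
      with_items:
        - git
        - fail2ban
        - ntp

    - name: Create the deploy user
      user:
        name: deploy
        shell: /bin/bash
        groups: sudo
        append: yes

    - name: Authorize the CI server's key for the deploy user
      authorized_key:
        user: deploy
        key: "{{ lookup('file', 'files/jenkins_id_rsa.pub') }}"

Run it against a throwaway instance until it converges cleanly, then wire it into Jenkins.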

Stop treating your servers like pets. If a server acts up, you should be able to terminate it and let your code provision a new one automatically. That is the power of modern managed hosting and code-driven infrastructure.

Ready to build a pipeline that actually works? Deploy a high-performance KVM instance on CoolVDS today and stop fearing the 3:00 AM pager.
