Stop Touching Production: The Case for Git-Driven Infrastructure

If you are currently SSH-ing into your production server to edit an nginx.conf file with vim, you are part of the problem. It is 2016. We have the tools to stop behaving like cowboys.

I have seen it happen too many times. A "quick fix" on a live server in Oslo works for three weeks. Then the server reboots, the manual change is lost because it wasn't in the configuration management scripts, and the frontend crashes. The client calls you at 3 AM. The logs are empty because you bypassed the logging daemon.

The industry is shifting toward what some are calling "Operations by Pull Request" or Infrastructure as Code (IaC). The goal is simple: Git is the single source of truth. If it is not in the repo, it does not exist in production.

The Architecture of a Git-Centric Workflow

Forget the "push to deploy" button on cheap shared hosting. We are talking about a pipeline that enforces consistency. In a standard setup for a Norwegian dev team subject to strict data handling requirements (thanks, Datatilsynet), you need an audit trail. Git provides that automatically.

Here is the stack that actually works in production today, June 2016:

  • Version Control: Git (Self-hosted GitLab or Bitbucket).
  • CI/CD: Jenkins 2.0 (The new Pipeline plugin is essential).
  • Configuration Management: Ansible 2.1.
  • Runtime: Docker Engine 1.11 or pure KVM.

1. The Commit as the Trigger

Nothing changes on the server until code is merged into the master branch. This requires discipline. You need a Jenkinsfile in the root of your project. With Jenkins 2.0 released just a few months ago, we finally have decent "Pipeline as Code" support.

Here is a battle-tested Jenkinsfile structure for a Dockerized application:

node {
    stage('Checkout') {
        checkout scm
    }
    
    stage('Build') {
        // Building the Docker image
        sh "docker build -t myapp:${env.BUILD_NUMBER} ."
    }

    stage('Test') {
        // Run unit tests inside the container
        try {
            sh "docker run --rm myapp:${env.BUILD_NUMBER} ./run_tests.sh"
        } catch (err) {
            currentBuild.result = 'FAILURE'
            throw err
        }
    }

    stage('Deploy to Staging') {
        // Ansible does the heavy lifting
        sh "ansible-playbook -i staging.inventory deploy.yml --extra-vars \"build_version=${env.BUILD_NUMBER}\""
    }
}

2. The Ansible Glue

Shell scripts are brittle; Ansible is idempotent. Run a shell script twice and it might break things. Run an Ansible playbook twice and it simply converges the server back to the same correct state. This is critical in Norwegian VPS environments where stability is paramount.
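
Here is a minimal sketch of what idempotency looks like in practice (the package name and path are illustrative):

---
- hosts: webservers
  become: yes
  tasks:
    # apt installs nginx only if it is missing; a second run changes nothing
    - name: Ensure nginx is installed
      apt:
        name: nginx
        state: present

    # file creates or corrects the directory only when the state differs
    - name: Ensure the deploy directory exists
      file:
        path: /srv/myapp
        state: directory
        mode: "0755"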

Do not just copy files. Use the template module to inject environment variables safely (a short template example follows the playbook). And here is how we roll out a new version one host at a time, keeping the rest of the fleet serving traffic, with a rolling update strategy in Ansible:

---
- hosts: webservers
  serial: 1
  become: yes
  
  tasks:
    - name: Pull the new Docker image
      command: "docker pull registry.internal/myapp:{{ build_version }}"

    - name: Stop the running container
      command: "docker stop myapp_web"
      ignore_errors: yes

    - name: Remove the old container
      command: "docker rm myapp_web"
      ignore_errors: yes

    - name: Start the new container
      command: >
        docker run -d 
        --name myapp_web 
        -p 80:8080 
        --restart always 
        -v /var/log/myapp:/var/log/myapp 
        registry.internal/myapp:{{ build_version }}
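
The playbook above only pulls a pre-built image, so here is the template module mentioned earlier as a separate sketch; the file names (app.env.j2, /etc/myapp/app.env) are placeholders:

# Renders the Jinja2 template with the play's variables and copies the result
- name: Render the environment file from a template
  template:
    src: app.env.j2
    dest: /etc/myapp/app.env
    owner: root
    group: root
    mode: "0640"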

The Hardware Bottleneck: Why I/O Matters

This workflow sounds great until you try to run it on a budget VPS with spinning magnetic disks. I recently tried to deploy a Jenkins build agent on a cheap competitor's instance. The build timed out because disk I/O was saturated during the npm install phase.

Pro Tip: CI/CD pipelines are I/O intensive. You are creating, writing, and deleting thousands of small files (node_modules, docker layers) in seconds. High IOPS (Input/Output Operations Per Second) is not a luxury; it is a requirement.

This is where CoolVDS serves as a reference platform for this workflow. We use enterprise-grade NVMe storage as standard. When you are running Ansible tasks across ten servers, you cannot afford to have the control node waiting on disk latency. Our benchmarks show that NVMe-backed instances reduce Docker image build times by up to 40% compared to standard SSDs.

Security and the "Norwegian Factor"

With the invalidation of Safe Harbor last year, keeping data within the EEA (European Economic Area) is critical for compliance. If you are deploying to a cloud server, you need to know exactly where that physical machine sits.

By treating infrastructure as code, you also treat security as code. Your firewall rules (iptables or UFW) should be in that Git repo. Your SSH keys? Managed by Ansible, not manually pasted into authorized_keys.
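
For example, key distribution and the firewall rule for your SSH port can both live in the playbook; the user name and key path here are placeholders:

# Installs the CI server's public key instead of hand-editing authorized_keys
- name: Authorize the CI server key for the deploy user
  authorized_key:
    user: deploy
    key: "{{ lookup('file', 'files/ci_server.pub') }}"
    state: present

# Opens the custom SSH port configured in the hardening snippet below
- name: Allow SSH on port 2222 via UFW
  ufw:
    rule: allow
    port: "2222"
    proto: tcp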

Here is a quick snippet to harden your SSH configuration automatically via Ansible, disabling password logins and root access so that only key-holders (your CI server and a few admins) can connect:

- name: Secure SSH configuration
  lineinfile:
    dest: /etc/ssh/sshd_config
    regexp: "{{ item.regexp }}"
    line: "{{ item.line }}"
    state: present
  with_items:
    - { regexp: '^PasswordAuthentication', line: 'PasswordAuthentication no' }
    - { regexp: '^PermitRootLogin', line: 'PermitRootLogin no' }
    - { regexp: '^Port', line: 'Port 2222' }
  notify: restart ssh
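
The notify line above assumes a matching handler in the same play; a minimal one looks like this (the service is named ssh on Debian/Ubuntu, sshd on CentOS):

handlers:
  - name: restart ssh
    service:
      name: ssh
      state: restarted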

The Immutable Future

We are moving toward immutable infrastructure. In 2016, we are still transitioning. We still patch servers occasionally. But the goal is to never patch. If a server is broken, kill it and spin up a new one from the code in Git.

To do this, you need a hosting partner that offers fast provisioning. If it takes 20 minutes to provision a new VPS, immutability is painful. On CoolVDS, KVM instances spin up in under 60 seconds. That speed enables true automated recovery.

Next Steps

Stop making manual changes. Today. Take your current server configuration, write it into an Ansible playbook, and push it to a private Git repository. Then, get a build server that doesn't choke on I/O.
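
A starting skeleton can be as simple as the following; the role names are placeholders for whatever your servers actually run:

---
- hosts: all
  become: yes
  roles:
    - common   # users, SSH hardening, firewall rules
    - nginx    # web server config rendered from templates
    - app      # your application deployment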

Need a sandbox to test your new pipeline? Deploy a high-performance NVMe instance on CoolVDS today and see what 100% predictable I/O does for your build times.