Stop Deploying by Hand: The Case for Git-Driven Infrastructure in 2015

If you are still dragging and dropping files via FileZilla or—heaven forbid—editing code directly on a production server using vim, you are a ticking time bomb. I’ve been there. It works fine for a hobby site until that one Friday afternoon when a missed semicolon takes down the entire eCommerce checkout, and you have no roll-back strategy other than “undo”.

It is late 2015. We have tools that make manual intervention obsolete. The industry is shifting toward what I call “Infrastructure as Code” (IaC). The premise is simple: if it isn't in Git, it doesn't exist. Your server configuration, your application code, and your deployment logic should all live in version control.

With the recent invalidation of the Safe Harbor agreement by the ECJ just weeks ago, data sovereignty is suddenly the biggest headache for CTOs across Europe. Hosting data outside the EEA is now a legal minefield. This makes the automation of local, Norwegian infrastructure not just a technical luxury, but a compliance necessity.

The Architecture of Automation

The goal is a pipeline where a git push triggers a cascade of automated checks and deployments. We aren't just talking about code; we are talking about server state. Tools like Puppet and Chef have paved the way, but recently I’ve found Ansible to be the most pragmatic choice for teams moving fast. It’s agentless, which means less overhead on your VPS.
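"Agentless" means Ansible needs nothing on the managed host beyond SSH and Python; there is no daemon to install or babysit. All it wants from you is a plain-text inventory. A minimal one might look like this (hostnames and group names below are placeholders, not real infrastructure):

```ini
# /etc/ansible/hosts -- illustrative inventory; all names are placeholders
[webservers]
web1.example.no
web2.example.no

[buildservers]
ci.example.no ansible_ssh_user=deploy
```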

Here is the battle-tested stack I am currently deploying for high-availability clients in Oslo:

  • Source Control: GitLab (Self-hosted) or GitHub Private Repos.
  • CI Server: Jenkins (The old warhorse, but robust).
  • Configuration Management: Ansible.
  • Infrastructure: CoolVDS KVM Instances (SSD-backed).

Why “git push” beats “scp”

When you automate, you gain auditability. If a server starts misbehaving at 03:00, you can look at the commit log. Who changed the Nginx config? When? Why? With manual edits, that information is lost in the ether.
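That audit trail is one command away. Here is a sketch using a throwaway repo in /tmp, so it is safe to run anywhere; the file, author, and commit message are hypothetical:

```shell
# Illustrative: recover "who, when, why" for a config file from Git history.
cd /tmp && rm -rf audit-demo && mkdir audit-demo && cd audit-demo
git init -q
git config user.email "ops@example.com"
git config user.name "Ops Team"
echo "worker_processes 4;" > nginx.conf
git add nginx.conf
git commit -q -m "nginx: raise worker_processes for traffic spike"

# Who changed nginx.conf, when, and why:
git log --format='%h  %an  %ad  %s' --date=short -- nginx.conf
```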

Furthermore, reliability requires consistency. A script will execute the exact same commands, in the exact same order, a thousand times in a row. A human administrator will not.

Implementation: The Post-Receive Hook

You don't always need a heavy Jenkins server for smaller projects. A simple Git hook on your VPS can work wonders for immediate deployment. Here is a rudimentary setup for a bare repository hosted on a CoolVDS instance.

Inside your bare repo on the server (/var/repo/site.git/hooks/post-receive):

#!/bin/bash
TARGET="/var/www/html"
GIT_DIR="/var/repo/site.git"
BRANCH="master"

while read oldrev newrev ref
do
    if [[ $ref =~ .*/$BRANCH$ ]]; then
        echo "Ref $ref received. Deploying ${BRANCH} to production..."
        git --work-tree="$TARGET" --git-dir="$GIT_DIR" checkout -f "$BRANCH"
        
        # Restart Nginx to apply config changes if any
        # Ensure sudoers allows this without password
        sudo /usr/sbin/service nginx reload
        
        echo "Deployment complete."
    fi
done
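Wiring this up takes a couple of one-time commands on the server. A sketch, using a throwaway path so it can be tried anywhere (substitute the real /var/repo/site.git in production; the deploy user and hostname are placeholders):

```shell
# One-time setup for a bare repo plus deploy hook (illustrative paths).
REPO=/tmp/hook-demo/site.git
mkdir -p "$REPO"
git init -q --bare "$REPO"

# Install the post-receive hook (trimmed to a placeholder body here)
# and make it executable -- Git silently ignores non-executable hooks.
printf '#!/bin/bash\necho "deploying..."\n' > "$REPO/hooks/post-receive"
chmod +x "$REPO/hooks/post-receive"

# For the passwordless nginx reload in the hook, a sudoers entry
# (via visudo) along these lines would be needed:
#   deploy ALL=(root) NOPASSWD: /usr/sbin/service nginx reload

# On the developer machine, add the server as a remote and push:
#   git remote add production ssh://deploy@your-server/var/repo/site.git
#   git push production master
```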

Warning: While this works for simple sites, do not use this for complex applications requiring database migrations. For that, you need a proper build server.

The Heavy Lifting: Jenkins and Ansible

For enterprise workloads, I run Jenkins on a dedicated CoolVDS instance. Why dedicated? Because Java is memory-hungry. You don't want your CI server stealing RAM from your database.
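Even on a dedicated box, it pays to cap the JVM heap so Jenkins cannot balloon during a big build. The values below are illustrative assumptions, not a recommendation; on Debian/Ubuntu this setting typically lives in /etc/default/jenkins:

```shell
# Illustrative JVM sizing for a Jenkins host (values are assumptions).
# -Xms sets the initial heap; -Xmx is a hard ceiling the JVM won't exceed.
JAVA_ARGS="-Xms256m -Xmx1024m"
echo "$JAVA_ARGS"
```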

When a developer pushes code, Jenkins detects the change. It then runs an Ansible playbook. This playbook doesn't just copy files; it ensures the state of the server matches your definition.

- hosts: webservers
  tasks:
    - name: Ensure Nginx is at the latest version
      apt: pkg=nginx state=latest

    - name: Update web application code
      git: repo=git@github.com:company/app.git dest=/var/www/app version=HEAD

    - name: Install dependencies
      command: /var/www/app/composer.phar install chdir=/var/www/app
      notify:
        - restart php5-fpm

  handlers:
    - name: restart php5-fpm
      service: name=php5-fpm state=restarted

This is idempotent. If Nginx is already at the latest version, it does nothing. If the code hasn't changed, it does nothing. This efficiency is critical.
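The pattern behind Ansible's "changed" versus "ok" reporting is easy to mimic in plain shell: inspect the current state, act only on a mismatch. A toy sketch (the config content and path are arbitrary):

```shell
# Toy idempotence: act only when actual state differs from desired state.
DESIRED="worker_processes 4;"
CONF=/tmp/idem-demo.conf
rm -f "$CONF"

deploy_conf() {
  if [ ! -f "$CONF" ] || [ "$(cat "$CONF")" != "$DESIRED" ]; then
    echo "$DESIRED" > "$CONF"
    echo "changed"
  else
    echo "ok"
  fi
}

deploy_conf   # first run writes the file   -> prints "changed"
deploy_conf   # second run finds it correct -> prints "ok"
```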

Hardware Matters: The I/O Bottleneck

Here is the ugly truth about CI/CD pipelines: they are I/O intensive. npm installs, composer updates, and compiling assets generate thousands of tiny read/write operations. On a standard HDD-based VPS, this will crawl. I have seen builds take 20 minutes solely because the disk couldn't keep up with the small file writes.
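You can get a crude feel for small-write performance with dd; it is a blunt instrument next to a proper fio run, but it is installed everywhere. The file path is arbitrary, and the numbers will vary wildly by host:

```shell
# Crude small-write test: 10,000 x 4 KB writes, flushed to disk at the end.
# This only roughly approximates a dependency-install workload.
dd if=/dev/zero of=/tmp/io-demo.bin bs=4k count=10000 conv=fdatasync 2>&1 | tail -n 1
rm -f /tmp/io-demo.bin
```

On SSD-backed storage this finishes almost instantly; on a contended HDD VPS the same command can take dramatically longer.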

This is why we standardized on CoolVDS. They utilize high-performance Enterprise SSDs (and are rolling out newer storage tech that pushes the boundaries of SATA limitations). When your build server has high IOPS, your deployment time drops from minutes to seconds. Time is money.

Pro Tip: Always use KVM virtualization for your build servers. Container technologies like Docker (version 1.8 is looking promising!) are gaining hype, but for pure isolation and kernel control in 2015, KVM is the gold standard. It prevents “noisy neighbors” from stealing your CPU cycles during a critical compile.

Local Latency and NIX

If your target market is Norway, your build pipeline and production servers should be in Norway. Routing traffic through Frankfurt or London adds unnecessary milliseconds. CoolVDS peers directly at NIX (Norwegian Internet Exchange) in Oslo. This means when you push code, the latency between your dev machine and the server is negligible, and your users get snappy response times.

Also, with the Datatilsynet keeping a close eye on data privacy, hosting physically in Oslo simplifies your compliance posture significantly. You know exactly where the bits are stored.

Start Automating Today

Manual deployments are a relic of the past. The tools exist. Jenkins is free. Ansible is free. Git is essential. The only cost is the infrastructure to run it reliably.

Don't let a slow disk or a shared kernel slow down your innovation. Spin up a KVM instance on CoolVDS today and build a pipeline that lets you sleep at night.