Stop FTPing Your Production: Mastering Git-Based Deployment Workflows in 2013

If you are still using FileZilla to drag and drop PHP files into your /var/www/html directory in 2013, you are a liability. There, I said it. It’s harsh, but after spending the last decade cleaning up after "cowboy coding" disasters, I have zero tolerance for manual deployments. We are seeing a massive shift right now: the industry is moving away from the "pet" server model toward something more automated and reproducible. While some call it "Continuous Delivery," I prefer to think of it as simply sane systems administration.

Last month, I audited a setup for a client in Oslo. They were running a high-traffic Magento store. Their deployment strategy? A senior developer SSH-ing into the box and running svn update on the live production folder. One Friday afternoon, a conflict marker (<<<<<<< HEAD) made it into a config file. The site went white-screen for three hours. The cache hid the error from the developer, but not from the customers. They lost thousands of kroner.

The solution is not more careful typing. The solution is treating your infrastructure and your deployment logic as code, stored in Git, and executed automatically. Here is how you build a bulletproof deployment pipeline using tools available today.

The Architecture of a Git-Centric Workflow

The concept is simple: Git is the single source of truth. If it isn't in the repository, it doesn't exist on the server. We aren't just talking about application code; we are talking about server configuration via Puppet or Chef. By combining a Version Control System (VCS) with a Continuous Integration (CI) server like Jenkins, we eliminate human error.

1. The "Push-to-Deploy" Model

For smaller setups or simple apps, a Git post-receive hook is the most efficient method. It’s lightweight and requires nothing but SSH access and Git installed on your VPS. This is perfect for the low-latency requirements we see in the Nordic market, where minimizing the time between "commit" and "live" is crucial for rapid iteration.

Here is a robust post-receive hook script. Place this in your bare repository on the server inside hooks/:

#!/bin/bash
# /var/git/project.git/hooks/post-receive
# Runs as the user you push as; the chown and service calls below assume
# that user has the necessary privileges (root or appropriate sudo rights).

TARGET="/var/www/production"
GIT_DIR="/var/git/project.git"
BRANCH="master"

while read oldrev newrev ref
do
    # Only deploy when the master branch is pushed
    if [[ $ref = refs/heads/$BRANCH ]]; then
        echo "Deploying to production..."

        # Create the target directory if it doesn't exist
        mkdir -p "$TARGET"

        # Check out the files into the work tree
        git --work-tree="$TARGET" --git-dir="$GIT_DIR" checkout -f "$BRANCH"

        # Set ownership and permissions (crucial for PHP/Apache)
        chown -R www-data:www-data "$TARGET"
        chmod -R 755 "$TARGET"

        # Reload the service if necessary (be careful with this under high load)
        service php5-fpm reload

        echo "Deployment complete."
    fi
done

Note: Keep the server-side repository bare (created with git init --bare). A bare repo holds only Git objects and metadata; the checked-out files live solely in /var/www/production, so you avoid shipping a .git directory into your docroot or duplicating the working tree on your VPS.
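
Wiring this up from scratch takes only a few commands. A minimal sketch, assuming a deploy user on the server and the paths used in the hook above (the hostname is a placeholder):

# On the server: create the bare repository and install the hook
ssh deploy@vps.example.com 'git init --bare /var/git/project.git'
scp post-receive deploy@vps.example.com:/var/git/project.git/hooks/post-receive
ssh deploy@vps.example.com 'chmod +x /var/git/project.git/hooks/post-receive'

# On your workstation: add the server as a remote, then deploying is just a push
git remote add production deploy@vps.example.com:/var/git/project.git
git push production master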

2. The Heavy Lifter: Jenkins & Capistrano

Direct hooks are fine for simple sites, but for complex applications requiring database migrations or asset compilation, you need an orchestrator. Jenkins has matured significantly in its 1.5xx releases, and with the Git plugin it becomes the heart of your pipeline.

Instead of deploying directly, Jenkins watches your repo. When a change is detected, it runs unit tests (PHPUnit, RSpec). Only if tests pass does it trigger the deployment tool. In the Ruby and Rails world (and increasingly PHP via Capistrano-ish forks), Capistrano is the gold standard.
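On the Jenkins side, the job itself can stay minimal: an "Execute shell" build step that refuses to deploy unless the tests pass. A sketch for a PHP project (the test suite and paths are illustrative, and the Capistrano gem is assumed to be installed on the build server):

# Jenkins "Execute shell" build step
set -e                                # abort the build on the first failing command

phpunit --configuration phpunit.xml   # red build = no deployment
cap deploy                            # only reached when the suite is green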

Here is a standard config/deploy.rb configuration for a Capistrano setup targeting a CoolVDS instance:

set :application, "norway_store"
set :repository,  "git@github.com:company/repo.git"

set :scm, :git
set :branch, "master"
set :deploy_via, :remote_cache

# Role configuration
role :web, "192.0.2.10"                          # Your HTTP server
role :app, "192.0.2.10"                          # This may be the same server
role :db,  "192.0.2.10", :primary => true        # This is where migrations run

set :user, "deploy"
set :use_sudo, false
set :deploy_to, "/var/www/apps/#{application}"

# Symlink shared configs (database.yml) to avoid committing secrets
after "deploy:update_code", "deploy:symlink_shared"

namespace :deploy do
  task :symlink_shared do
    run "ln -nfs #{shared_path}/config/database.yml #{release_path}/config/database.yml"
  end

  task :restart, :roles => :app, :except => { :no_release => true } do
    run "sudo service apache2 reload"
  end
end

Pro Tip: Use the :remote_cache strategy. It keeps a cached copy of the repo on the server and only fetches the delta. On a standard VPS connection, this reduces deployment time from minutes to seconds.
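
Day to day, the whole workflow is three commands. Assuming the deploy.rb above and working SSH access for the deploy user:

cap deploy:setup      # first run only: creates the releases/ and shared/ directories
cap deploy            # check out the code, symlink shared configs, switch the current symlink
cap deploy:rollback   # point current back at the previous release if something breaks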

Infrastructure as Code: Puppet

Deploying the code is half the battle. Configuring the server is the other half. If your VPS dies today, how long until you are back online? If the answer is "I need to remember which packages I installed," you are doing it wrong.

We use Puppet to define the state of our servers. This ensures that every CoolVDS instance we spin up is identical. It also helps with compliance—essential when dealing with Norwegian privacy standards where data handling must be strictly defined.

A simple Puppet manifest (site.pp) ensures your web server is always running and configured correctly:

node 'web01.oslo.local' {

  package { 'nginx':
    ensure => installed,
  }

  service { 'nginx':
    ensure  => running,
    enable  => true,
    require => Package['nginx'],
  }

  file { '/etc/nginx/sites-available/default':
    ensure  => file,
    content => template('nginx/vhost.erb'),
    notify  => Service['nginx'],
    require => Package['nginx'],
  }

  # Ensure we have a swap file for stability on smaller instances
  exec { 'create_swap':
    command => '/bin/dd if=/dev/zero of=/swapfile bs=1M count=1024 && /sbin/mkswap /swapfile && /sbin/swapon /swapfile',
    unless  => '/sbin/swapon -s | grep /swapfile',
  }
}
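
You don't need a full Puppet master to start benefiting from this. On a single node you can apply the manifest directly in standalone mode; a quick sketch, assuming the manifest lives at /etc/puppet/manifests/site.pp and the nginx/vhost.erb template is on your module path:

# Dry run: report what Puppet would change without touching the system
puppet apply --noop /etc/puppet/manifests/site.pp

# Enforce the desired state for real
puppet apply /etc/puppet/manifests/site.pp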

The Hardware Reality: Why IOPS Matter

You can have the best deployment scripts in the world, but if the underlying storage system is choking on I/O, your deployment will hang. When you run git checkout or npm install (if you are venturing into Node.js), you are generating thousands of small read/write operations.

This is where the "noisy neighbor" effect on cheap shared hosting kills you. If another user on the node is compiling a kernel, your deployment script times out. At CoolVDS, we use KVM virtualization to ensure strict resource isolation. Unlike OpenVZ containers, where the host can oversell memory across tenants, a KVM instance gets dedicated RAM, so nobody can steal it when you need it most, like during a compile step in your deployment.
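
Before blaming your scripts, measure the disk. Two quick checks with standard tools (Debian/Ubuntu packages sysstat and ioping; the target path is illustrative):

# Watch per-device utilisation and wait times while a deployment runs
iostat -x 5

# Measure small-request latency on the deployment target
ioping -c 10 /var/www/production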

Comparison: Deployment Methods

Method           | Rollback Capability       | Consistency                 | Speed
FTP / SCP        | Non-existent (manual)     | Low (high human error)      | Slow (file by file)
SVN Update       | Low (revert commits)      | Medium (hidden .svn files)  | Medium
Git + Capistrano | Instant (symlink switch)  | High (atomic deploys)       | Fast (delta transfer)
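
The "instant rollback" in that last row is not magic; it falls out of the release layout Capistrano maintains. Every deploy is checked out into its own timestamped directory under releases/, and a single current symlink (which your vhost uses as the document root) is flipped to point at it. Rolling back is therefore one atomic operation rather than a file-by-file copy; conceptually (timestamps are illustrative):

# current normally points at the newest release:
#   /var/www/apps/norway_store/current -> releases/20130611141500
# rolling back is a single symlink switch to the previous one:
ln -nfs /var/www/apps/norway_store/releases/20130610093000 /var/www/apps/norway_store/current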

Security and Local Compliance

Operating in Norway means respecting data sovereignty. When you push code that handles user data, you must ensure it lands on secure servers. Using a centralized Git server (like a self-hosted GitLab or a private repository) adds a layer of security. Furthermore, by using SSH keys for all deployment communications, we eliminate password brute-force vectors.

Generate a deploy key specifically for your build server. Do not reuse your personal key:

ssh-keygen -t rsa -b 4096 -C "jenkins@buildserver" -f ~/.ssh/deploy_key_rsa

Add the public key to your VPS authorized_keys and restrict it using command options if you are paranoid (which you should be).
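
A forced command in authorized_keys is cheap insurance: even if the deploy key leaks, it can only trigger the one script you allow, never an interactive shell. A sketch of such an entry (key material truncated, script path hypothetical):

# /home/deploy/.ssh/authorized_keys on the VPS
command="/usr/local/bin/run-deploy.sh",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa AAAAB3Nza... jenkins@buildserver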

Final Thoughts

Automation isn't just about saving time; it's about sleeping at night. When you know your production environment can be rebuilt in minutes using Puppet, and your code can be deployed atomically with Git, you stop fearing the pager.

However, automation requires responsive infrastructure. A script waiting on high I/O wait times is a broken script. If you are serious about implementing these workflows, you need a foundation that respects raw performance. Don't let slow I/O kill your pipeline. Spin up a KVM instance on CoolVDS today and experience the stability of true resource isolation.