
Stop FTPing Your Production Code: The Git-Driven Deployment Workflow

It is July 2013. We have self-driving cars on the horizon and supercomputers in our pockets, yet I still see senior developers dragging index.php from a local folder to a production server using FileZilla. This has to stop.

If you are managing infrastructure in Norway—whether it is a high-traffic media site in Oslo or a Magento store in Trondheim—manual deployments are a liability. I have seen servers melt not because of traffic spikes, but because a tired sysadmin uploaded a configuration file with a missing semicolon at 2:00 AM.

The solution is not to "be more careful." The solution is to remove the human element entirely. We call this Infrastructure as Code and Git-Driven Deployment. In this guide, we are going to build a pipeline where a simple git push triggers a deployment, updates your configuration, and restarts services, all within seconds. No FTP. No SSH editing. No downtime.

The Architecture of Trust

Before we touch the terminal, let's talk about the underlying metal. Automation requires a predictable environment. Many budget hosts in Europe are still pushing OpenVZ containers. The problem? Kernel resource sharing. If your neighbor decides to compile a massive kernel module, your deployment script hangs. This is unacceptable for professional workflows.

At CoolVDS, we strictly use KVM (Kernel-based Virtual Machine) virtualization. This gives you a dedicated kernel and reserved RAM. When you run a Puppet agent or a heavy Git checkout, the IOPS are yours. Combined with our Pure SSD storage arrays, this ensures your build pipeline doesn't choke on disk I/O.

The "Git-Push" Workflow

Here is the concept: Your CoolVDS server acts as the central repository. When you push to it, a post-receive hook fires. This hook checks out the code into the live web directory and runs any necessary build steps (like cache clearing or restarting Nginx).

Step 1: The Remote Setup

SSH into your CoolVDS instance. We need to create a "bare" repository. This acts as the hub.

# On your CoolVDS Server
sudo useradd -m deployer
sudo passwd deployer
# (Set a strong password)

# Switch to the user
su - deployer

# Create the bare repo
mkdir -p ~/repos/project.git
cd ~/repos/project.git
git init --bare

Now, we create the magic hook. This script executes every time you push code.

# Inside ~/repos/project.git/hooks/post-receive

#!/bin/bash
TARGET="/var/www/html"
GIT_DIR="/home/deployer/repos/project.git"

echo "------------------------------------"
echo " Received Push. Deploying to Production..."

# Check the pushed code out into the web directory
git --work-tree="$TARGET" --git-dir="$GIT_DIR" checkout -f

# Fix permissions (crucial for PHP/Apache/Nginx)
chmod -R 755 "$TARGET"

# Reload Nginx to apply any config changes.
# Grant deployer passwordless sudo for exactly this command, e.g. in sudoers:
#   deployer ALL=(root) NOPASSWD: /etc/init.d/nginx reload
# Otherwise the hook hangs waiting for a password it can never receive.
sudo /etc/init.d/nginx reload

echo " Deployment Complete at $(date)"
echo "------------------------------------"

Make sure the script is executable:

chmod +x hooks/post-receive
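Before wiring up the real server, you can rehearse the entire flow on one machine. The sketch below stands up a throwaway bare repo with a stripped-down post-receive hook, then commits and pushes to it. All paths are temporary directories, so nothing here touches /var/www:

```shell
#!/bin/sh
# Rehearsal of the git-push deployment flow; every path is a throwaway temp dir.
set -e
HUB="$(mktemp -d)/project.git"   # stands in for ~/repos/project.git on the server
TARGET="$(mktemp -d)"            # stands in for /var/www/html
WORK="$(mktemp -d)"              # stands in for your local checkout

git init --bare -q "$HUB"

# A minimal post-receive hook: check the pushed branch out into TARGET
cat > "$HUB/hooks/post-receive" <<EOF
#!/bin/sh
git --work-tree="$TARGET" --git-dir="$HUB" checkout -f master
EOF
chmod +x "$HUB/hooks/post-receive"

# Play the developer: commit a file and push it to the "server"
cd "$WORK"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"
echo "<?php echo 'hello'; ?>" > index.php
git add index.php
git commit -qm "initial import"
git remote add production "$HUB"
git push -q production HEAD:master

ls "$TARGET"    # index.php has been deployed to the stand-in web root
```

On the real server the only differences are the paths and the transport: the remote URL becomes something like deployer@your-server:repos/project.git over SSH (substitute your own hostname).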

Step 2: Automating Configuration (Puppet)

Moving files is easy. Managing state is hard. What if your deployment requires a specific PHP module? Do not install it manually. Use Puppet. This ensures that if disaster strikes and you need to rebuild your CoolVDS node, you can do it in minutes, not hours.

Here is a basic manifest.pp that ensures Nginx is running and configured correctly. You can include this in your git repo and trigger puppet apply in your post-receive hook.

# /etc/puppet/manifests/site.pp

node 'default' {
  package { 'nginx':
    ensure => installed,
  }

  service { 'nginx':
    ensure     => running,
    enable     => true,
    hasrestart => true,
    require    => Package['nginx'],
  }

  file { '/etc/nginx/nginx.conf':
    ensure  => present,
    source  => '/var/www/html/config/nginx.conf',
    notify  => Service['nginx'],
    require => Package['nginx'],
  }
  
  # Optimizing for high traffic on CoolVDS SSDs
  file { '/etc/nginx/conf.d/buffers.conf':
    content => "client_body_buffer_size 10K;\nclient_header_buffer_size 1k;\nclient_max_body_size 8m;\nlarge_client_header_buffers 2 1k;\n",
    notify  => Service['nginx'],
  }
}

This approach transforms your infrastructure into documentation. Your nginx.conf lives in Git, not just in /etc/.
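Triggering Puppet from the hook only takes a couple of extra lines at the end of post-receive. This is a sketch: the manifest path config/manifest.pp inside the repo is a hypothetical convention, and it presumes deployer may run puppet apply via passwordless sudo.

```shell
#!/bin/sh
# Sketch: lines to append to the post-receive hook from Step 1.
# The manifest path config/manifest.pp is an assumed repo layout, and the
# deployer user is assumed to have passwordless sudo for "puppet apply".
TARGET="/var/www/html"
MANIFEST="$TARGET/config/manifest.pp"

if [ -f "$MANIFEST" ] && command -v puppet >/dev/null 2>&1; then
    sudo puppet apply "$MANIFEST"
else
    echo "puppet apply skipped (manifest or puppet binary missing)"
fi
```

The guard clauses mean a push still succeeds on a box where Puppet is not yet installed, which keeps the hook usable while you are bootstrapping.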

Step 3: Database & Performance Tuning

Automating code is half the battle. Your database config must match your hardware. Since CoolVDS provides high-performance SSDs, we can tune MySQL to be aggressive.

In your my.cnf (which should also be managed by Puppet/Chef), avoid the defaults. The defaults assume you are running on a spinning hard drive from 2005.

[mysqld]
# InnoDB settings for SSD
innodb_flush_neighbors = 0
innodb_io_capacity = 2000

# Memory allocation (Assuming a 4GB RAM CoolVDS instance)
innodb_buffer_pool_size = 2G
# Note: on MySQL 5.5 and earlier, resize the log files only after a clean
# shutdown, and remove the old ib_logfile* before restarting
innodb_log_file_size = 256M

# Query Cache (Be careful with high write environments)
query_cache_size = 64M
query_cache_type = 1

Pro Tip: If you are serving customers in Norway, latency is the silent killer. Ensure your server is physically located in a datacenter with direct peering to NIX (Norwegian Internet Exchange). CoolVDS infrastructure is optimized for Nordic routing, keeping ping times to Oslo under 10ms.
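After the restart, confirm the values actually took effect instead of assuming mysqld read the right file. A quick check, assuming the mysql client is installed and can authenticate non-interactively (e.g. via ~/.my.cnf):

```shell
#!/bin/sh
# Confirm MySQL picked up the SSD tuning after a restart.
# Assumes the mysql client has credentials available non-interactively.
if command -v mysql >/dev/null 2>&1; then
    RESULT=$(mysql -N -e "SELECT @@innodb_io_capacity;" 2>/dev/null \
        || echo "could not query MySQL")
else
    RESULT="mysql client not installed; skipping check"
fi
echo "$RESULT"
```

If the number printed is not the one in your my.cnf, you are probably editing a file mysqld never reads; `mysqld --help --verbose | head` lists the paths it checks, in order.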

The "Works on My Machine" Trap

I recall a project last year for a logistics company in Bergen. The developer worked on a Mac with MAMP, while the production server ran CentOS 6. They uploaded files manually, and the application crashed immediately because of filesystem case-sensitivity: HFS+ is case-insensitive by default, ext4 is not.

By using a Git-driven workflow, you can push to a "Staging" remote first. This remote should be an exact mirror of production (easy to spin up with CoolVDS clones). If the hook runs successfully there, you push to production. This creates a safety valve that manual FTP lacks.
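The promotion step is just two remotes chained with &&: production only receives the push if the staging push (and its hook) exited cleanly. Below is a local simulation, with temp directories standing in for the two servers:

```shell
#!/bin/sh
# Local stand-ins for the staging and production remotes (temp dirs, no SSH).
set -e
STAGING="$(mktemp -d)/staging.git"
PROD="$(mktemp -d)/prod.git"
git init --bare -q "$STAGING"
git init --bare -q "$PROD"

cd "$(mktemp -d)"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"
echo "v1" > release.txt
git add release.txt
git commit -qm "release candidate"
git remote add staging "$STAGING"
git remote add production "$PROD"

# The safety valve: production is only pushed if staging accepted the push.
git push -q staging HEAD:master && git push -q production HEAD:master
echo "promoted to production"
```

On real infrastructure, staging and production are simply SSH remote URLs pointing at two separate instances.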

Legal & Compliance (Datatilsynet)

While we don't have the heavy GDPR hammer yet, the Norwegian Personal Data Act (Personopplysningsloven) is strict. By automating your deployments and keeping configurations in Git, you create an audit trail. You know exactly who changed the firewall rules and when. If Datatilsynet ever knocks on your door asking about data integrity, "I dragged and dropped it" is not a valid legal defense.

Conclusion

The era of the "Cowboy Sysadmin" is ending. Professional hosting demands professional workflows. By leveraging Git hooks, Puppet manifests, and high-performance KVM architecture, you stop fixing servers and start building value.

You need a platform that keeps up with your automation. Don't let slow I/O bottleneck your deployment scripts. Spin up a CoolVDS SSD instance today, set up your remote, and experience the joy of git push production master.