Stop SSH-ing into Production: The Case for Git-Driven Infrastructure

It is 2013. We have self-driving cars on the horizon and computers in our pockets, yet I still see Senior SysAdmins logging into production servers via SSH, typing vi /etc/nginx/nginx.conf, and restarting the service. They cross their fingers and hope the load balancer doesn't drop connections.

This is madness. It is unprofessional. And if you are running a serious operation targeting the Norwegian market, where uptime is expected to be as reliable as the hydropower grid, it is negligent.

We need to stop treating servers like pets we nurse back to health. We need to treat them like cattle. If a node fails, we shoot it and spin up a new one. This methodology depends entirely on one concept: Infrastructure as Code (IaC) driven by Git. While marketing teams haven't settled on a buzzword for this yet, the technical reality is simple: your repository is the source of truth, not the server's filesystem.

The "Snowflake Server" Problem

When you manually configure a VPS, you create a "snowflake." It is unique. If that server's hard drive fails today, can you recreate it exactly as it was in 15 minutes? Likely not. You'll forget that one sysctl.conf tweak you made three months ago at 2 AM.

In a high-latency environment, or when serving heavy content across Europe, consistency is performance. A Git-centric workflow ensures that every change is versioned, peer-reviewed, and reversible.

The Architecture: Git, Jenkins, and Puppet

Here is the workflow that separates the amateurs from the pros. We don't use FTP. We don't edit files on the server.

  1. Local Dev: You edit code or Puppet manifests locally (using Vagrant for virtualization).
  2. Push: You push to a central Git repository (GitLab or a private bare repo).
  3. CI/CD: A hook triggers Jenkins to run unit tests.
  4. Deploy: If tests pass, the configuration management tool (Puppet or Chef) pulls the changes to the production nodes.
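
Step 3 is simpler to wire up than it sounds. A minimal sketch of the hook-to-Jenkins link, assuming a Jenkins job with remote triggering enabled (the hostname, job name, and token below are placeholders for your own setup):

```shell
# Write a hypothetical post-receive hook that pings Jenkins on every push.
# jenkins.internal, site-tests, and DEPLOY_TOKEN are all placeholders.
cat > /tmp/post-receive <<'EOF'
#!/bin/sh
# Ask Jenkins to queue the test job for this repository
curl -s "http://jenkins.internal:8080/job/site-tests/build?token=DEPLOY_TOKEN" >/dev/null
EOF
chmod +x /tmp/post-receive

# Sanity-check the hook's syntax without executing it
sh -n /tmp/post-receive && echo "hook syntax OK"
```

On a real server this file lives at hooks/post-receive inside the bare repository, and Jenkins takes over from there.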

The "Poor Man's" Deploy: Git Hooks

If you aren't ready for a full Puppet master setup, you can start small using Git hooks. This works exceptionally well for simple web apps. By configuring a post-receive hook on your CoolVDS instance, you can automate the deployment instantly.

Navigate to your bare repository on the server and create the hook:

vi hooks/post-receive

Add the following logic. Note the explicit path definitions—cron jobs and hooks often run with empty environments:

#!/bin/bash
TARGET="/var/www/html/production"
GIT_DIR="/var/repo/site.git"
BRANCH="master"

while read oldrev newrev ref
do
    # Only deploy when the master branch is pushed
    if [[ $ref =~ .*/${BRANCH}$ ]]; then
        echo "Ref $ref received. Deploying ${BRANCH}..."
        git --work-tree="$TARGET" --git-dir="$GIT_DIR" checkout -f "$BRANCH"

        # Fix permissions (crucial for PHP/Apache users)
        chown -R www-data:www-data "$TARGET"

        # Reload Nginx to pick up config changes, if any
        # Ideally, test the config first: nginx -t
        /usr/sbin/service nginx reload
    else
        echo "Ref $ref received. Doing nothing: only the ${BRANCH} branch may be deployed on this server."
    fi
done

Make it executable:

chmod +x hooks/post-receive

Now, deployment is just git push production master. No SSH login required.
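
For git push production master to work, the workstation needs a remote named production pointing at the bare repository. The whole flow can be rehearsed on a single machine before touching a server; a sketch using throwaway paths under /tmp (on a real setup the remote URL would be an SSH address like ssh://user@yourserver/var/repo/site.git):

```shell
set -e
rm -rf /tmp/site.git /tmp/demo-repo /tmp/deploy-target
mkdir -p /tmp/deploy-target

# The "server" side: a bare repo plus a stripped-down post-receive hook
git init --bare /tmp/site.git
cat > /tmp/site.git/hooks/post-receive <<'EOF'
#!/bin/sh
git --work-tree=/tmp/deploy-target --git-dir=/tmp/site.git checkout -f master
EOF
chmod +x /tmp/site.git/hooks/post-receive

# The "workstation" side: commit a file and push to the production remote
git init /tmp/demo-repo
cd /tmp/demo-repo
git symbolic-ref HEAD refs/heads/master   # make sure the branch is named master
echo "hello" > index.html
git add index.html
git -c user.name=dev -c user.email=dev@example.com commit -m "first deploy"
git remote add production /tmp/site.git
git push production master

cat /tmp/deploy-target/index.html   # prints "hello"
```

The push lands in the bare repo, the hook fires, and the file appears in the deploy target without anyone logging in.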

Automating Infrastructure with Puppet

For the operating system configuration itself, scripts are brittle. Use Puppet. Below is a standard manifest for ensuring Nginx is running with optimized worker processes. We explicitly define the worker_processes to match the core count of our CoolVDS instances (typically 4 or 8 cores for high-traffic nodes).

class nginx_server {

  package { 'nginx':
    ensure => installed,
  }

  service { 'nginx':
    ensure  => running,
    enable  => true,
    require => Package['nginx'],
  }

  file { '/etc/nginx/nginx.conf':
    ensure  => present,
    owner   => 'root',
    group   => 'root',
    mode    => '0644',
    # Deploy the template based on CPU count fact
    content => template('nginx/nginx.conf.erb'),
    notify  => Service['nginx'],
  }
}

Inside the template, we dynamically assign worker processes so the count matches the available cores, avoiding unnecessary CPU context switching:

worker_processes <%= @processorcount %>;
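
If a box is not under Puppet yet, the same value can be derived on the node itself. A quick sketch using coreutils' nproc, which reports the number the @processorcount fact would:

```shell
# Derive the nginx worker count from the visible CPU count.
# This mirrors what the ERB template computes from Facter facts.
WORKERS=$(nproc)
echo "worker_processes ${WORKERS};"
```

Compare its output against the worker_processes line in the deployed nginx.conf to spot drift on hand-configured machines.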

Hardware Reality: Why Virtualization Choice Matters

Automation tools like Puppet and Chef are I/O intensive. When a Puppet agent runs, it hashes thousands of files to check for changes. On a budget VPS provider using oversold HDDs, this operation can take minutes, and "steal time" (CPU cycles the hypervisor hands to other guests while yours waits) skyrockets.
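
You can watch for this directly: the "cpu" summary line in /proc/stat carries a cumulative steal counter as its ninth field (on any reasonably modern Linux kernel). A quick sketch:

```shell
# Print cumulative steal time (in jiffies) since boot.
# A value that climbs quickly between runs means the hypervisor
# is handing your CPU share to other guests.
awk '$1 == "cpu" {print "steal jiffies since boot:", $9}' /proc/stat
```

Run it twice, a minute apart, and subtract; a large delta during your Puppet runs is the smoking gun.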

This is where the underlying technology of your hosting provider becomes critical. Many budget providers use OpenVZ. In OpenVZ, you share the kernel with every other customer on the node. If a "noisy neighbor" decides to compile a kernel or run a heavy cron job, your I/O latency spikes.

Pro Tip: Always check /proc/user_beancounters on OpenVZ systems. If you see 'failcnt' rising, your provider is capping your resources. Move to KVM immediately.
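
The beancounters file is column-formatted (resource, held, maxheld, barrier, limit, failcnt), so the failcnt check scripts easily. A sketch run against a fabricated excerpt, since the file only exists inside OpenVZ containers; on a real guest, point the awk at /proc/user_beancounters itself:

```shell
# Fabricated beancounters excerpt for illustration; real OpenVZ guests
# expose the live table at /proc/user_beancounters.
cat > /tmp/beancounters.sample <<'EOF'
Version: 2.5
       uid  resource         held   maxheld   barrier     limit  failcnt
      101:  kmemsize      1400000   2000000   2752512   2936012        0
            numproc            42        60       240       240        0
            privvmpages     80000    120000    131072    139264       17
EOF

# Flag any resource whose failcnt (last column) is non-zero
awk 'NF >= 6 && $NF + 0 > 0 {print "LIMIT HIT:", $(NF-5), "failcnt=" $NF}' /tmp/beancounters.sample
```

Any "LIMIT HIT" line means the host has already refused your container resources at least once.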

At CoolVDS, we prioritize KVM (Kernel-based Virtual Machine) virtualization. KVM provides full hardware isolation. Your RAM is your RAM. Your kernel is your kernel. Furthermore, by utilizing SSD storage (which is rapidly becoming the standard for serious hosting in 2013), the I/O bottleneck of running automated deployments disappears.

Performance Comparison: Puppet Run Time

Metric                      | Standard HDD VPS (OpenVZ) | CoolVDS SSD (KVM)
Puppet Catalog Run          | 45 - 90 seconds           | 8 - 12 seconds
Git Checkout (Large Repo)   | 15 seconds                | 2 seconds
Service Restart Latency     | Variable (High Jitter)    | Consistent

The Norwegian Context: Data Sovereignty

We are operating under the scrutiny of Datatilsynet. While the cloud is global, liability is local. Storing customer data on US-based servers (like AWS US-East) puts you in a legal grey area regarding Safe Harbor agreements.

By hosting on CoolVDS instances physically located in Oslo or nearby European hubs, you reduce latency to the NIX (Norwegian Internet Exchange) to sub-5ms levels and simplify your compliance posture. Automated workflows allow you to pin data location in your configuration, ensuring no developer accidentally spins up a storage bucket in a non-compliant zone.

Conclusion

The era of the "cowboy admin" is over. In 2013, if you cannot recreate your infrastructure with a single command, you do not own your infrastructure; it owns you.

Transitioning to a Git-driven workflow requires an investment in time, but it pays dividends in sleep. However, software automation is useless without the hardware to support it. Don't let I/O wait times throttle your deployment pipeline.

Ready to build a pipeline that doesn't break? Spin up a KVM-based, SSD-powered instance on CoolVDS today and see how fast a git push can really be.