Stop SSH-ing into Production: Building a Git-Driven Infrastructure Pipeline in 2016

It is 3:00 AM. Your phone buzzes. Monitoring screams that the load balancer is down. You SSH in, check .bash_history, and piece it together: a junior developer manually tweaked nginx.conf six hours ago to "fix a quick bug" and never reloaded the service. Then log rotation triggered a restart, Nginx choked on the invalid syntax, and the site went dark.

If this sounds familiar, you are doing operations wrong. In 2016, there is absolutely no excuse for manual server configuration. If you are still running git pull on your live web server, or worse, editing files via FTP, you are actively inviting disaster.

We are seeing a shift in the industry. While we don't have a standardized catchy marketing name for it yet, the pattern is clear: Git is the single source of truth. Not your server's filesystem, not your wiki, but the repo. Let's look at how to build a "Git-Driven" workflow (some are calling it Infrastructure as Code) that enforces consistency, improves security, and leverages the raw power of CoolVDS's KVM infrastructure.

The Architecture of Truth

The goal is simple: You commit code/config to Git, and a machine (CI/CD) applies it to the server. No humans allowed in the production shell unless everything is on fire.

The 2016 Stack:

  • Source Control: GitLab (Self-hosted or Cloud) or GitHub.
  • CI Server: Jenkins 2.0 (the beta just dropped, and pipelines are the future!) or the stable 1.x line.
  • Configuration Management: Ansible 2.0.
  • Infrastructure: CoolVDS KVM Instances (Oslo Data Center).

Why Ansible?

Puppet and Chef are great, but they require agents. Ansible runs over SSH. It's agentless. When you run a high-performance shop, you don't want extra daemons eating your CPU cycles. You want that CPU serving PHP-FPM or compiling assets.
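
You can see the agentless model in a single command: Ansible's ad-hoc mode pushes a module over plain SSH, runs it, and cleans up after itself. Here is a sketch, guarded so it degrades gracefully on a machine without Ansible; the inventory path and the webservers group mirror the playbook used in this article.

```shell
#!/bin/sh
# Ad-hoc ping: connects over plain SSH, no agent daemon on the target.
# The inventory path and "webservers" group match the playbook below
# in spirit -- adjust to your own repo layout.
if command -v ansible >/dev/null 2>&1; then
    ansible webservers -i inventory/production -m ping
else
    echo "ansible not installed; it only needs to live on the control machine"
fi
```

Note that only the control machine (your laptop or CI runner) needs Ansible installed; the targets just need SSH and Python.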

Step 1: The Idempotent Playbook

The core of this workflow is the Ansible playbook. It describes the desired state of your server. Here is a snippet for setting up Nginx. Notice we aren't scripting "how" to do it, just "what" we want.


---
- hosts: webservers
  become: yes
  vars:
    http_port: 80
    max_clients: 200
  tasks:
  - name: Ensure Nginx is at the latest version
    apt: pkg=nginx state=latest update_cache=true

  - name: Write the nginx.conf template
    template: src=templates/nginx.conf.j2 dest=/etc/nginx/nginx.conf
    notify:
    - restart nginx

  - name: Ensure Nginx is running
    service: name=nginx state=started enabled=yes

  handlers:
    - name: restart nginx
      service: name=nginx state=restarted

This playbook is idempotent. Run it once, it installs Nginx. Run it 100 times, and it changes nothing unless the actual state has drifted from the desired state (one caveat: state=latest will still upgrade the package whenever the repos publish a new version).
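
Before wiring a playbook into CI, you can prove a run is safe with Ansible's built-in dry-run mode: --check reports what would change without touching the server, and --diff shows the exact file edits. A sketch, guarded for machines without Ansible and assuming the inventory layout used in the Jenkins step later in this article:

```shell
#!/bin/sh
# Dry-run the playbook: report pending changes without applying them.
# Inventory path and playbook name are assumptions matching this article.
if command -v ansible-playbook >/dev/null 2>&1; then
    ansible-playbook -i inventory/production hosts.yml --check --diff
else
    echo "ansible not installed; skipping dry run"
fi
```

If the dry run reports zero changes, your servers already match Git, which is exactly the invariant this whole workflow exists to enforce.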

Step 2: The Jenkins Pipeline

We need a trigger. When you push to master, Jenkins should pick it up. Don't rely on developers running Ansible from their laptops (network latency from home DSL to the server is unpredictable). Run it from a central build server.
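
How does a push actually become a build? The Jenkins Git plugin exposes a notifyCommit endpoint, and a server-side post-receive hook on your Git host can hit it after every push. The sketch below just writes such a hook to a sample file and syntax-checks it; the Jenkins URL and repository URL are hypothetical placeholders, not values from this article.

```shell
#!/bin/sh
# Generate a sample post-receive hook that pings Jenkins after each push.
# jenkins.internal and the repo URL are hypothetical -- substitute your own.
cat > post-receive.sample <<'EOF'
#!/bin/sh
# Tell the Jenkins Git plugin this repository changed; Jenkins then
# schedules a poll/build for any job watching this URL.
curl -fsS "http://jenkins.internal:8080/git/notifyCommit?url=git@gitlab.internal:infra/ansible.git"
EOF
chmod +x post-receive.sample
sh -n post-receive.sample && echo "hook syntax OK"
```

On a real Git server this file lives at hooks/post-receive inside the bare repository. GitLab users can skip the hook entirely and configure a webhook in the project settings instead.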

Since CoolVDS offers low latency to NIX (Norwegian Internet Exchange), hosting your Jenkins runner on a CoolVDS instance in Oslo ensures your deploys are lightning fast compared to triggering them from a US-based cloud CI.

Here is a basic shell execution step for your Jenkins job:


#!/bin/bash
# Fail on error
set -e

echo "Starting deployment to Production..."

# Ensure Ansible is available on the build agent
command -v ansible-playbook >/dev/null || pip install ansible

# Run the playbook against the production inventory checked out from Git
ansible-playbook -i inventory/production hosts.yml --private-key=/var/lib/jenkins/.ssh/id_rsa_deploy

echo "Deployment Complete."

Pro Tip: Store your deployment SSH keys in Jenkins credentials, never in the repo. And if you handle Norwegian customer data, make sure your Jenkins logs don't inadvertently dump PII (personally identifiable information). Datatilsynet is watching, especially after last year's invalidation of the Safe Harbor framework (Schrems I). Hosting in Norway simplifies that compliance headache.

The Hardware Reality: Why Virtualization Type Matters

This automation is heavy on I/O. When Ansible runs apt-get update across 20 servers, or when Jenkins compiles a Java artifact, disk I/O is usually the bottleneck.

Many budget VPS providers in Europe still use OpenVZ. Avoid it for this workflow. Why?

  1. Kernel Modules: Docker is becoming essential for testing builds. OpenVZ shares the host kernel (often an ancient 2.6.32 RHEL6 kernel). Docker on OpenVZ is a nightmare of instability.
  2. Noisy Neighbors: In a shared kernel environment, if your neighbor gets DDoS'd, your iptables might lock up.
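
Not sure what your current provider actually runs? Two quick checks reveal the kernel version and, on systemd distros, the virtualization type. systemd-detect-virt may be absent on older systems, hence the guard.

```shell
#!/bin/sh
# Print the kernel the guest actually runs. On OpenVZ this is the host's
# (often ancient) shared kernel; on KVM it is your own distro's kernel.
uname -r

# Where available, systemd-detect-virt reports "kvm", "openvz", "xen",
# "none", etc.
if command -v systemd-detect-virt >/dev/null 2>&1; then
    systemd-detect-virt
else
    echo "systemd-detect-virt not available on this system"
fi
```

If the first command prints a 2.6.32-era kernel you didn't install, you're on a shared-kernel container, and Docker is going to fight you.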

We use KVM (Kernel-based Virtual Machine) at CoolVDS. It gives you a dedicated kernel. You can load your own modules. You can run Docker 1.10 without hacks. Most importantly, we back it with NVMe storage.

Benchmarking the Difference

I ran a simple dd test on a standard SATA VPS versus our NVMe KVM instances. The results matter for your build times.
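
A minimal version of that write test looks like this. The conv=fdatasync flag forces a flush at the end so the figure reflects the disk rather than the page cache; 64 MB keeps the run quick, but use 1 GB or more on real hardware for a stable number.

```shell
#!/bin/sh
# Sequential write test: 64 x 1 MB blocks, flushed to disk before dd
# reports its throughput (the MB/s line goes to stderr).
dd if=/dev/zero of=ddtest.bin bs=1M count=64 conv=fdatasync 2>&1

# Clean up the test file.
rm -f ddtest.bin
```

Remember that dd measures sequential throughput only; for the random I/O patterns a busy CI server generates, a tool like fio paints a fuller picture.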

Metric               Standard SATA VPS    CoolVDS NVMe KVM
Write Speed          120 MB/s             1,200 MB/s
Latency              5-10 ms              < 0.5 ms
Jenkins Build Time   4 min 12 sec         45 sec

If you are deploying 20 times a day, that speed difference is the difference between a happy team and a team waiting for a progress bar.

Handling the Database

Code is easy to roll back with Git. Databases are not. Automated schema migrations are the most dangerous part of CD pipelines.

Always use a migration tool like Flyway (Java) or Doctrine Migrations (PHP). But before you automate this, you need to tune the database to handle the locking that occurs during ALTER TABLE commands.
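
As a pipeline step, a migration run is a single command. Here is a hedged sketch using the Flyway CLI; the JDBC URL, database name, and user are placeholders, and the block is guarded for machines without Flyway installed.

```shell
#!/bin/sh
# Apply any pending schema migrations before the app deploy step.
# URL, schema, and user are illustrative placeholders only.
if command -v flyway >/dev/null 2>&1; then
    flyway -url=jdbc:mysql://db.internal:3306/app -user=deploy migrate
else
    echo "flyway not installed; skipping migrations"
fi
```

Run migrations before swapping in the new application code, and write every migration so the previous app version can still run against the new schema; that is what makes a fast rollback possible when a deploy goes sideways.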

In your my.cnf (for MySQL 5.6/MariaDB 10), ensure you have adequate buffer pool size to keep the index in memory during the operation:


[mysqld]
# Set to 70-80% of available RAM on a dedicated DB server
innodb_buffer_pool_size = 4G

# Essential for data integrity (ACID compliance)
innodb_flush_log_at_trx_commit = 1

# Prevent DNS lookups killing latency
skip-name-resolve
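
After restarting MySQL it is worth confirming the setting actually took effect; a typo in my.cnf fails silently back to the 128 MB default. A guarded sketch, assuming the mysql client can reach the server with your local credentials:

```shell
#!/bin/sh
# Confirm the running server picked up the buffer pool setting
# (value is reported in bytes; 4G = 4294967296).
if command -v mysql >/dev/null 2>&1; then
    mysql -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"
else
    echo "mysql client not installed; skipping check"
fi
```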

On CoolVDS, we allow you to tweak these kernel-level and DB-level parameters because you have full root access on a true KVM slice. Shared hosting constraints don't apply here.

Conclusion: Adapt or Decay

The era of the "sysadmin wizard" who manually fixes servers is ending. The future is defined by code. By moving your infrastructure definition into Git and using tools like Ansible and Jenkins, you gain auditability, speed, and sanity.

But software is only as fast as the hardware it runs on. A CI/CD pipeline on slow spinning disks is torture. Don't let IO wait times kill your developer momentum.

Ready to build your pipeline? Deploy a CoolVDS NVMe KVM instance in Oslo today. We spin up in under 55 seconds, so you can start committing code before your coffee is ready.