Stop SSH-ing into Production: Mastering Git-Centric Infrastructure in 2016
It is 2 AM. Your pager is screaming because the database migration script failed halfway through, and the developer who wrote it is on a plane. You log into the server, and what do you find? Someone manually edited the my.cnf file three weeks ago and didn't commit the change to the repository. The new deployment overwrote the manual "hotfix," and now your I/O wait is hitting 80%.
If this sounds familiar, your workflow is broken. In the high-stakes world of systems architecture, manual intervention is the enemy of stability. While the buzzword crowds in Silicon Valley are talking about "serverless," those of us managing real heavy-lifting infrastructure in Europe know the truth: Immutable Infrastructure managed via Git is the only way forward.
In this guide, we are going to tear down the manual deployment mindset and replace it with a rigorous, automated pipeline. We will use tools available right now—Docker 1.12, Jenkins 2.0, and Ansible—to build a workflow where git push is the only command you need to deploy.
The Philosophy: Infrastructure as Code (IaC)
The concept is simple but brutal to implement: If it isn't in Git, it doesn't exist.
Whether you are running a Magento cluster for a Norwegian retailer or a high-frequency trading bot, your server state must be declarative. We are seeing a massive shift towards treating infrastructure definitions (Terraform, Ansible playbooks) exactly like application code. This practice, often called "Operations by Pull Request," ensures that every change is audited, tested, and reversible.
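To make "Operations by Pull Request" concrete, here is a minimal sketch of the glue that turns a push into a deploy: a git post-receive hook that maps the pushed branch to an Ansible inventory. All names (branches, inventory paths, `deploy.yml`) are illustrative, and the actual `ansible-playbook` call is stubbed out with an echo so you can see what would run.

```shell
#!/bin/sh
# Hypothetical post-receive hook (sketch): git invokes this after a push,
# feeding "oldrev newrev refname" lines on stdin. We map the branch to an
# Ansible inventory and trigger the matching deploy.
branch_to_inventory() {
    case "$1" in
        master)  echo "inventory/production" ;;
        develop) echo "inventory/staging" ;;
        *)       echo "" ;;   # unknown branches do not deploy
    esac
}

deploy_ref() {
    branch="${1#refs/heads/}"
    inventory="$(branch_to_inventory "$branch")"
    if [ -n "$inventory" ]; then
        # In a real hook, drop the echo and run the playbook directly.
        echo "ansible-playbook -i $inventory deploy.yml"
    fi
}

# Real hook body: while read old new ref; do deploy_ref "$ref"; done
deploy_ref "refs/heads/master"
```

Because the mapping lives in the repository alongside everything else, changing *where* a branch deploys to is itself a reviewed commit.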
Pro Tip: Using containers? Don't rely on "latest" tags in production. In October 2016, the Docker ecosystem is moving fast. Pin your image versions (e.g., node:6.7.0-alpine) to ensure your builds are reproducible. A "latest" tag pulling a new underlying OS update can silently break your glibc dependencies.
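In Dockerfile terms, the difference looks like this (a sketch; the digest is a placeholder you would resolve yourself with `docker pull` and `docker inspect`):

```dockerfile
# Bad: mutable, changes underneath you
# FROM node:latest

# Better: pinned tag
FROM node:6.7.0-alpine

# Best: immutable content digest (supported since Docker 1.10)
# FROM node@sha256:<digest>
```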
The Toolchain: Jenkins Pipelines & Docker
Forget the old freestyle Jenkins jobs. With the release of Jenkins 2.0 earlier this year, Pipelines as Code (Jenkinsfile) has become the standard. This allows us to store our build logic right alongside our source code.
Here is a battle-tested Jenkinsfile structure we use for deploying microservices to CoolVDS instances:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // Double quotes are required: Groovy does not interpolate
                // ${env.BUILD_NUMBER} inside single-quoted strings.
                sh "docker build -t registry.coolvds.com/myapp:${env.BUILD_NUMBER} ."
            }
        }
        stage('Test') {
            steps {
                sh "docker run --rm registry.coolvds.com/myapp:${env.BUILD_NUMBER} npm test"
            }
        }
        stage('Deploy to Staging') {
            steps {
                sh "ansible-playbook -i inventory/staging deploy.yml --extra-vars 'version=${env.BUILD_NUMBER}'"
            }
        }
    }
}
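Promotion to production should be deliberate, not automatic. A common pattern is an extra stage gated on a manual approval; this is a sketch that would slot into a pipeline like the one above (stage name and inventory path are illustrative):

```groovy
stage('Deploy to Production') {
    steps {
        // Pipeline pauses here until a human clicks "Proceed" in Jenkins.
        input message: 'Promote this build to production?'
        sh "ansible-playbook -i inventory/production deploy.yml --extra-vars 'version=${env.BUILD_NUMBER}'"
    }
}
```

You still get one-button deploys, but the button lives in Jenkins with a full audit trail, not in someone's SSH session.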
Why KVM Matters for CI/CD
Running these pipelines requires serious I/O performance. When you are building Docker images, you are essentially hammering the disk with thousands of small writes. This is where cheap VPS providers fail. They oversell their storage, and your build times creep up from 2 minutes to 20 minutes due to "noisy neighbors."
At CoolVDS, we strictly use KVM (Kernel-based Virtual Machine) with pure NVMe storage. KVM provides full hardware virtualization, meaning your resources are guaranteed. Unlike OpenVZ containers, which share a kernel and often suffer from CPU steal, a KVM instance on CoolVDS acts like a dedicated server. For a CI/CD runner, that consistency is the difference between a smooth deploy and a timeout error.
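You don't have to take a provider's word for it. A crude but telling probe is a synchronous small-write test with dd: with `oflag=dsync` every 4 KB block is flushed to disk, so the reported throughput is dominated by storage latency rather than caching. This is a sketch, not a benchmark; use fio if you want rigorous numbers.

```shell
# Each write is synced individually, simulating the many small writes of
# a Docker image build. NVMe finishes this almost instantly; oversold
# storage will visibly crawl.
dd if=/dev/zero of=io_probe.tmp bs=4k count=256 oflag=dsync 2>&1 | tail -n 1
rm -f io_probe.tmp
```

Run it at different times of day: on an oversold host, the "noisy neighbor" effect shows up as wildly varying results.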
Configuration Management with Ansible 2.1
While Docker handles the application runtime, you still need to manage the host OS, security patches, and firewalls. Ansible 2.1 (released May 2016) is the tool of choice here because it is agentless. You don't need to install a daemon on your production servers; you just need SSH.
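Being agentless also means the inventory file is the entire "install" step. A minimal `inventory/staging` might look like this (hostnames and user are illustrative):

```ini
[webservers]
staging-web1.example.com
staging-web2.example.com

[webservers:vars]
ansible_user=deploy
```

Commit it next to your playbooks and every host Ansible touches is, by definition, in Git.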
Here is a snippet of how we harden a CentOS 7 server automatically before it ever accepts traffic:
---
- hosts: webservers
  become: yes
  tasks:
    - name: Install Nginx
      yum:
        name: nginx
        state: present

    - name: Configure sysctl for high concurrency
      sysctl:
        name: "{{ item.key }}"
        value: "{{ item.value }}"
        state: present
        reload: yes
      with_dict:
        net.ipv4.ip_local_port_range: "1024 65535"
        net.ipv4.tcp_tw_reuse: 1
        net.core.somaxconn: 4096

    # The firewalld module handles one of service/port/rich_rule per task,
    # so locking SSH down to the VPN takes two steps.
    - name: Remove the open SSH service from the public zone
      firewalld:
        service: ssh
        zone: public
        permanent: yes
        state: disabled

    - name: Allow SSH only from the VPN subnet
      firewalld:
        rich_rule: 'rule family="ipv4" source address="10.8.0.0/24" service name="ssh" accept'
        zone: public
        permanent: yes
        state: enabled
Data Sovereignty and The "Schrems" Fallout
We cannot talk about infrastructure in 2016 without addressing the legal elephant in the room. The invalidation of Safe Harbor last year and the new Privacy Shield framework have made data location critical. If you are hosting customer data for Norwegian clients, relying on US-based cloud giants is becoming a compliance minefield.
Datatilsynet (The Norwegian Data Protection Authority) is watching closely. By hosting on CoolVDS, your data resides physically in Oslo or our European datacenters, governed by Norwegian law. This isn't just about speed—though our <2ms latency to NIX (Norwegian Internet Exchange) is nice—it's about legal risk mitigation.
Performance Tuning: The Nginx Frontier
Automating the deployment is only step one. The configuration you deploy must be performant. A common mistake we see is leaving the default nginx.conf untouched. If you are running on a multi-core CoolVDS instance, you are wasting cycles.
Optimize your worker processes and file descriptor limits:
worker_processes auto;
worker_rlimit_nofile 65535;

events {
    worker_connections 8192;
    multi_accept on;
    use epoll;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;

    # Adjust buffers for large headers
    client_header_buffer_size 1k;
    large_client_header_buffers 4 4k;
}
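A quick sanity check on those numbers (the core count here is illustrative; `nproc` reports the real one). With `worker_processes auto;`, nginx spawns one worker per core, and a proxied connection can hold up to two file descriptors, so the per-worker descriptor budget must stay below `worker_rlimit_nofile`:

```shell
cores=4                     # illustrative; use $(nproc) on the real host
worker_connections=8192

echo "max concurrent connections: $((cores * worker_connections))"
echo "fds per worker (proxying): $((worker_connections * 2))"
```

With these settings, 16384 descriptors per worker sits comfortably under the 65535 rlimit, which is exactly the headroom you want.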
The Verdict: Automate or Die
The days of the "Cowboy Sysadmin" are over. In late 2016, complexity is too high to manage manually. By adopting a Git-centric workflow, you gain auditability, speed, and sanity.
But remember: Software cannot fix bad hardware. You can have the most beautiful Ansible playbooks in the world, but if your underlying storage is spinning rust or oversold SSDs, your application will lag. High-performance automation demands high-performance infrastructure.
Don't let I/O wait kill your CI pipeline. Spin up a CoolVDS KVM instance with NVMe storage today and experience what sub-millisecond latency does for your deployment speeds.