The Git-Centric Workflow: Automated Ops & Infrastructure as Code in 2015
It is 3:00 AM. Your pager goes off. The database is locked, the frontend is throwing 502 Bad Gateway errors, and the last deployment was done manually by a junior developer who is currently asleep. If this sounds familiar, you are doing Operations wrong. In 2015, we have the tools to stop this madness.
As a Systems Architect operating in the Nordic high-availability space, I have seen too many companies in Oslo and Bergen treating their servers like pets. They name them, they nurse them, and they manually edit /etc/nginx/nginx.conf via SSH. This ends today. The future is Infrastructure as Code (IaC) driven by Git.
In this guide, I will break down the "Git-Centric" workflow—a methodology where your Git repository is the single source of truth, and automated agents (like Jenkins) synchronize that state to your infrastructure. We will cover the specific toolchain: Jenkins Workflow (Pipeline), Ansible 1.9, and the new Docker 1.9 overlay networks, all running on CoolVDS high-performance KVM instances.
The Philosophy: Operations by Pull Request
The core principle is simple: If it is not in Git, it does not exist.
You should never SSH into a server to make a configuration change. Instead, you edit an Ansible playbook or a Dockerfile in a feature branch, create a Pull Request, peer-review it, and merge it. Upon merge, a webhook triggers your CI/CD server to apply the changes. This provides an audit trail—crucial for compliance with Norwegian regulations like the Personal Data Act (Personopplysningsloven) enforced by Datatilsynet.
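What does that look like day to day? Here is a minimal sketch of the loop, with a branch name and repository layout that are purely illustrative:

# 1. Change the configuration in code, never on the live server
git checkout -b feature/raise-worker-connections
vim roles/nginx/templates/nginx.conf.j2
git commit -am "Raise worker_connections for the API tier"
git push origin feature/raise-worker-connections

# 2. Open a Pull Request and get it reviewed and merged.
# 3. The post-merge webhook fires, Jenkins runs the pipeline described
#    in Step 1 below, and Ansible converges the servers to the new state.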
The Stack for Late 2015
- Version Control: Git (hosted on GitLab or GitHub).
- CI/CD Orchestrator: Jenkins with the new "Workflow" plugin (released this year).
- Configuration Management: Ansible 1.9 (Agentless, simple).
- Runtime: Docker 1.9 (or bare metal for databases).
- Infrastructure: CoolVDS NVMe KVM Instances (for raw I/O performance).
Step 1: The Jenkins "Workflow" (Pipeline)
Gone are the days of chaining five different "Freestyle" Jenkins jobs together. With the Jenkins Workflow plugin, we can define our deployment logic in code (Groovy). This file lives in your repository, usually named Jenkinsfile (a convention gaining traction).
Here is a battle-tested example of a Workflow script that checks out the code, builds and pushes a Docker image, and deploys it to staging. Fair warning: docker build is I/O-hungry, so the speed of this pipeline depends heavily on your disk.
node {
    stage 'Checkout'
    git url: 'git@github.com:yourcompany/backend-api.git'

    stage 'Build Docker Image'
    // NVMe storage on CoolVDS makes this step 4x faster than standard SATA VPS
    // Double quotes so Groovy interpolates env.BUILD_NUMBER into the tag
    sh "docker build -t registry.internal/backend:b${env.BUILD_NUMBER} ."

    stage 'Push to Registry'
    sh "docker push registry.internal/backend:b${env.BUILD_NUMBER}"

    stage 'Deploy to Staging'
    sh "ansible-playbook -i inventory/staging site.yml --extra-vars 'version=b${env.BUILD_NUMBER}'"
}
Pro Tip: Jenkins creates a heavy I/O load during artifact packaging. If you are running Jenkins on a budget VPS with "noisy neighbors," your build times will fluctuate wildly. We use CoolVDS instances because the dedicated NVMe allocation ensures our builds take 45 seconds, every single time.
Step 2: Idempotency with Ansible 1.9
Once the artifact (Docker image or tarball) is ready, we need to deploy it. Shell scripts are brittle. Ansible is idempotent. This means you can run the same playbook 100 times, and it will only make changes if the system is out of sync.
Below is a robust Ansible task structure for a standard Nginx deployment. Note the use of notify to restart services only when config changes.
---
- hosts: webservers
  vars:
    http_port: 80
    max_clients: 200
  remote_user: root

  tasks:
    - name: Ensure Nginx is at the latest version
      apt: pkg=nginx state=latest update_cache=yes

    - name: Write Nginx configuration
      template:
        src: templates/nginx.conf.j2
        dest: /etc/nginx/nginx.conf
        owner: root
        group: root
        mode: 0644
      notify:
        - restart nginx

    - name: Ensure Nginx service is running
      service: name=nginx state=started

  handlers:
    - name: restart nginx
      service: name=nginx state=restarted
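To see the idempotency for yourself, run the playbook twice in a row. The paths below assume the playbook above is saved as site.yml with an inventory file at inventory/staging; adjust to your layout.

ansible-playbook -i inventory/staging site.yml   # first run: tasks report "changed" where a host was out of sync
ansible-playbook -i inventory/staging site.yml   # second run: nothing to fix, the recap should show changed=0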
Warning on Latency: When managing nodes across Europe, network latency adds up fast, because every Ansible task involves SSH round trips that compound over a long playbook. Hosting your Control Node (Jenkins/Ansible) in the same datacenter as your targets (e.g., the CoolVDS Oslo zone) significantly reduces playbook execution time, and SSH pipelining shaves off even more, as shown below.
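Ansible 1.9 supports SSH pipelining and persistent connections, which cut the number of round trips per task. A minimal ansible.cfg snippet, written here via a shell heredoc; note that pipelining requires requiretty to be disabled in sudoers on the managed hosts.

cat >> ansible.cfg <<'EOF'
[ssh_connection]
# Reuse SSH connections instead of re-negotiating for every task
ssh_args = -o ControlMaster=auto -o ControlPersist=60s
# Send modules over the existing connection instead of copying temp files
pipelining = True
EOF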
Step 3: Immutable Infrastructure with Docker 1.9
Docker 1.9 was released just last month (November 2015), and it brought a game-changer: Overlay Networking. Before this, linking containers across different hosts was a nightmare of port mapping. Now, we can create a multi-host network.
However, running Docker in production requires a kernel that supports cgroups and namespaces properly. This is where virtualization matters.
| Virtualization Type | Docker Compatibility | Performance & Isolation |
|---|---|---|
| OpenVZ / Virtuozzo | Poor (kernel shared with host) | High jitter, shared-kernel security risks |
| CoolVDS (KVM) | Native (own kernel) | Near-metal, dedicated resources |
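Not sure what your current provider actually sold you? A quick sanity check from inside the VM, using standard tools (output will vary by distro):

systemd-detect-virt                        # "kvm" on a KVM instance, "openvz" or "lxc" on container-based hosts
uname -r                                   # Docker 1.9 wants a 3.10+ kernel that you control
docker info | grep -iE 'kernel|storage'    # the kernel version and storage driver Docker actually sees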
Here is how you launch a container on the new overlay network, ensuring it can talk to your database container on a different host without exposing ports to the public internet:
# Create the network (do this once)
docker network create -d overlay my-app-net
# Run the web application
docker run -d \
--name web-01 \
--net=my-app-net \
--restart=always \
-e DB_HOST=database-01 \
mycompany/webapp:latest
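One caveat before you rely on this in production: in Docker 1.9, the overlay driver needs an external key-value store (Consul, etcd, or ZooKeeper) and two extra daemon flags on every host, otherwise the network create step will not work. A minimal sketch, assuming a hypothetical Consul agent at consul.internal:8500 and eth0 as the cluster-facing interface:

# On each Docker host, point the daemon at the shared key-value store
docker daemon \
  --cluster-store=consul://consul.internal:8500 \
  --cluster-advertise=eth0:2376

# The overlay network created on one host then shows up on all of them
docker network ls

In practice you would put these flags into DOCKER_OPTS or the systemd unit rather than launching the daemon by hand.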
The Database Bottleneck: Why I/O Matters
In this automated workflow, you are constantly pulling images, extracting layers, and restarting services. The bottleneck in 2015 is almost always disk I/O. Standard SATA SSDs often stall under the combined random-write pressure of a busy CI/CD pipeline and a production database.
At CoolVDS, we utilize enterprise-grade NVMe storage. In our benchmarks, a MySQL restore that takes 14 minutes on a standard DigitalOcean droplet takes roughly 3 minutes on a CoolVDS instance. When your "Git-Centric" workflow triggers a full environment rebuild, those minutes save you hours of developer time per week.
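Don't take our word (or anyone else's) for it: measure your own instance before you blame the pipeline. A rough fio random-write test is sketched below; the 4k block size and 1G file are just starting points, not a definitive benchmark.

# 60 seconds of direct 4k random writes, bypassing the page cache
fio --name=randwrite --rw=randwrite --bs=4k --size=1G \
    --direct=1 --ioengine=libaio --iodepth=16 \
    --runtime=60 --time_based --group_reporting

Tens of thousands of IOPS points to healthy NVMe; a few thousand with wild latency swings usually means you are sharing a SATA array with noisy neighbors.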
Security & Compliance in Norway
Finally, a word on data sovereignty. With the recent invalidation of Safe Harbor by the ECJ (the Schrems judgment), storing customer data outside the EEA is riskier than ever. Deploying your automated infrastructure on CoolVDS servers physically located in Norway keeps customer data inside the EEA and makes compliance with the Personopplysningsloven considerably more straightforward.
Conclusion: Automate or Die
The transition from manual sysadmin work to Git-driven automation is painful but necessary. It turns your infrastructure into documentation. It allows you to sleep through the night.
But remember: Automation amplifies the speed of your mistakes just as much as your successes. You need a platform that can handle the throughput. Don't let your sophisticated Jenkins pipeline be choked by cheap shared storage.
Ready to build the future? Deploy a KVM instance on CoolVDS today and experience the difference raw NVMe power makes for your CI/CD pipeline.