Stop Deploying by Hand: Mastering Git-Centric Infrastructure
If you are still dragging files via FileZilla or running git pull manually on your production servers in 2014, you are doing it wrong. I've seen too many systems crash because a weary sysadmin missed a config file during a 3 AM hotfix. In the high-stakes world of hosting, whether you're pushing to a cluster in Oslo or a single node in Frankfurt, consistency is not a luxury. It is survival.
We are witnessing a shift. The industry is moving toward what I call Git-Centric Operations (some are starting to whisper, albeit loosely, about "operations by pull request"). The philosophy is simple: Git is the single source of truth. If it's not in the repo, it doesn't exist. If it's in the repo, it should be in production automatically.
The Architecture of Automation
A robust workflow in late 2014 isn't just about scripting; it's about orchestration. We aren't just moving code; we are provisioning state. Here is the battle-tested stack we see powering successful projects across Norway:
- Version Control: Git (Bitbucket or local GitLab instance).
- CI Server: Jenkins (The heavy lifter).
- Configuration Management: Ansible 1.7 (Rising fast against Puppet/Chef due to its agentless nature).
- Infrastructure: KVM-based Virtual Dedicated Servers (CoolVDS).
1. The "Post-Receive" Hook: The First Line of Defense
For smaller projects, you don't always need a Jenkins monster. A bare Git repository with a post-receive hook on your CoolVDS instance can bridge the gap between development and production. This ensures that what you push is exactly what runs.
Navigate to your bare repo on the server and edit hooks/post-receive:
#!/bin/bash
TARGET="/var/www/html/production"
GIT_DIR="/var/repo/site.git"
BRANCH="master"

while read oldrev newrev ref
do
    if [[ $ref =~ .*/$BRANCH$ ]]; then
        echo "Ref $ref received. Deploying ${BRANCH} to production..."
        git --work-tree="$TARGET" --git-dir="$GIT_DIR" checkout -f
        # Reload Nginx to pick up config changes
        service nginx reload
        echo "Deployment complete."
    else
        echo "Ref $ref received. Doing nothing: only the ${BRANCH} branch may be deployed on this server."
    fi
done
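For reference, the one-time server-side setup is only a few commands. This sketch uses a temp directory and a placeholder hook body so it runs anywhere; on a real server you would use a fixed path like /var/repo/site.git and paste in the hook above:

```shell
# One-time setup: create the bare repo and install the post-receive hook.
# Paths are illustrative -- we use mktemp here purely so the sketch is
# self-contained.
set -e
REPO="$(mktemp -d)/site.git"
git init --bare -q "$REPO"

# Placeholder hook body; in practice, paste the deployment script above.
printf '#!/bin/bash\necho "deploy"\n' > "$REPO/hooks/post-receive"
chmod +x "$REPO/hooks/post-receive"   # Git silently ignores non-executable hooks
```

Developers then wire it up once with `git remote add production ssh://user@server/var/repo/site.git` and deploy with `git push production master`.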
Warning: This is brute force. It overwrites. For high-availability systems, you need atomic deployments, typically achieved by symlinking versioned directories. But for a quick internal tool? This beats FTP every time.
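The symlink pattern mentioned above can be sketched in a few lines. This is a minimal illustration using temp directories, with a `printf` standing in for the real `git checkout -f` step; in production, RELEASES and CURRENT would live under your web root and Nginx would point at the `current` symlink:

```shell
# Atomic deployment sketch: each release gets its own directory, and a
# symlink swap publishes it. All paths here are illustrative.
set -e
BASE="$(mktemp -d)"
RELEASES="$BASE/releases"
CURRENT="$BASE/current"

deploy() {                       # $1 = release id, $2 = content
    local new="$RELEASES/$1"
    mkdir -p "$new"
    printf '%s\n' "$2" > "$new/index.html"   # stand-in for 'git checkout -f'
    # Symlink to a temp name, then rename. The rename(2) is atomic, so
    # Nginx never serves a half-written tree.
    ln -sfn "$new" "$CURRENT.tmp" && mv -T "$CURRENT.tmp" "$CURRENT"
}

deploy v1 "release one"
deploy v2 "release two"
cat "$CURRENT/index.html"        # always a complete, current release
```

Rolling back is then just another symlink swap to the previous release directory.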
Moving to Infrastructure as Code (IaC)
Scripts are fragile. State management is robust. In 2014, Ansible is rapidly becoming the tool of choice for those who hate managing agents. You define the desired state of your server in YAML, and Ansible ensures your CoolVDS instances match that state. This is critical for compliance with the Norwegian Data Protection Authority (Datatilsynet): you can prove exactly how your server is configured because the code is the documentation.
Here is a real-world playbook.yml snippet for setting up a secured Nginx server optimized for SSD storage:
---
- hosts: webservers
  vars:
    http_port: 80
    max_clients: 200
  remote_user: root
  tasks:
    - name: Ensure Nginx is at the latest version
      apt: pkg=nginx state=latest update_cache=true
    - name: Write the nginx.conf template
      template: src=./templates/nginx.conf.j2 dest=/etc/nginx/nginx.conf
      notify:
        - restart nginx
    - name: Ensure Nginx is running
      service: name=nginx state=started enabled=yes
  handlers:
    - name: restart nginx
      service: name=nginx state=restarted
When you run ansible-playbook -i hosts playbook.yml, it doesn't matter if you have one CoolVDS node or fifty. They all converge to the same configuration.
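The `hosts` inventory referenced in that command is a plain INI-style file. A minimal sketch might look like this (hostnames and IPs are placeholders, and the variable names match Ansible 1.x conventions):

```ini
# hosts -- Ansible inventory (all hostnames/IPs are illustrative)
[webservers]
web1.example.no ansible_ssh_host=10.0.0.11
web2.example.no ansible_ssh_host=10.0.0.12

[webservers:vars]
ansible_ssh_user=root
```

Adding a fiftieth node is one more line in this file; the playbook itself never changes.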
The Hardware Bottleneck: Why I/O Matters
Here is the uncomfortable truth most providers hide: Automation kills cheap storage.
When you trigger a build process, run `composer install`, or compile assets using Grunt or Gulp on the server, you generate thousands of small random I/O operations. On a standard HDD (even in RAID 10), your "wait CPU" (iowait) will spike, and your application latency will skyrocket. Your customers in Oslo don't care that you are deploying; they care that the site is slow.
Pro Tip: Monitor your disk I/O during a deployment. Run `iostat -x 1`. If your `%util` hits 100% while your queue length grows, your storage is the bottleneck.
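If `iostat` (from the sysstat package) isn't installed on a box, you can get a rough iowait number straight from /proc/stat. This is only a sketch, Linux-only, and far less detailed than `iostat -x`:

```shell
# Quick-and-dirty iowait check: sample /proc/stat twice, one second apart,
# and print iowait as a percentage of all CPU ticks in the window.
# Fields on the "cpu" line: user nice system idle iowait ...
read -r cpu u1 n1 s1 i1 w1 rest < /proc/stat
sleep 1
read -r cpu u2 n2 s2 i2 w2 rest < /proc/stat
total=$(( (u2 + n2 + s2 + i2 + w2) - (u1 + n1 + s1 + i1 + w1) ))
echo "iowait: $(( 100 * (w2 - w1) / total ))%"
```

Run it while a build is hammering the disk; on HDD-backed storage, the number during a `composer install` can be dramatic.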
This is where the architecture of CoolVDS becomes the logical choice for DevOps professionals. By utilizing pure SSD arrays and KVM virtualization, we eliminate the "noisy neighbor" effect common in OpenVZ containers. KVM provides dedicated kernel resources. When Jenkins hammers the disk during a build, an SSD-backed KVM instance barely sweats. You get the raw throughput required for rapid CI/CD pipelines.
Optimizing Nginx for the Modern Web (SPDY & SSL)
Google has been pushing HTTPS as a ranking signal, and with the emerging SPDY protocol (the precursor to the upcoming HTTP/2), encryption is no longer just for banks. If you are automating your infrastructure, automate your security.
In your nginx.conf, ensure you are tuning for the lower latency usually found in Norwegian fiber networks (like those connecting to NIX). Don't leave the defaults:
worker_processes auto;

events {
    worker_connections 1024;
    use epoll;
    multi_accept on;
}

http {
    # Optimize for SSD I/O
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;

    # Buffers for high load
    client_body_buffer_size 10K;
    client_header_buffer_size 1k;
    client_max_body_size 8m;
    large_client_header_buffers 2 1k;

    # Timeouts (aggressive, for high traffic)
    client_body_timeout 12;
    client_header_timeout 12;
    keepalive_timeout 15;
    send_timeout 10;

    # Compression
    gzip on;
    gzip_comp_level 2;
    gzip_min_length 1000;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types text/plain application/x-javascript text/xml text/css application/xml;
}
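To actually serve SPDY, you also need an HTTPS server block. Something along these lines works on nginx 1.6+ built with the SPDY module (the hostname, certificate paths, and web root are placeholders):

```nginx
# Hypothetical HTTPS + SPDY vhost for nginx >= 1.6 with --with-http_spdy_module
server {
    listen 443 ssl spdy;
    server_name example.no;

    ssl_certificate     /etc/nginx/ssl/example.no.crt;
    ssl_certificate_key /etc/nginx/ssl/example.no.key;
    ssl_protocols       TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_session_cache   shared:SSL:10m;   # reuse sessions, cut handshake cost

    root /var/www/html/production;
}
```

Drop this into your Ansible-managed nginx.conf.j2 template and the SPDY rollout becomes one `ansible-playbook` run away.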
The Local Reality: Latency & Sovereignty
For those of us operating in the Nordics, physical location is a technical feature. Hosting your infrastructure in the US adds 100ms+ of latency. For a dynamic application involving database queries and API calls, that latency compounds. Hosting on CoolVDS nodes in Europe ensures your RTT (Round Trip Time) to Norwegian ISPs remains in the single digits.
Furthermore, while the Safe Harbor agreement currently allows data transfer to the US, the legal landscape is shifting. Keeping your data (and your client's data) on European soil is the only way to ensure strict adherence to local privacy acts like Personopplysningsloven. Automated deployments via Ansible allow you to easily replicate your environment to compliant zones without manual reconfiguration.
Conclusion
The days of the "Cowboy Sysadmin" are ending. To stay competitive in 2014, you must treat your infrastructure as code. You need version control, automated provisioning, and hardware that can keep up with the demands of continuous integration.
Don't let slow I/O or manual errors kill your uptime. Spin up a KVM instance on CoolVDS today, clone your repo, and build a pipeline that lets you sleep at night.