Stop Deploying by Hand: Mastering Git-Centric Workflows
It is 2015, yet I still see senior sysadmins FTPing files to production. Or worse, SSHing into a live server, opening vim, and hot-patching a PHP file while praying the traffic doesn't spike. This is madness. It is not professional, it is not scalable, and quite frankly, it is dangerous.
We are entering an era where the server itself is disposable, but the configuration is sacred. If your server died right now—hardware failure, kernel panic, or an accidental rm -rf /var—could you rebuild it exactly as it was in ten minutes? If the answer is no, you don't have an infrastructure; you have a pet project.
The solution is Git-Centric Operations (some are calling this Infrastructure as Code). By treating your infrastructure definitions just like your application code, you gain version history, rollback capabilities, and peer review for your servers.
The War Story: The "Quick Fix" That Took Down a Retailer
Three months ago, a client came to me after a catastrophic outage. They run a Magento shop targeting the Nordic market. Their lead developer noticed a slow query during a flash sale and decided to tweak the MySQL configuration manually.
He logged in, changed innodb_buffer_pool_size in /etc/my.cnf, and restarted the service. MySQL never came back up. Why? A typo in the value told MySQL to allocate a buffer pool larger than the machine's physical RAM, and the server promptly ran out of memory. Because the change wasn't in version control, nobody knew what had changed or why the database refused to start.
We spent four hours diagnosing a typo. Four hours of downtime during a sale.
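The fix is not typing more carefully; it is making that kind of change impossible to apply blind. As a sketch of what the same my.cnf change looks like under version control (variable names here are illustrative; ansible_memtotal_mb is a standard Ansible fact), a reviewed task can simply refuse a buffer pool that exceeds physical memory:

```yaml
# Hypothetical task: the same my.cnf change, peer-reviewed and
# bounds-checked before it ever touches production.
- name: deploy my.cnf from a reviewed template
  template: src=templates/my.cnf.j2 dest=/etc/my.cnf
  # Skip the change outright if the requested pool exceeds ~70% of RAM.
  when: innodb_buffer_pool_mb|int < ansible_memtotal_mb * 0.7
  notify:
    - restart mysql
```

Had a guard like that existed, the bad value would have been skipped on the host (or caught in review), instead of taking the database down for four hours.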
The Architecture: Git as the Source of Truth
In a proper workflow, no human should ever run commands on the production server shell for deployment. Here is the stack we are implementing for high-availability clients on CoolVDS:
- Source Control: Git (hosted on GitLab or a private bare repo).
- Configuration Management: Ansible (v1.9 is solid).
- CI Server: Jenkins or Bamboo.
- Target Infrastructure: KVM-based VPS (CoolVDS).
Why Ansible?
While Puppet and Chef are powerful, they require agents. Ansible works over SSH. Since we prioritize security and minimalism on our CoolVDS instances, not having an extra agent daemon running in the background is a massive win for performance.
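Getting started really is minimal. A static inventory file is all you need (the hostname and user below are placeholders); nothing runs on the target beyond sshd and Python:

```ini
# hosts.ini -- minimal static inventory
[webservers]
web1.example.com ansible_ssh_user=root
```

Then `ansible -i hosts.ini webservers -m ping` should come back with `pong` from every host, proving plain-SSH connectivity with no agent installed anywhere.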
Implementation: The "Push-to-Deploy" Model
Let's look at a practical example. Instead of editing Nginx configs manually, we define them in a playbook. Here is a snippet of an Ansible role compatible with version 1.9:
---
# site.yml
- hosts: webservers
  vars:
    http_port: 80
    max_clients: 200
  remote_user: root
  tasks:
    - name: ensure nginx is at the latest version
      apt: pkg=nginx state=latest update_cache=true
    - name: write the nginx.conf
      template: src=templates/nginx.conf.j2 dest=/etc/nginx/nginx.conf
      notify:
        - restart nginx
    - name: ensure nginx is running
      service: name=nginx state=started
  handlers:
    - name: restart nginx
      service: name=nginx state=restarted
When you commit a change to the nginx.conf.j2 template and push to the master branch, your CI server triggers this playbook. It connects to your VPS, updates the file, and restarts the service. If it fails, you revert the commit and push again. You have an audit trail.
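One refinement worth adding before you trust automated restarts: the template module supports a validate= parameter in the 1.x series, which runs a check command against the rendered file before it replaces the live one. A sketch of the nginx task with that guard:

```yaml
# A config that fails "nginx -t" never reaches /etc/nginx/nginx.conf,
# so the restart handler only ever fires on a syntactically valid file.
- name: write the nginx.conf, rejecting invalid syntax
  template: src=templates/nginx.conf.j2 dest=/etc/nginx/nginx.conf validate='nginx -t -c %s'
  notify:
    - restart nginx
```

One caveat: nginx validates the temporary copy, so configs that pull in includes via relative paths may need the check adjusted.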
Infrastructure Requirements: The I/O Bottleneck
Moving to automated deployments changes the load profile of your servers. When you trigger a deployment, you are often compiling assets, moving large tarballs, or rewriting database schemas. This is I/O intensive.
On legacy hosting providers using spinning HDDs or shared storage (SAN), a deployment can cause "iowait" to spike, making your live website sluggish while the deploy happens. This is unacceptable.
This is why we architect CoolVDS on local NVMe storage. In 2015, NVMe is still a premium feature for many, but the difference in random read/write operations is night and day compared to standard SSDs. With NVMe, you can untar a 2GB release artifact without iowait going through the roof.
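Don't take any provider's word for it, ours included. A crude smoke test gives you a floor for sequential write throughput on your own VPS (this is not a benchmark; use fio for real numbers, and conv=fdatasync assumes GNU dd, standard on Linux):

```shell
# Write 256 MB and force it to disk before dd reports throughput;
# without fdatasync you would only be measuring the page cache.
RESULT=$(dd if=/dev/zero of=./ddtest bs=1M count=256 conv=fdatasync 2>&1 | tail -n 1)
rm -f ./ddtest
echo "$RESULT"
```

If that line reports tens of megabytes per second, your deploys will be fighting your live traffic for disk time.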
Pro Tip: If you are serving customers in Norway, latency is the silent killer. Hosting in Frankfurt or London adds 20-40ms to every round trip. For a Git-heavy workflow involving many small file transfers, that latency adds up. Keeping your VPS in a Norwegian datacenter (like our Oslo facility) keeps your SSH sessions snappy and your site compliant.
The Compliance Angle: Data Sovereignty
We are seeing tighter regulation of where data lives. The Norwegian Personal Data Act (Personopplysningsloven) is enforced by Datatilsynet, the Norwegian Data Protection Authority, and enforcement around how personal data is handled is getting stricter. While the EU Safe Harbor framework exists, recent legal challenges suggest it might not be around forever.
By automating your infrastructure on Norwegian soil using a provider like CoolVDS, you ensure that your customer database—and the backups created by your automation scripts—never leave legal jurisdiction. You cannot script compliance, but you can script the infrastructure that ensures it.
Start Small, but Start Now
You do not need to build a complex Jenkins cluster today. Start with a simple Git post-receive hook on your VPS that checks out the latest code to your web directory. Anything is better than FTP.
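To make that concrete, here is a self-contained sketch of the hook mechanism. It runs entirely in a throwaway temp directory so you can try it anywhere; on a real server the bare repository would live somewhere like /home/git/site.git and DEPLOY_DIR would be your actual web root (every path below is an example):

```shell
set -e
TMP=$(mktemp -d)
DEPLOY_DIR="$TMP/www"
mkdir -p "$DEPLOY_DIR"

# 1. The "server side": a bare repository whose post-receive hook checks
#    out master into the web root on every push.
git init -q --bare "$TMP/site.git"
cat > "$TMP/site.git/hooks/post-receive" <<EOF
#!/bin/sh
GIT_WORK_TREE=$DEPLOY_DIR git --git-dir=$TMP/site.git checkout -f master
EOF
chmod +x "$TMP/site.git/hooks/post-receive"

# 2. The "developer side": clone, commit, push -- exactly the workflow
#    your team already knows.
git clone -q "$TMP/site.git" "$TMP/work" 2>/dev/null
cd "$TMP/work"
git config user.email dev@example.com
git config user.name "Dev"
echo '<h1>hello</h1>' > index.html
git add index.html
git commit -qm "first release"
git push -q origin HEAD:master

# 3. The push fired the hook, so the release is now in the web root.
ls "$DEPLOY_DIR"
```

That is the entire deployment pipeline in its smallest form: a push, a hook, and a checkout. Everything else in this article is layering safety and repeatability on top of that idea.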
But when you are ready to scale, ensure your underlying metal can handle the automation. Automation requires predictable performance. Don't let a cheap, oversold VPS be the reason your automated deploy times out.
Ready to build a professional DevOps environment? Deploy a high-performance KVM instance on CoolVDS today and see what single-digit latency to Oslo feels like.