Stop SSH-ing into Production: Mastering Git-Centric Infrastructure in 2016

It is 3:00 AM on a Tuesday. Your pager is screaming. The primary load balancer just dropped all connections because a junior developer manually tweaked the nginx.conf file five hours ago to "fix a quick bug" and didn't document it. When the autoscaler spun up a new node, it pulled the old configuration. Disaster.

If you are still logging into servers via SSH to run apt-get install or edit config files with vim, you are doing it wrong. In the Nordic hosting market, where reliability is currency and hourly rates for downtime are astronomical, we cannot afford "snowflake" servers.

The solution is Infrastructure as Code (IaC) driven by Git. While some are starting to call this "Git-driven operations," the concept is simple: Git is the single source of truth. Nothing exists in your infrastructure unless it is committed to the repository.

The Architecture of a Git-Centric Workflow

In late 2016, we finally have the toolchain to make this seamless. We aren't just writing shell scripts anymore. We are orchestrating states.

The workflow looks like this:

  1. Code: Developer commits a change (app code or infrastructure config) to a feature branch.
  2. Verify: CI Server (Jenkins or GitLab CI) runs unit tests and linters.
  3. Merge: The branch is merged to master.
  4. Deploy: A CD pipeline automatically pulls the artifact and applies the state via Ansible or updates a Docker service.
Pro Tip: Never allow developers direct SSH access to the production environment. Use an "SSH Bastion" or "Jump Host" only for emergency debugging, and log every keystroke. Real changes happen via git push.
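
For those emergency sessions, the hop through the bastion can be enforced from the client side. A minimal sketch, assuming hypothetical hostnames bastion.example.com and web01.internal:

# Tunnel through the bastion; the web node only accepts SSH from the bastion's IP
ssh -o ProxyCommand="ssh -W %h:%p admin@bastion.example.com" admin@web01.internal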

The Tooling: GitLab CI + Ansible + Docker

While Chef and Puppet have served us well, Ansible has won the battle for simplicity in 2016. It is agentless, works over plain SSH, and its playbooks are readable YAML. Pair it with Docker (now stable enough for production as of v1.12), and you have the building blocks of immutable infrastructure.
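
Docker 1.12 also shipped swarm mode, so the "update a Docker service" step from the workflow above becomes a single command. A rough sketch, assuming a swarm is already initialised and a service named myapp is running the image the pipeline below pushes:

# Rolling update, one node at a time, with a 10-second pause between nodes
docker service update --image registry.example.com/myapp:latest \
  --update-parallelism 1 --update-delay 10s myapp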

1. Define the Environment in Code

Let's look at a real-world Ansible playbook for setting up a web node. This ensures that every time you deploy, the server looks exactly the same. No drift.

--- 
- hosts: webservers
  become: yes
  vars:
    nginx_worker_connections: 1024
    keepalive_timeout: 65

  tasks:
    - name: Ensure Nginx is installed
      apt:
        name: nginx
        state: present
        update_cache: yes

    - name: Deploy custom Nginx configuration
      template:
        src: templates/nginx.conf.j2
        dest: /etc/nginx/nginx.conf
        validate: 'nginx -t -c %s'
      notify:
        - restart nginx

  handlers:
    - name: restart nginx
      service:
        name: nginx
        state: restarted

Notice the validate command? That prevents you from breaking production with a syntax error. If the config is invalid, Ansible refuses to deploy it.
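
You can go one step further and rehearse the whole run before merging. A quick sketch using Ansible's built-in dry-run flags against the same inventory the pipeline uses:

# Report what would change on production without touching anything
ansible-playbook -i inventory/prod playbook.yml --check --diff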

2. The Pipeline Configuration

Using GitLab CI (which has seen massive adoption this year), we can trigger this playbook automatically. Here is a .gitlab-ci.yml example that builds a Docker image and deploys it:

stages:
  - build
  - deploy

build_image:
  stage: build
  script:
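    # Assumes the runner is already logged in to registry.example.com (e.g. a prior docker login)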
    - docker build -t registry.example.com/myapp:latest .
    - docker push registry.example.com/myapp:latest

deploy_production:
  stage: deploy
  image: williamyeh/ansible:ubuntu16.04
  only:
    - master
  script:
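    # SSH_PRIVATE_KEY is a secret variable defined in the project's GitLab settings, never committed to the repo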
    - echo "$SSH_PRIVATE_KEY" > deploy_key.pem
    - chmod 600 deploy_key.pem
    - ansible-playbook -i inventory/prod playbook.yml --private-key=deploy_key.pem

The Infrastructure Bottleneck: Why I/O Matters

Here is the ugly truth about automated pipelines: they chew up I/O. When you are running concurrent builds, creating Docker images, and pushing artifacts, you are hammering the disk.

I recently worked on a project migrating a Magento cluster to a CI/CD workflow. On their old legacy VPS provider, the deployment pipeline took 14 minutes. Why? Because the spinning hard drives (HDD) couldn't handle the random read/write operations of compiling assets and building containers.

We moved the infrastructure to CoolVDS, which utilizes pure NVMe SSD storage. The result? The deployment time dropped to 3 minutes.

Metric                   Legacy HDD VPS     CoolVDS NVMe
Random IOPS (4k)         ~300               ~50,000+
Docker Build Time        480 seconds        110 seconds
Database Restore Test    Fail (timeout)     Success (22s)

If you are automating, you need I/O headroom. CPU steal time and I/O wait will kill your pipeline faster than a syntax error.
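
Before blaming your pipeline, measure the disk itself. A quick sketch with fio and iostat (assumes the fio and sysstat packages are installed):

# 60-second 4k random-read test; compare the reported IOPS against the table above
fio --name=randread --ioengine=libaio --direct=1 --rw=randread \
    --bs=4k --size=1G --numjobs=4 --iodepth=32 --runtime=60 --time_based --group_reporting
# Watch %iowait and %steal while a build is running
iostat -x 5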

Norwegian Context: Latency and Sovereignty

For those of us operating out of Norway or serving Nordic clients, we have the looming shadow of data regulations. With the EU data protection framework tightening (and the GDPR text finalized earlier this year), knowing exactly where your code and data reside is critical.

By using a Norwegian-optimized provider like CoolVDS, you ensure that:

  • Data Residency: Your production data stays within the jurisdiction you expect, simplifying compliance with Datatilsynet guidelines.
  • Low Latency: Pushing a 2GB Docker image to a server in Oslo from an office in Bergen takes seconds on our network peering, compared to slogging it over the Atlantic to US-East.

Conclusion: Automate or Expire

The days of manual system administration are over. The complexity of modern stacks—microservices, containers, distributed databases—demands that we treat infrastructure as software.

Adopting a Git-centric workflow requires two things: discipline in your team and raw power in your infrastructure. You bring the code; we provide the iron.

Don't let slow disks bottleneck your innovation. Deploy a CoolVDS high-performance NVMe instance today and watch your Jenkins build times plummet.