GitOps Workflows: Stop Using 'kubectl' in Production
It is nearly 2019. If you are still SSH-ing into your production servers to pull a git repo, or worse, running kubectl apply -f deployment.yaml from your laptop, you are operating on borrowed time. I have seen entire clusters vanish because a tired engineer targeted the wrong context in their terminal. I've seen 'configuration drift' turn a stable staging environment into a debugging nightmare because someone hot-fixed production and forgot to commit the change.
In the high-stakes world of Nordic hosting, where uptime is expected and latency to Oslo must be measured in single-digit milliseconds, we need a better way. Enter GitOps. Coined by Weaveworks last year, it is rapidly becoming the standard for managing Kubernetes clusters. It forces your infrastructure to be as version-controlled as your application code.
But GitOps isn't just a buzzword; it's a survival strategy for the modern DevOps engineer. Let's break down how to build a pipeline that satisfies both the Norwegian Datatilsynet (Data Protection Authority) and your need for sleep.
The Problem with "Push" Deployments
Most traditional CI/CD pipelines (Jenkins, Travis CI, GitLab CI) utilize a "Push" model. The CI server builds the Docker image, runs tests, and then—here is the security flaw—it connects to your Kubernetes cluster to run a deployment command.
To make this work, you have to give your CI server credentials (like a kubeconfig file) with admin rights to your cluster. If your CI server is compromised, your entire infrastructure is exposed. Furthermore, if you change something manually in the cluster, the CI server has no idea. You have drift.
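For contrast, a push-style deployment job typically looks something like the sketch below. The job name, the kubectl version, and the KUBE_CONFIG_B64 variable are illustrative; the point is that the runner holds a full kubeconfig and talks straight to the API server.
# Anti-pattern: the CI runner holds cluster credentials
deploy_push_style:
  stage: deploy
  image: alpine:3.8
  script:
    - apk add --no-cache curl
    - curl -sLo /usr/local/bin/kubectl https://storage.googleapis.com/kubernetes-release/release/v1.11.3/bin/linux/amd64/kubectl
    - chmod +x /usr/local/bin/kubectl
    - echo "$KUBE_CONFIG_B64" | base64 -d > kubeconfig   # admin kubeconfig stored as a CI variable
    - export KUBECONFIG=$PWD/kubeconfig
    - kubectl apply -f deployment.yaml                   # anyone who owns the runner owns the cluster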
The GitOps "Pull" Model
GitOps flips this on its head. Your cluster doesn't accept commands from the outside; an agent inside the cluster watches a Git repository.
- Single Source of Truth: Your Git repository contains the entire state of your cluster (YAML manifests, Helm charts).
- The Operator: A tool like Weave Flux runs inside your K8s cluster.
- Synchronization: When you merge a Pull Request to the master branch, Flux sees the change and applies it to the cluster.
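Concretely, the repository Flux watches might look something like this (the layout is purely illustrative; Flux applies whatever valid manifests it finds in the paths you point it at):
k8s-config/
├── namespace.yaml
├── deployment.yaml
├── service.yaml
└── ingress.yaml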
This workflow provides an instant audit trail. Who changed the replica count from 3 to 5? git blame tells you exactly who, when, and why. For companies operating under strict GDPR regulations in Europe, this level of auditability isn't optional; it's essential.
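In practice that audit is just ordinary Git tooling run against the config repository; for example (the file path is illustrative):
# Who touched the deployment manifest, and when?
git log --oneline -- deployment.yaml
# Which commit and author last changed the replica count?
git blame -L '/replicas/,+1' deployment.yaml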
Configuration Snippet: The Flux Operator
In 2018, deploying Flux is straightforward using Helm (v2). However, you must be careful with Tiller (the server-side component of Helm) regarding security permissions. We recommend securing Tiller with mutual TLS, or simply using Flux to apply raw manifests if your team is small.
# Installing Flux via Helm (Standard 2018 Approach)
helm repo add weaveworks https://weaveworks.github.io/flux
helm install --name flux \
  --set git.url=git@gitlab.com:your-org/k8s-config.git \
  --set git.branch=master \
  --namespace flux \
  weaveworks/flux
Once running, you retrieve the SSH key generated by Flux and add it as a Deploy Key in your GitLab or GitHub repository (grant it write access so Flux can push its sync tag):
kubectl -n flux logs deployment/flux | grep identity.pub | cut -d '"' -f 2
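A note on the Tiller caveat above: if you do keep Tiller around, the Helm 2 documentation describes securing it with mutual TLS. A rough sketch, assuming you have already generated the certificates (the file names are placeholders):
# Install Tiller with mutual TLS enabled (Helm 2)
helm init --service-account tiller \
  --tiller-tls --tiller-tls-verify \
  --tiller-tls-cert ./tiller.cert.pem \
  --tiller-tls-key ./tiller.key.pem \
  --tls-ca-cert ./ca.cert.pem
# Client calls must then present a client certificate
helm ls --tls --tls-cert ./helm.cert.pem --tls-key ./helm.key.pem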
The Foundation: Why Hardware Matters for GitOps
Automation is only as good as the infrastructure it runs on. Kubernetes is notoriously I/O heavy. The etcd key-value store, which maintains the state of your cluster, requires very low-latency storage. If disk fsync latency spikes, etcd starts missing heartbeats and losing leader elections, and the whole cluster becomes unstable.
This is where the "commodity VPS" market fails you. Many providers oversell their storage I/O, leading to "noisy neighbor" issues where another customer's database backup slows down your API response times.
Pro Tip: Always check your disk latency. On a CoolVDS NVMe instance, we typically see write latencies well under 1ms. You can test this yourself with ioping:
ioping -c 10 .
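ioping gives you a quick feel for general latency; for etcd specifically, it is the fdatasync path that matters. A rough benchmark with fio, following the commonly cited etcd disk-check recipe (the size and block size are illustrative):
# Measure fsync latency the way etcd's write-ahead log exercises the disk
fio --rw=write --ioengine=sync --fdatasync=1 \
    --directory=. --size=22m --bs=2300 --name=etcd-fsync-check
# Check the fsync/fdatasync percentiles in the output;
# the 99th percentile should stay well under 10ms for a healthy etcd.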
At CoolVDS, we utilize KVM virtualization. Unlike container-based virtualization (OpenVZ/LXC), KVM provides true kernel isolation: each guest runs its own kernel. This is critical for running Docker/Kubernetes, as you avoid the kernel compatibility issues that often plague nested container setups. When you are building a GitOps pipeline that relies on spinning up dynamic environments, you need the raw power and isolation of KVM combined with local NVMe storage.
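You can verify what you are actually running on from inside the guest. A quick check (the device name may differ on your instance):
# Should report "kvm" on a KVM guest
systemd-detect-virt
# 0 means the disk is non-rotational (SSD/NVMe)
cat /sys/block/nvme0n1/queue/rotational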
Handling Secrets in GitOps
A common objection to GitOps is: "I can't commit passwords to Git!" Correct. You should never commit raw secrets.
The 2018 best practice solution is Bitnami Sealed Secrets. It allows you to encrypt a secret on your laptop using a public key. The result is a "SealedSecret" CRD (Custom Resource Definition) that is safe to commit to a public repo. The controller running in your cluster—and only that controller—has the private key to decrypt it.
# 1. Create a raw secret locally (dry-run)
kubectl create secret generic db-pass --from-literal=password=SuperSecure123 --dry-run -o json > secret.json
# 2. Seal the secret (safe to commit to Git)
kubeseal < secret.json > sealed-secret.json
# 3. Commit sealed-secret.json to your repo
git add sealed-secret.json && git commit -m "Add db password"
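If the machine doing the sealing cannot reach the cluster, kubeseal can also work offline against the controller's public certificate. A small sketch, assuming the Sealed Secrets controller is already installed:
# Fetch the controller's public cert once (requires cluster access)
kubeseal --fetch-cert > pub-cert.pem
# Seal offline against the saved cert (no cluster access needed)
kubeseal --cert pub-cert.pem < secret.json > sealed-secret.json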
Local Nuances: The Norwegian Context
Latency matters. If your dev team is in Oslo or Trondheim, pulling heavy Docker images from a server in US-East is painful. By hosting your Kubernetes nodes and your private Docker Registry on CoolVDS servers located in Europe, you leverage the peering at NIX (Norwegian Internet Exchange). This ensures that your image pull times are negligible, speeding up the feedback loop for your developers.
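A quick way to sanity-check this is to time the registry's API endpoint from one of your nodes (swap in your own registry hostname):
# TCP connect time and total time to the Docker Registry v2 API root
curl -s -o /dev/null -w "connect: %{time_connect}s  total: %{time_total}s\n" \
    https://registry.coolvds.com/v2/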
Moreover, with the enforcement of GDPR this year, data sovereignty is paramount. Ensure your persistent volumes (PVs) and database backups reside within the EEA to avoid legal headaches around the Schrems case and cross-border data transfer mechanisms.
Sample GitLab CI Pipeline for GitOps
Here is how a clean .gitlab-ci.yml looks when you separate CI (Build) from CD (GitOps). Notice we do not run kubectl here. We simply build the image and update the manifest repo.
stages:
  - build
  - update-manifests

build_image:
  stage: build
  image: docker:stable
  services:
    - docker:dind
  script:
    # Log in first; REGISTRY_USER / REGISTRY_PASSWORD are CI variables you define for your registry
    - docker login -u "$REGISTRY_USER" -p "$REGISTRY_PASSWORD" registry.coolvds.com
    - docker build -t registry.coolvds.com/myapp:$CI_COMMIT_SHA .
    - docker push registry.coolvds.com/myapp:$CI_COMMIT_SHA

update_gitops_repo:
  stage: update-manifests
  image: alpine:3.8
  before_script:
    - apk add --no-cache git openssh
    # Set up SSH keys for git write access to the config repo here
    - git config --global user.email "ci@example.com"
    - git config --global user.name "GitLab CI"
  script:
    - git clone git@gitlab.com:your-org/k8s-config.git
    - cd k8s-config
    - sed -i "s/myapp:.*$/myapp:$CI_COMMIT_SHA/" deployment.yaml
    - git add deployment.yaml
    - git commit -m "Bump version to $CI_COMMIT_SHA"
    - git push origin master
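For completeness, that sed line assumes the image tag sits on a single line in deployment.yaml, roughly like the minimal sketch below (your real manifest will have more fields):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          # The tag below is what the CI job's sed command rewrites on every build
          image: registry.coolvds.com/myapp:abc123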
Conclusion
GitOps brings sanity to infrastructure management. It turns operations into a merge request workflow that every developer understands. But remember, a robust software pipeline requires robust hardware. Don't let IOwait kill your Kubernetes master nodes.
If you are ready to build a serious GitOps infrastructure with predictable performance and low latency in Norway, verify your foundation first.
Spin up a high-performance KVM instance on CoolVDS today and stop fighting with slow hardware.