GitOps in the Trenches: Building Bulletproof K8s Workflows in 2021
I still remember the silence in the Slack channel. It was 2019, a Friday afternoon, naturally. A junior developer had run kubectl apply -f . from their laptop, targeting production instead of staging. In seconds, the ingress configurations were overwritten, SSL termination broke, and our load balancers started throwing 502s faster than I could type kubectl rollout undo.
That incident wasn't a personnel failure; it was a process failure. If you are still allowing humans—or even CI servers with god-mode cluster admin privileges—to push changes directly to your cluster, you are playing Russian Roulette with your infrastructure.
Welcome to GitOps. It’s not just a buzzword for 2021; it is the only sane way to manage complex Kubernetes environments. In this guide, we are going to tear down the "Push" model, implement a "Pull" architecture using ArgoCD, and discuss why the underlying metal—specifically high-performance VPS in Norway—matters more than you think for control plane stability.
The Problem: Configuration Drift and Security Nightmares
In a traditional CI/CD setup (the "Push" model), your Jenkins or GitLab CI runner builds a Docker image and then executes a deployment command directly against the Kubernetes API.
This approach has two massive flaws:
- Security: You have to store your Cluster Admin credentials inside your CI tool. If your CI gets compromised (hello, SolarWinds supply chain attacks), your production cluster is gone.
- Drift: If someone manually changes a resource limit on the cluster to "fix" an issue during a fire, your Git repository no longer reflects reality. The next deployment might fail or revert the fix unexpectedly.
The Solution: The Pull Model
With GitOps, the cluster pulls its own state. An operator inside the cluster (like ArgoCD or Flux v2) watches a Git repository. When it sees a change in the manifest, it synchronizes the cluster state to match Git. Git becomes the single source of truth.
Pro Tip: In 2021, separating your Application Code repo from your Infrastructure Config repo is not optional—it's mandatory. It prevents CI loops and keeps your commit history clean. Monorepos are fine for code, but keep your YAML manifests separate.
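A typical split, assuming the repo names used later in this article (the layout and the app repo name are illustrative):

k8s-manifests/            # config repo: watched by ArgoCD
├── base/                 # shared Deployments, Services, etc.
└── overlays/
    ├── staging/
    └── production/
payment-gateway/          # app repo: source code + Dockerfile, built by CI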
Step 1: The Tooling Stack
For this architecture, we rely on battle-tested OSS:
- Kubernetes v1.20+: Stable, reliable.
- ArgoCD v2.0: The new UI is cleaner, and the performance improvements in the repo-server are noticeable.
- Kustomize: For overlay management (Staging vs. Production); a sample overlay follows this list.
- CoolVDS NVMe Instances: Hosting the control plane.
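Since the Application we define in Step 3 points at overlays/production, here is what a minimal Kustomize overlay might look like (the base resources and image name are placeholders):

# overlays/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
images:
  - name: registry.gitlab.com/my-org/payment-gateway   # placeholder image name
    newTag: a1b2c3d   # rewritten by CI on every release (see the CI snippet below)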
Step 2: Installing ArgoCD
Let's get straight to the terminal. We create a namespace and apply the manifests.
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
Once the pods are running, you need to access the UI. In a production environment on CoolVDS, you would configure an Ingress with Let's Encrypt. For now, let's port-forward to verify:
kubectl port-forward svc/argocd-server -n argocd 8080:443
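In v2.0, the initial admin password is no longer the server pod's name, as it was in 1.x; it lives in a dedicated secret:

# Fetch the auto-generated admin password (ArgoCD v2.0+)
kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath="{.data.password}" | base64 -d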
Step 3: Defining the Application
Here is where the magic happens. Instead of writing a pipeline script, you define an Application CRD. This tells ArgoCD what to deploy and where.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: nordic-payment-gateway
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'git@github.com:my-org/k8s-manifests.git'
    targetRevision: HEAD
    path: overlays/production
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: payments
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
Notice the selfHeal: true flag. This is the killer feature. If a sysadmin manually deletes a Service in the cluster, ArgoCD notices the deviation immediately and recreates it. Zero downtime. Zero panic.
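Save the manifest (the filename application.yaml is arbitrary) and apply it like any other resource. If you have the argocd CLI installed and are logged in, you can confirm the sync and health status:

kubectl apply -f application.yaml
# Optional: check sync status, health, and the deployed revision
argocd app get nordic-payment-gateway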
Managing Secrets: The Elephant in the Room
You cannot commit secrets.yaml to Git. If you do, your credentials are burned.
In 2021, the cleanest approach for small to medium teams is Bitnami Sealed Secrets. It uses asymmetric encryption. You encrypt the secret on your laptop using a public key, commit the "SealedSecret" CRD to Git, and the controller inside the cluster (which holds the private key) decrypts it.
Installation:
# Install client-side tool (Linux)
wget https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.16.0/kubeseal-linux-amd64 -O kubeseal
sudo install -m 755 kubeseal /usr/local/bin/kubeseal
# Install controller in cluster
kubectl apply -f https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.16.0/controller.yaml
Workflow:
# Create a raw secret (dry run)
kubectl create secret generic db-creds --from-literal=password=SuperSecure123 --dry-run=client -o yaml > secret.yaml
# Seal it
kubeseal --format=yaml < secret.yaml > sealed-secret.yaml
# Now it is safe to git push sealed-secret.yaml
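The resulting file contains only ciphertext. Roughly, it looks like this (the encryptedData value is shortened and purely illustrative):

apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: db-creds
  namespace: default
spec:
  encryptedData:
    password: AgBy8hC...   # ciphertext; only the in-cluster controller can decrypt it
  template:
    metadata:
      name: db-creds
      namespace: default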
The Infrastructure Factor: Why Latency & Compliance Matter
You might have the perfect YAML, but if your etcd latency spikes, your Kubernetes cluster will destabilize. I've seen clusters fall apart because the control plane nodes were on cheap, oversold hardware with "noisy neighbors" stealing CPU cycles.
When running GitOps operators like ArgoCD, the controller is constantly polling your Git repositories and the Kubernetes API. It is an I/O and network-intensive process.
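Before trusting a node with etcd or your GitOps controllers, benchmark its storage. A quick sanity check with fio, modeled on etcd's own guidance (the directory path is arbitrary and must exist; the rule of thumb is that 99th-percentile fdatasync latency should stay under 10ms):

# Sequential writes with an fsync after each, mimicking the etcd WAL
fio --rw=write --ioengine=sync --fdatasync=1 \
    --directory=/var/lib/etcd-test --size=22m --bs=2300 --name=etcd-check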
| Feature | Standard Cloud VPS | CoolVDS NVMe KVM |
|---|---|---|
| Disk I/O | Often SATA/SAS (Slow) | Native NVMe (Ultra Low Latency) |
| Virtualization | Container-based (Shared Kernel) | KVM (Kernel Isolation) |
| Location | Generic EU Region | Oslo, Norway (Datatilsynet Compliant) |
For Norwegian businesses, the Schrems II ruling (July 2020) changed everything. Transferring personal data to US-owned clouds is now a legal minefield. Hosting your GitOps control plane and your production workloads on CoolVDS in Norway ensures you aren't just technically performant, but legally secure. We peer directly at NIX (Norwegian Internet Exchange), meaning your sync latency is measured in single-digit milliseconds.
Handling Image Updates Automatically
We want the cluster to update when we build a new Docker image. Since we don't run kubectl set image anymore, how do we update the manifest?
You have two choices:
- CI Commit: Your CI pipeline (GitLab CI/Jenkins) runs sed or yq to update the image tag in the config repo, then commits and pushes. ArgoCD picks up the change.
- Argo CD Image Updater: A newer tool that watches your container registry and automatically patches the Git repository.
For most teams in 2021, the CI Commit method is more transparent and easier to debug. Here is a snippet for your .gitlab-ci.yml:
deploy_manifests:
  stage: deploy
  image: bitnami/git:2.32.0
  script:
    - git config --global user.email "ci-bot@coolvds.com"
    - git config --global user.name "CI Bot"
    # Clone the separate config repo using a project access token
    - git clone https://oauth2:${ACCESS_TOKEN}@gitlab.com/my-org/k8s-config.git
    - cd k8s-config/overlays/production
    # Patch the image tag that Kustomize injects into the manifests
    - sed -i "s/newTag: .*/newTag: ${CI_COMMIT_SHORT_SHA}/" kustomization.yaml
    # [skip ci] stops the config repo's own pipeline from looping
    - git commit -am "Update image to ${CI_COMMIT_SHORT_SHA} [skip ci]"
    - git push origin master
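If you do opt for Argo CD Image Updater instead, it is driven by annotations on the Application resource. A minimal sketch, where the alias "gateway" and the registry path are placeholders:

metadata:
  annotations:
    argocd-image-updater.argoproj.io/image-list: gateway=registry.gitlab.com/my-org/payment-gateway
    argocd-image-updater.argoproj.io/gateway.update-strategy: latest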
Final Thoughts
GitOps is about discipline. It moves the complexity from the deployment script to the architecture itself. It creates an audit trail that satisfies even the strictest auditors.
But software discipline needs hardware reliability. Don't build a glass castle on a swamp. Ensure your Kubernetes nodes are running on dedicated, high-performance resources. Low latency and high disk I/O are the unsung heroes of a stable GitOps workflow.
Ready to harden your infrastructure? Deploy a KVM-based, NVMe-powered instance on CoolVDS today and get your latency to Oslo under 2ms.