GitOps: The End of "It Works on My Machine"
I still remember the silence in the Slack channel. It was 2018. A junior dev, exhausted after a sprint, ran a manual kubectl apply against the wrong context and accidentally wiped a production namespace instead of staging. That day, we learned a hard lesson: humans should not touch the cluster directly.
If you are still SSHing into servers to pull code, or running deployment scripts from a Jenkins agent that has root access to your cluster, you are building a house of cards. In the Nordic hosting market, where stability and compliance (GDPR) are non-negotiable, we need a better way. Enter GitOps.
This isn't just theory. This is the exact workflow we recommend for high-velocity teams running on CoolVDS infrastructure in Oslo.
The Core Principle: Git as the Single Source of Truth
In a GitOps workflow, the desired state of your entire infrastructure is versioned in Git. You do not push changes to the cluster; an agent inside the cluster pulls changes from Git. This inversion of control is critical for security. Your CI pipeline no longer needs cluster-admin credentials.
The Stack (June 2020 Edition)
While the ecosystem is vast, reliability is key. Here is the battle-tested stack we are seeing dominate production environments this year:
- Orchestrator: Kubernetes 1.18+
- GitOps Controller: ArgoCD (v1.5)
- Config Management: Kustomize (native in kubectl now)
- Secret Management: Bitnami Sealed Secrets
Structuring Your Repo for Multi-Environment Sanity
Don't dump everything into one folder. Use Kustomize overlays. This allows you to have a base configuration and specific patches for Staging and Production without duplicating YAML files. This drastically reduces configuration drift.
├── base
│   ├── deployment.yaml
│   ├── service.yaml
│   └── kustomization.yaml
└── overlays
    ├── staging
    │   ├── kustomization.yaml
    │   └── replica_patch.yaml
    └── production
        ├── kustomization.yaml
        └── replica_patch.yaml
In overlays/production/kustomization.yaml, you enforce higher-availability settings suited to the heavy I/O loads that CoolVDS NVMe instances can sustain:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
patchesStrategicMerge:
- replica_patch.yaml
commonLabels:
  environment: production
  region: no-oslo-1
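For completeness, the replica_patch.yaml referenced above can be tiny. A minimal sketch, assuming the base Deployment is named payment-service (adjust the name to match whatever your base manifest defines):

# replica_patch.yaml -- illustrative patch; the name must match
# the Deployment defined in base/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payment-service
spec:
  replicas: 4

Kustomize strategic-merges this over the base, so production runs four replicas while the staging overlay can keep a single one.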
Implementing the Reconcile Loop with ArgoCD
Installing ArgoCD is straightforward, but configuring it for "Auto-Heal" is where the magic happens. If someone manually changes a service type from ClusterIP to NodePort for debugging, ArgoCD should detect the drift and revert it immediately.
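If you have not installed it yet, the stock install is two commands against the official manifests. Pin the tag to the v1.5.x release you actually run; v1.5.8 below is an assumed example:

kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v1.5.8/manifests/install.yaml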
Here is a declarative Application manifest that ensures your cluster matches Git perfectly:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: payment-service-prod
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'git@gitlab.com:your-org/infra-repo.git'
    targetRevision: HEAD
    path: overlays/production
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: payments
  syncPolicy:
    automated:
      prune: true    # Deletes resources not in Git
      selfHeal: true # Reverts manual changes immediately
    syncOptions:
      - CreateNamespace=true
Pro Tip: Be careful with prune: true in production initially. If you mess up your Git path, ArgoCD will happily delete your entire deployment. Always test dry-runs in Staging first.
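One way to do that dry run with the argocd CLI, using the Application name from the manifest above:

# Preview what a sync would do without touching the cluster
argocd app sync payment-service-prod --dry-run
# Show the drift between Git and the live state
argocd app diff payment-service-prod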
The Hardware Reality: Why IOPS Matter for GitOps
GitOps controllers like ArgoCD or Flux are chatty. They constantly query the Kubernetes API server (and by extension, etcd) to compare the live state against the desired state. If your underlying storage is slow, your reconcile loops lag.
We see this often with cheap VPS providers overselling HDD storage. The API server latency spikes, and the GitOps sync times out.
This is why CoolVDS standardizes on KVM virtualization with local NVMe storage. When you run a GitOps controller, you need high random I/O performance (IOPS). Our benchmarks show that etcd latency on our NVMe instances remains under 2ms even under heavy load. If you are building a platform in Norway, don't let disk I/O be your bottleneck.
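You can verify this on your own nodes. A sketch using the widely cited etcd fio check, which measures write + fdatasync latency on the disk backing etcd (adjust --directory to your actual data dir):

# Measures fdatasync latency with etcd-like write sizes
fio --rw=write --ioengine=sync --fdatasync=1 \
    --directory=/var/lib/etcd --size=22m --bs=2300 --name=etcd-fsync-check

The etcd docs suggest keeping the 99th-percentile fdatasync duration below 10ms; on local NVMe you should see far less.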
Data Sovereignty and Compliance
With the current scrutiny on the EU-US Privacy Shield, Nordic companies are rightfully nervous about where their infrastructure metadata lives. When you use a managed K8s service from a US hyperscaler, the control plane logs usually leave the country.
By running your own cluster on CoolVDS hardware in Oslo:
- Data Residency: Your etcd state (secrets, configs) stays physically in Norway.
- Latency: You connect directly via NIX (Norwegian Internet Exchange), offering sub-5ms latency to most Norwegian ISPs.
- Auditability: You control the entire stack, satisfying the strict requirements of Datatilsynet.
Handling Secrets without a Vault
You cannot commit raw secrets to Git. In 2020, HashiCorp Vault is great but complex to manage for smaller teams. The pragmatic choice is Bitnami Sealed Secrets. It uses asymmetric encryption. You can commit the encrypted secret to Git, and only the controller running inside your cluster can decrypt it.
Workflow:
- Developer creates a secret locally:
  kubectl create secret generic db-pass --from-literal=password=supersecret --dry-run=client -o yaml > secret.yaml
- Encrypt it:
  kubeseal -o yaml < secret.yaml > sealed-secret.yaml
- Commit sealed-secret.yaml to Git.
- ArgoCD syncs it; the SealedSecrets controller decrypts it into a native K8s Secret.
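The committed file looks roughly like this, assuming the payments namespace from the Application above (ciphertext shortened for illustration; the real blob is several hundred characters and is safe to commit):

apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: db-pass
  namespace: payments
spec:
  encryptedData:
    password: AgBy8hCi...

Note that sealing is namespace-scoped by default: a SealedSecret encrypted for payments cannot be decrypted into any other namespace.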
Conclusion
GitOps is not just a buzzword; it is the standard for operational maturity. It separates the "what" (Git) from the "how" (K8s), allowing you to sleep better at night knowing your infrastructure is immutable and documented.
However, software is only as good as the hardware it runs on. A robust GitOps workflow demands a robust, low-latency foundation. Don't let your deployment velocity be capped by noisy neighbors or poor I/O.
Ready to stabilize your stack? Deploy a high-performance KVM instance on CoolVDS in Oslo today and experience the difference raw NVMe power makes for your Kubernetes control plane.