Mastering GitOps Workflows: Moving Beyond "kubectl apply" in 2020
It is 3:00 AM. Your pager just went off. A deployment script failed halfway through, leaving your production cluster in a zombie state—half the pods are running version 1.2, the other half are crash-looping on version 1.3, and the rollback script just timed out.
If this sounds familiar, your deployment pipeline is fragile. In the Nordic hosting market, where stability and uptime are the currency we trade in, fragility is unacceptable. The industry is rapidly shifting away from imperative scripts toward GitOps.
As of early 2020, GitOps isn't just a buzzword from Weaveworks anymore; it is the standard for mature Kubernetes operations. But implementing it requires more than just installing a tool. It requires a fundamental shift in how we treat infrastructure, specifically regarding the underlying compute resources.
The Problem with Push-Based CI/CD
Traditionally, we used Jenkins or GitLab CI to build an image and then run a script that connects to the cluster to deploy it. This is the "Push" model.
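For context, here is a minimal sketch of such a push-based job. The deployment name, registry path, and image tag are illustrative, not from a real pipeline:

```yaml
# .gitlab-ci.yml (sketch): the CI runner itself holds production credentials
deploy_production:
  stage: deploy
  image: bitnami/kubectl:1.17
  script:
    # KUBECONFIG is injected from a CI/CD variable -- exactly the credential
    # the push model forces you to hand over to the CI system
    - kubectl set image deployment/payment-gateway app=registry.gitlab.com/your-org/payment-gateway:$CI_COMMIT_SHA -n production
  only:
    - master
```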
The push model has two massive security flaws:
- Credential Leakage: Your CI server needs `cluster-admin` (or similarly high-level) access to your production environment. If your CI gets breached, your production is gone.
- Configuration Drift: If a developer manually edits a deployment via `kubectl edit` to hotfix a bug, your Git repository no longer reflects reality. The next deployment will overwrite that fix, causing a regression (see the drift check below).
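Drift is easy to detect once you look for it. A quick check, assuming your manifests follow the repo layout used later in this article:

```bash
# Exit code 0: live state matches Git. Exit code 1: the cluster has drifted.
kubectl diff -f k8s/overlays/oslo-prod/
```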
The Solution: The Pull Model (GitOps)
In a GitOps workflow, the cluster updates itself. You run an operator inside the cluster (like ArgoCD or Flux) that watches a Git repository. When the repo changes, the operator pulls the changes and applies them. The cluster credentials never leave the cluster.
1. The Stack
For this guide, we assume a setup common among high-performance European dev teams in 2020:
- Infrastructure: KVM-based VPS (crucial for `etcd` performance).
- Orchestrator: Kubernetes 1.17.
- GitOps Controller: ArgoCD v1.4.
- Registry: GitLab Container Registry.
2. Implementing the Controller
First, we deploy the controller. Do not use messy scripts. Use declarative manifests.
```bash
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v1.4.2/manifests/install.yaml
```
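Verify the rollout before going further. In the 1.4-era manifests, the initial admin password is the name of the argocd-server pod:

```bash
# Wait until all ArgoCD components report Running
kubectl -n argocd get pods

# Expose the UI/API locally (use an Ingress for anything permanent)
kubectl -n argocd port-forward svc/argocd-server 8080:443

# Retrieve the initial admin password (the argocd-server pod name)
kubectl -n argocd get pods -l app.kubernetes.io/name=argocd-server -o name | cut -d'/' -f2
```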
Once running, we define an Application. This custom resource tells ArgoCD which Git repo to watch and where to deploy it.
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: nordic-payment-gateway
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://gitlab.com/your-org/infra-manifests.git
    targetRevision: HEAD
    path: k8s/overlays/oslo-prod
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```
With `selfHeal: true`, if anyone manually changes the cluster, ArgoCD detects the drift and reverts it immediately. This enforces the "Single Source of Truth."
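Once the Application is applied, you can watch it converge. Assuming the `argocd` CLI is installed and logged in to your instance:

```bash
# Show sync status, health, and any diff against Git
argocd app get nordic-payment-gateway

# Trigger an immediate sync instead of waiting for the polling interval
argocd app sync nordic-payment-gateway
```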
Infrastructure Performance: The Hidden Bottleneck
GitOps controllers are chatty. They constantly poll Git repositories and query the Kubernetes API server to compare states. This generates significant load on the control plane.
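If that polling pressure becomes a problem, the controller's resync interval can be stretched. A hedged sketch, assuming the v1.4 manifests (where the application controller ships as a Deployment) and its `--app-resync` flag; verify both against your installed version:

```bash
# Stretch the resync period from the 180s default to 5 minutes,
# trading reaction speed for a quieter API server
kubectl -n argocd patch deployment argocd-application-controller --type=json \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/command/-","value":"--app-resync=300"}]'
```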
Pro Tip: Avoid OpenVZ or LXC containers for hosting Kubernetes nodes running GitOps controllers. The lack of kernel isolation leads to "noisy neighbor" issues where CPU steal time can delay synchronization loops.
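You can spot a noisy neighbor from inside the guest. Watch the steal column:

```bash
# Five one-second samples; a consistently non-zero "st" (steal) column
# means the hypervisor is handing your CPU cycles to someone else
vmstat 1 5
```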
We see this constantly with clients migrating to CoolVDS. They attempt to run Kubernetes on cheap, oversold container-based VPS providers, and the result is etcd latency spikes. etcd requires consistently low fsync latency; if your storage I/O is erratic, writes to the cluster state stall and the control plane becomes unstable.
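You don't have to take this on faith; measure it. The widely used fio check below approximates etcd's write-ahead-log pattern (small sequential writes, each followed by fdatasync). Point it at the volume that will back etcd, using a scratch directory rather than the live data dir:

```bash
# Roughly etcd's WAL workload: 2300-byte sequential writes, fsynced each time
fio --name=etcd-fsync --directory=/mnt/etcd-bench --rw=write \
    --bs=2300 --size=22m --ioengine=sync --fdatasync=1

# Guidance from the etcd hardware docs: p99 fdatasync latency should stay below 10ms
```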
At CoolVDS, we utilize pure KVM virtualization backed by local NVMe arrays. This ensures that when etcd writes to disk, it happens instantly. In our benchmarks against standard SATA SSD hosting, NVMe-backed KVM instances reduce GitOps sync time by up to 40% during high-load deployments.
Handling Secrets in 2020
You cannot commit raw secrets to Git. That is a GDPR violation waiting to happen, especially with the strict oversight of the Norwegian Datatilsynet. Since we are using GitOps, we need a way to encrypt secrets in Git and decrypt them inside the cluster.
Bitnami Sealed Secrets is the standard choice right now.
1. Install the controller on your cluster:
```bash
kubectl apply -f https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.9.8/controller.yaml
```
2. Encrypt your secret locally:
```bash
# Create a raw secret locally (--dry-run keeps it out of the cluster)
kubectl create secret generic db-creds --from-literal=password=SuperSecret --dry-run -o json > secret.json

# Seal it
kubeseal --format=yaml < secret.json > sealed-secret.yaml
```
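The output is a SealedSecret custom resource, shaped roughly like this (the ciphertext is truncated and illustrative; yours will differ):

```yaml
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: db-creds
  namespace: default
spec:
  encryptedData:
    # Encrypted with the controller's public key; only the controller can decrypt it
    password: AgBy3i4OJSWK+PiTySYZZA9rO43cGDEq...
```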
The resulting `sealed-secret.yaml` can be safely committed to your public or private Git repo. Only the controller running on your CoolVDS instance possesses the private key to decrypt it.
Data Sovereignty and Latency
For Norwegian businesses, hosting your GitOps infrastructure outside the country introduces two risks:
- Latency: Round trips to Frankfurt or Amsterdam add milliseconds. For a high-frequency trading bot or a real-time gaming backend, the 20-30ms difference between Oslo and Central Europe matters. Using CoolVDS keeps your ping to NIX (Norwegian Internet Exchange) under 3ms.
- Compliance: While GDPR applies across Europe, ensuring your data—and your deployment metadata—physically resides in Norway simplifies compliance audits.
Summary
GitOps is not the future; it is the present reality for scalable systems in 2020. By decoupling deployment from CI, you increase security and stability.
However, software is only as good as the hardware it runs on. A robust GitOps workflow demands high IOPS and strict resource isolation. Don't let IOwait kill your deployment pipelines.
Ready to stabilize your stack? Deploy a KVM-based, NVMe-powered instance on CoolVDS today and see the difference raw performance makes to your Kubernetes control plane.