Stop kubectl apply-ing into the Void: A Battle-Tested GitOps Workflow for 2024
I still see it happening. Senior engineers, who should know better, pointing their terminal at a production cluster context and running kubectl apply -f . from their local machine. It works nine times out of ten. The tenth time? You overwrite a config map that wasn't versioned, the pods crash loop, and you spend the next four hours explaining to the CTO why the checkout page is throwing 502 errors.
It is May 2024. If you are managing infrastructure manually, you are choosing to fail. The only acceptable state for a production cluster is one that strictly reflects a Git repository.
This is not a theoretical overview. This is the exact GitOps workflow I deploy for clients running critical workloads in Oslo, where data sovereignty (Schrems II) and uptime are non-negotiable. We will use ArgoCD, Kustomize, and GitHub Actions, running on infrastructure that doesn't choke when the controller loop tightens.
1. The Separation of Concerns: CI vs. CD
The biggest mistake teams make is letting their CI pipeline (Jenkins, GitHub Actions, GitLab CI) touch the Kubernetes API server directly. This is a security nightmare. If your CI runner is compromised, your cluster is gone.
In a proper GitOps setup, the CI pipeline has one job: Build artifacts and update the manifest repo. It should never trigger a deployment.
Pro Tip: Use the "Pull Model." Your cluster (via ArgoCD) should reach out to the repo and pull changes. This allows you to lock down your cluster's inbound ports completely. No open API server to the internet.
The Workflow
- Dev: Pushes code to the Application Repo.
- CI: Runs tests, builds the Docker image, pushes to Registry.
- CI: Commits the new image tag to the Infrastructure Repo (k8s manifests), as sketched below.
- ArgoCD: Detects the change in the Infrastructure Repo and syncs the cluster.
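Here is a minimal sketch of that manifest-bump job in GitHub Actions. It assumes the image was already built and pushed in an earlier step, and that a token with push rights to the Infrastructure Repo exists; myorg/infrastructure, INFRA_REPO_TOKEN, and the overlay path are placeholders, not a prescription.

# .github/workflows/deploy.yml in the Application Repo (illustrative sketch)
name: build-and-bump
on:
  push:
    branches: [main]
jobs:
  bump-manifest:
    runs-on: ubuntu-latest
    steps:
      - name: Check out the Infrastructure Repo
        uses: actions/checkout@v4
        with:
          repository: myorg/infrastructure          # placeholder manifest repo
          token: ${{ secrets.INFRA_REPO_TOKEN }}    # placeholder PAT with push access
      - name: Pin the new image tag in the prod overlay
        # kustomize ships on GitHub-hosted Ubuntu runners; install it explicitly if yours differs
        run: |
          cd apps/overlays/prod
          kustomize edit set image ghcr.io/myorg/backend-api=ghcr.io/myorg/backend-api:${{ github.sha }}
      - name: Commit and push the manifest change
        run: |
          git config user.name "ci-bot"
          git config user.email "ci-bot@users.noreply.github.com"
          git commit -am "chore: deploy backend-api ${{ github.sha }}"
          git push

Note that the runner never talks to the Kubernetes API. Its blast radius, if compromised, is a Git commit you can revert.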
2. Structuring the Repository
Do not dump everything into one folder. I use a Kustomize-based structure that separates base configuration from environment overlays. This keeps code DRY (Don't Repeat Yourself).
apps/
├── base/
│   ├── deployment.yaml
│   ├── service.yaml
│   └── kustomization.yaml
└── overlays/
    ├── dev/
    │   ├── kustomization.yaml
    │   └── patch-replicas.yaml
    └── prod/
        ├── kustomization.yaml
        └── patch-resources.yaml
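The base layer stays environment-agnostic. A minimal sketch of apps/base/kustomization.yaml, matching the file names in the tree above:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
# Shared by every overlay: no replica counts, no resource limits, no image tags here
resources:
  - deployment.yaml
  - service.yaml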
Here is what a production kustomization.yaml looks like in 2024. Note the specific image tag pinning.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
namePrefix: prod-
# The CI pipeline only updates this block
images:
  - name: ghcr.io/myorg/backend-api
    newTag: "v2.4.1-sha256"
patches:
  - path: patch-resources.yaml
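The patch-resources.yaml it references is a plain strategic-merge patch. A sketch, assuming the base Deployment is named backend-api with a container called api (both placeholder names):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-api          # must match the name in base/deployment.yaml
spec:
  replicas: 3                # prod runs more replicas than the base default
  template:
    spec:
      containers:
        - name: api
          resources:
            requests:
              cpu: "500m"
              memory: "512Mi"
            limits:
              cpu: "1"
              memory: "1Gi"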
3. The Engine: ArgoCD Configuration
ArgoCD is the standard. It's robust and handles drift detection better than Flux for complex multi-cluster setups. However, out of the box it can be sluggish if your underlying VPS has high I/O wait (iowait). GitOps controllers are chatty; they are constantly polling Git endpoints and hammering the Kubernetes API server (and the etcd behind it) to compare desired state with live state.
This is where hardware selection becomes architectural. I run my control planes on CoolVDS NVMe instances. Why? Because standard SATA SSDs introduce latency that causes ArgoCD to time out during massive sync operations involving hundreds of resources. When you have 50 microservices trying to reconcile state simultaneously, you need NVMe IOPS.
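Hardware aside, the controller itself has knobs worth turning. Below is a sketch of the argocd-cmd-params-cm tuning I reach for on larger installs; the values are starting points, not gospel, and key availability varies by ArgoCD version, so treat this as an assumption to verify against your release's docs.

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cmd-params-cm
  namespace: argocd
  labels:
    app.kubernetes.io/part-of: argocd
data:
  controller.status.processors: "50"             # parallel status reconciliations
  controller.operation.processors: "25"          # parallel sync operations
  controller.repo.server.timeout.seconds: "120"  # more headroom for big monorepos
  reposerver.parallelism.limit: "10"             # cap concurrent manifest generations

These values are read at startup, so restart the application controller and repo server after changing them.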
Here is the Application manifest you should apply to the cluster (the one manual kubectl apply you should still be running):
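A minimal sketch, assuming the manifest repo lives at github.com/myorg/infrastructure (a placeholder URL) and uses the prod overlay path from the tree above:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: backend-api-prod
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/myorg/infrastructure.git   # placeholder repo
    targetRevision: main
    path: apps/overlays/prod
  destination:
    server: https://kubernetes.default.svc   # the local cluster
    namespace: backend
  syncPolicy:
    automated:
      prune: true       # delete resources that disappear from Git
      selfHeal: true    # revert manual drift back to the Git state
    syncOptions:
      - CreateNamespace=true

With selfHeal enabled, anyone who does sneak a manual kubectl apply past you will watch their change get reverted on the next reconciliation.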