GitOps in 2022: Stop kubectl apply-ing Your Way to Disaster

Stop treating your production cluster like a pet.

If you are still SSHing into servers to tweak nginx.conf or running kubectl apply -f from your laptop in 2022, you are a ticking time bomb. I’ve seen it happen too many times: a senior engineer hot-fixes a production issue manually, forgets to commit the change to Git, and three weeks later, the CI pipeline overwrites the fix. Downtime ensues. Panic follows.

This is the reality of Configuration Drift. In the high-stakes environment of European tech, where GDPR fines from Datatilsynet can obliterate your margins, you cannot afford ambiguity. The state of your infrastructure must match the state of your repository. Bit for bit.

Enter GitOps. It’s not just a buzzword; it’s the only sane way to manage Kubernetes at scale. Today, we break down a battle-tested workflow using ArgoCD, Kustomize, and robust infrastructure.

The Architecture: Pull vs. Push

Traditional CI/CD pushes changes. Your Jenkins or GitLab runner has a KUBECONFIG file with admin access to your cluster. This is a security nightmare. If your CI gets breached, your cluster is gone.

GitOps flips this. It uses a Pull Model. The cluster has an agent (like ArgoCD) running inside it. It watches the Git repository. When it sees a change, it pulls the manifest and applies it. No external admin credentials required.

The Directory Structure That Won't Make You Cry

Don't dump everything into root. I recommend a repository structure that separates base configuration from environment overlays. This leverages Kustomize (built into kubectl since v1.14) to keep things DRY (Don't Repeat Yourself).

├── base
│   ├── deployment.yaml
│   ├── service.yaml
│   └── kustomization.yaml
└── overlays
    ├── dev
    │   ├── kustomization.yaml
    │   └── replica_patch.yaml
    └── prod
        ├── kustomization.yaml
        └── resource_limits.yaml
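To make the overlay mechanics concrete, here is a minimal sketch of the two kustomization.yaml files (field names match Kustomize as bundled with kubectl in 2022; the patch file names mirror the tree above):

```yaml
# base/kustomization.yaml
resources:
  - deployment.yaml
  - service.yaml

---
# overlays/prod/kustomization.yaml
resources:
  - ../../base
patchesStrategicMerge:
  - resource_limits.yaml   # prod-only CPU/memory limits layered on top of base
```

You can preview what the cluster will receive with `kustomize build overlays/prod` (or `kubectl kustomize overlays/prod`) before ArgoCD ever touches it.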

Implementing the Reconciliation Loop

Let’s look at how we actually tell ArgoCD to manage this. You don't click buttons in the UI; you define the Application as code. You can even make this recursive with the "App of Apps" pattern: a root Application whose source path contains nothing but more Application manifests.

Here is a production-grade Application manifest. Note the sync policies.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: payment-gateway-prod
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'git@github.com:your-org/infra-repo.git'
    targetRevision: HEAD
    path: overlays/prod
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: payments
  syncPolicy:
    automated:
      prune: true    # Deletes resources that are no longer in Git
      selfHeal: true # Reverts manual changes made via kubectl
    syncOptions:
      - CreateNamespace=true

Pro Tip: Enable selfHeal. If a junior admin tries to manually scale a deployment, ArgoCD will immediately revert it to the state defined in Git. Ruthless? Yes. Necessary? Absolutely.
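The "App of Apps" root looks almost identical; a hedged sketch, assuming your child Application manifests (like the one above) live under an `apps/` directory in the same repo:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: root
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'git@github.com:your-org/infra-repo.git'
    targetRevision: HEAD
    path: apps   # directory containing child Application manifests
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: argocd   # child Applications are themselves argocd resources
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```

Bootstrap the cluster once with this root app, and every new Application is just another file in `apps/` plus a Git commit.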

The Elephant in the Room: Secrets Management

You cannot commit raw secrets to Git. If you do, consider your system compromised. In 2022, the standard for this is Bitnami Sealed Secrets or HashiCorp Vault. For most mid-sized setups, Sealed Secrets is more pragmatic.

It uses asymmetric encryption. You encrypt the secret with a public key (safe to commit). Only the controller running inside your cluster (on CoolVDS) has the private key to decrypt it.

# Install the client-side tool
brew install kubeseal

# Generate the raw secret locally with --dry-run (it never touches the cluster;
# "db-creds" and the literal value are placeholders)
kubectl create secret generic db-creds \
  --dry-run=client --from-literal=password=s3cr3t -o json > my-secret.json

# Encrypt it against the controller's public key
kubeseal --format=yaml --cert=pub-cert.pem < my-secret.json > sealed-secret.yaml

Security Note: Under GDPR (and specifically the fallout from Schrems II), data sovereignty is paramount. If your Git provider is US-based (GitHub/GitLab), encrypted secrets are generally fine, but your decrypted runtime data must reside on sovereign soil. This is why we host our Kubernetes nodes on CoolVDS in Norway. The private keys never leave the jurisdiction.
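For reference, the sealed-secret.yaml that kubeseal emits looks roughly like this (the name, namespace, and ciphertext are illustrative). Only the `encryptedData` ciphertext is stored, so the file is safe to commit:

```yaml
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: db-creds
  namespace: payments
spec:
  encryptedData:
    password: AgBy3i...   # asymmetric ciphertext; only the in-cluster controller can decrypt
  template:
    metadata:
      name: db-creds
      namespace: payments
```

Once applied, the controller decrypts it and materializes an ordinary Kubernetes Secret of the same name in that namespace.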

Infrastructure Matters: The Etcd Bottleneck

GitOps controllers are chatty. They constantly poll Git and query the Kubernetes API server to diff the state. The API server, in turn, pounds the etcd database.

If you run this on a budget VPS with spinning disks or noisy-neighbor SSDs, you will see CrashLoopBackOff errors on your controller. Etcd requires extremely low write latency. If fsync takes too long, the cluster leader election fails.

This is where hardware choice becomes an architectural decision, not just a billing one.
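Before blaming the controller, measure the disk. etcd's own guidance is that 99th-percentile WAL fsync should stay below roughly 10 ms. A crude probe with dd is sketched below (fio gives proper percentiles if you have it; TARGET_DIR is an assumption, point it at a directory on the disk that actually backs etcd, e.g. /var/lib/etcd):

```shell
#!/bin/sh
# Crude fsync latency probe: write 100 x 8 KiB blocks, syncing each one
# (oflag=dsync forces a synchronized write per block, GNU dd on Linux).
TARGET_DIR="${TARGET_DIR:-/tmp}"   # /tmp is just a safe default for illustration
dd if=/dev/zero of="$TARGET_DIR/fsync-probe" bs=8k count=100 oflag=dsync 2>&1 | tail -n 1
rm -f "$TARGET_DIR/fsync-probe"
```

Divide the elapsed time dd reports by 100 for a rough per-fsync latency; anything drifting toward double-digit milliseconds means etcd will struggle on that volume.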

Optimizing Etcd on CoolVDS

We use CoolVDS because they expose raw NVMe performance through KVM. Unlike container-based VPS (OpenVZ/LXC) where IOPS are often pooled and choked, KVM gives us the dedicated throughput needed for a healthy control plane.

When provisioning your node, ensure you tweak your I/O scheduler:

# Check current scheduler
cat /sys/block/vda/queue/scheduler
[mq-deadline] none

# Switch to none for NVMe and let the device reorder (needs root;
# plain `sudo echo ... >` won't work because the redirection runs unprivileged)
echo none | sudo tee /sys/block/vda/queue/scheduler

This reduces CPU overhead by bypassing the OS reordering logic, trusting the NVMe controller's internal logic, which is crucial when you are pushing thousands of GitOps reconciliation loops per hour.
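Note that the echo above does not survive a reboot. The usual way to make the scheduler choice persistent is a udev rule; a sketch (the filename and device patterns are illustrative, adjust `vd*`/`nvme*` to match your block devices):

```
# /etc/udev/rules.d/60-io-scheduler.rules
ACTION=="add|change", KERNEL=="vd[a-z]|nvme[0-9]n[0-9]", ATTR{queue/scheduler}="none"
```

After dropping the file in place, `udevadm control --reload && udevadm trigger` applies it without a reboot.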

The