GitOps Workflows in 2025: Stop 'kubectl applying' Into Production

I still remember the silence on a Zoom call three years ago. A junior dev had just run kubectl apply -f . from their laptop, accidentally reverting a critical hotfix we had patched directly onto the cluster an hour earlier. The payment gateway for a major Oslo retailer went dark for 14 minutes. It was the kind of silence that gets people fired.

If you are still allowing manual interaction with your cluster API server in 2025, you are not managing infrastructure; you are gambling. The industry standard has shifted aggressively toward GitOps not because it's trendy, but because it provides an immutable audit trail, something the Datatilsynet (Norwegian Data Protection Authority) tends to appreciate when they come knocking.

Here is how to build a GitOps workflow that actually works, designed for the high-compliance, high-stability requirements of the Nordic market.

The Architecture: Pull vs. Push

Forget CI-driven deployments. Pushing directly from Jenkins or GitHub Actions to your Kubernetes cluster opens a security hole; you have to give your CI runner cluster-admin credentials. That is a massive attack surface.

We use the Pull Model. The cluster pulls its own state. The agent sits inside the infrastructure.

  • Source of Truth: Git (GitHub/GitLab).
  • Controller: ArgoCD (running inside the cluster).
  • Templating: Kustomize (built into kubectl, no extra templating runtime).
  • Infrastructure: CoolVDS NVMe-backed Instances (because etcd latency matters).

Pro Tip: When hosting your Kubernetes control plane, disk I/O latency is the silent killer. If etcd WAL fsyncs routinely take longer than 10ms, etcd starts logging slow-request warnings and your API server begins timing out. This is why we standardize on local NVMe storage at CoolVDS, ensuring sub-millisecond write latency that network storage often can't guarantee.
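
You can verify this on your own control plane; etcd exposes the numbers directly. A quick sketch, assuming a kubeadm-style layout where the etcd certificates live under /etc/kubernetes/pki/etcd (adjust endpoints and paths for your environment):

# Built-in performance check via etcdctl (run on a control-plane node)
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  check perf

# Or inspect the WAL fsync latency histogram straight from the metrics endpoint
curl -s --cacert /etc/kubernetes/pki/etcd/ca.crt \
  --cert /etc/kubernetes/pki/etcd/server.crt \
  --key /etc/kubernetes/pki/etcd/server.key \
  https://127.0.0.1:2379/metrics | grep etcd_disk_wal_fsync_duration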

Repository Structure: The Monorepo Strategy

Don't overcomplicate this. For most teams operating in Europe, a clean monorepo for infrastructure manifests is superior to scattering configs across microservice repos. It simplifies access control.


├── apps/
│   ├── base/
│   │   └── nginx-ingress/
│   └── overlays/
│       ├── prod-oslo/
│       └── stage-frankfurt/
├── clusters/
│   ├── coolvds-norway-01/
│   └── coolvds-germany-01/
└── tenants/
    ├── team-backend/
    └── team-frontend/
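
To make the base/overlay split concrete, here is a minimal overlay sketch; the resource path and patch file name are illustrative, not prescriptive:

# apps/overlays/prod-oslo/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: ingress-nginx
resources:
  - ../../base/nginx-ingress
patches:
  - path: replica-count.yaml   # production-only tweaks, e.g. more replicas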

The Engine: ArgoCD Configuration

Installing ArgoCD is trivial. Configuring it for production requires discipline. We declare the ArgoCD Application itself as code (App-of-Apps pattern). This ensures that even your CD pipeline is versioned.

Here is a production-ready Application manifest that utilizes the directory structure above:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: production-cluster-boot
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'git@github.com:your-org/infra-monorepo.git'
    targetRevision: HEAD
    path: clusters/coolvds-norway-01
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
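
Bootstrapping is the one legitimate manual apply: install ArgoCD once, hand it this root Application, and put kubectl away. A minimal sequence, assuming the upstream stable install manifest (pin a concrete version in production):

kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
kubectl apply -n argocd -f production-cluster-boot.yaml   # the Application manifest above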

The selfHeal: true flag is the magic. If someone manually changes a Service type from ClusterIP to NodePort to "test something quick," ArgoCD detects the drift and reverts it immediately. Brutal? Yes. Necessary? Absolutely.
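
You can watch the drift correction happen. With the argocd CLI logged in to your instance, two commands show what diverged and what is currently deployed:

# Show the delta between live cluster state and Git (empty output = no drift)
argocd app diff production-cluster-boot

# Sync status, health, and the revision currently deployed
argocd app get production-cluster-boot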

Handling Secrets (GDPR Compliance)

You cannot commit secrets.yaml to Git. If you do, your repository is compromised forever. In 2025, the debate is settled: use External Secrets Operator (ESO). It fetches secrets from a secure vault (like Vault, AWS SSM, or Azure KeyVault) and injects them as native Kubernetes secrets.
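
For reference, an ExternalSecret is just another manifest in Git; only the pointer to the backend is committed, never the value. A minimal sketch, where the store name vault-backend, the payments namespace, and the prod/db path are assumptions:

apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials
  namespace: payments
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: ClusterSecretStore
    name: vault-backend          # assumed store name
  target:
    name: db-credentials         # the native Kubernetes Secret ESO creates
  data:
    - secretKey: password
      remoteRef:
        key: prod/db             # path in your backend (Vault/SSM/KeyVault)
        property: password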

However, for smaller, self-hosted setups on CoolVDS where you might not want the overhead of HashiCorp Vault, Sealed Secrets by Bitnami is the pragmatic choice. You encrypt the secret with the cluster's public key, and only the controller inside the cluster can decrypt it.

Workflow for Sealed Secrets:

# 1. Developer creates a raw secret locally (never committed)
echo -n "super-secure-db-password" > db-pass.txt
kubectl create secret generic db-credentials --from-file=password=db-pass.txt --dry-run=client -o yaml > secret.yaml

# 2. Encrypt it using the cluster's public key (the sealed file is safe to commit)
kubeseal --cert=pub-cert.pem -o yaml < secret.yaml > sealed-secret.yaml

# 3. Commit only the sealed secret; delete the plaintext artifacts
rm db-pass.txt secret.yaml
git add sealed-secret.yaml && git commit -m "Add db credentials"
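
The pub-cert.pem used above comes from the controller itself. Fetch it once and keep it next to the manifests; it is public key material, so committing it is safe. The flags below assume the default controller name and namespace from the upstream install:

kubeseal --fetch-cert \
  --controller-name=sealed-secrets-controller \
  --controller-namespace=kube-system > pub-cert.pem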

The CI/CD Handshake

Your CI pipeline (GitHub Actions/GitLab CI) should do exactly two things: build container images and update the manifests in Git. It should never touch the cluster.

Here is a sanitized GitHub Actions workflow step that updates the image tag in the Git repo, triggering ArgoCD to sync:

name: Update Image Tag
run: |
  git config user.name "CI Bot"
  git config user.email "ci@coolvds.com"
  cd apps/overlays/prod-oslo
  kustomize edit set image my-app=registry.coolvds.com/app:${{ github.sha }}
  git add kustomization.yaml
  git commit -m "Bump image tag to ${{ github.sha }}"
  git push origin main

Infrastructure Performance & Latency

GitOps relies heavily on the control plane. The ArgoCD application controller constantly compares the live state (etcd) with the desired state (Git). This generates significant CPU load and network calls.

I have seen ArgoCD crash-loop on budget VPS providers because "noisy neighbors" were stealing CPU cycles. When the reconciliation loop lags, your deployments stall.

This is where hardware choice becomes architectural. On CoolVDS, we isolate CPU cores and provide dedicated NVMe throughput. For a production GitOps setup in Norway, latency to the NIX (Norwegian Internet Exchange) matters. If your Git repo is hosted in the US and your cluster is in Oslo, the reconciliation latency adds up. Hosting your container registry and mirrors on local, low-latency infrastructure reduces the "time-to-sync" from minutes to seconds.

Metric                       Standard VPS      CoolVDS (High Perf)
Disk IOPS (Random Write)     ~500-1,000        ~10,000+
Reconciliation Loop Delay    High variance     Consistent
Network Latency (Oslo)       15-30 ms          <2 ms
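
If polling latency is the bottleneck, you can also tune how often ArgoCD re-reads Git. A sketch of the relevant argocd-cm setting; 180s is the upstream default, and a Git webhook is usually the better long-term fix than aggressive polling:

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
  labels:
    app.kubernetes.io/part-of: argocd
data:
  timeout.reconciliation: 60s   # how often the repo is polled for changes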

Final Thoughts

GitOps is not a tool; it is a discipline. It moves the complexity from the deployment script to the architecture definition. It creates a history of your infrastructure that is as readable as your code.

But software discipline requires hardware reliability. A GitOps operator that cannot talk to the API server because of I/O wait times is useless. Ensure your foundation is solid.

Ready to stabilize your pipeline? Spin up a CoolVDS instance in our Oslo datacenter today and experience the difference dedicated resources make for your Kubernetes control plane.