GitOps in Production: Stop `kubectl apply` Before You Wreck Your Cluster

The "Works on My Machine" Era is Over. It's Time for Immutable State.

I still remember the silence in the Slack channel. It was 2019, and a senior engineer had just accidentally applied a staging configuration to the production cluster. Why? Because his kubeconfig context was pointed at the wrong API server. He ran `kubectl apply -f deployment.yaml`, and within seconds, our Nordic e-commerce client lost their payment gateway integration. That outage cost thousands of kroner per minute.

If you are SSH-ing into servers to pull code, or running `kubectl` commands from your laptop to deploy updates, you are operating on borrowed time. The only way to guarantee stability, auditability, and sanity is GitOps.

This isn't just about automation; it's about making your infrastructure boring. Predictable. In this architecture breakdown, we are going to build a workflow where the state of your cluster strictly reflects the state of a Git repository. No drift. No cowboy edits.

The Architecture: Pull vs. Push

Traditional CI/CD pushes changes: the CI runner builds the Docker image and then runs a command to update the cluster. This is flawed. It requires giving your CI tool god-mode access to your Kubernetes cluster. If your CI gets compromised, your infrastructure is gone.

We use the Pull Model (GitOps). An agent inside the cluster (like ArgoCD) watches the Git repo. When it sees a change, it pulls it down and applies it. The cluster protects itself.

The Stack (March 2024 Standards)

  • Orchestration: Kubernetes 1.28/1.29
  • CD Controller: ArgoCD v2.10
  • Secret Management: Bitnami Sealed Secrets or Mozilla SOPS
  • Infrastructure: CoolVDS NVMe Instances (KVM-based)
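
Before any of this works, ArgoCD itself needs to be running in the cluster. A minimal bootstrap, assuming the standard upstream install manifests (pin a release tag like v2.10.0 in production instead of `stable`):

# Install the ArgoCD controller, API server, and repo server
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml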

1. Structuring the Repository

Do not mix application source code with infrastructure manifests. I’ve seen teams try it, and it always results in a loop of CI pipelines triggering themselves. Use a separate `infra` repository.

Here is the directory structure I enforce on every project:


├── apps/
│   ├── base/
│   │   └── guestbook/
│   │       ├── deployment.yaml
│   │       ├── service.yaml
│   │       └── kustomization.yaml
│   └── overlays/
│       ├── staging/
│       │   ├── kustomization.yaml
│       │   └── patch-replicas.yaml
│       └── production/
│           ├── kustomization.yaml
│           └── patch-resources.yaml
├── cluster-config/
│   ├── namespaces.yaml
│   └── rbac.yaml
└── bootstrap/
    └── root-app.yaml

This structure leverages Kustomize. The `base` folder holds the common manifests; the `overlays` folders patch them per environment. For example, in staging you might run a single replica to save costs on your Norwegian VPS hosting, while production needs 5 replicas with anti-affinity rules for high availability.
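
To make that concrete, here is a minimal sketch of the staging overlay. The file names follow the tree above; the Deployment name and replica count are illustrative:

# apps/overlays/staging/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: staging
resources:
  - ../../base/guestbook
patches:
  - path: patch-replicas.yaml

# apps/overlays/staging/patch-replicas.yaml
# Strategic-merge patch: only the fields listed here override the base
apiVersion: apps/v1
kind: Deployment
metadata:
  name: guestbook
spec:
  replicas: 1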

2. The Deployment Pipeline

Your Application CI (GitHub Actions/GitLab CI) should do exactly one thing regarding deployment: update the image tag in the `infra` repo. It should not touch the cluster.

Here is a stripped-down GitHub Actions workflow that handles this safely using `kustomize edit`:


name: Update Image Tag

on:
  push:
    branches:
      - main

jobs:
  update-manifests:
    runs-on: ubuntu-latest
    steps:
      # Check out the infra repo, not the app repo; the PAT needs write access to it
      - uses: actions/checkout@v4
        with:
          repository: my-org/infra-repo
          token: ${{ secrets.PAT }}

      - name: Update Image Tag
        run: |
          git config user.name "CI Bot"
          git config user.email "ci@coolvds.com"
          # Update the kustomization file with the new SHA
          cd apps/overlays/staging
          kustomize edit set image my-app=my-registry.com/app:${{ github.sha }}
          git add .
          # "[skip ci]" stops this commit from triggering pipelines on the infra repo
          git commit -m "Bump image to ${{ github.sha }} [skip ci]"
          git push

Once this commit lands, ArgoCD detects the divergence between the desired state (Git) and the live state (Cluster) and initiates a sync.
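
If you want to watch the rollout instead of trusting it blindly, the `argocd` CLI (assuming you are logged in to your ArgoCD instance) exposes the sync state directly:

# Show the current sync and health status of the app
argocd app get guestbook-staging

# Block until the app is synced and healthy; handy in smoke-test jobs
argocd app wait guestbook-staging --health --timeout 300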

3. Handling Secrets in Norway (GDPR & Compliance)

You cannot commit `secrets.yaml` to Git. If PII is involved, that is a security violation that can draw fines from Datatilsynet. However, managing secrets manually breaks the "Git as Source of Truth" philosophy.

The solution is Sealed Secrets. You encrypt the secret on your local machine using a public key exposed by the controller running in your cluster. Only the controller can decrypt it.

Step 1: Install the client

brew install kubeseal

Step 2: Seal a secret

kubectl create secret generic db-creds \
  --from-literal=password=SuperSecret123 \
  --dry-run=client -o yaml |
kubeseal --controller-name=sealed-secrets-controller \
  --format=yaml > sealed-secret.yaml

Now, `sealed-secret.yaml` is safe to commit. It contains encrypted data that is useless to anyone without the private key, which lives only inside the controller running on your CoolVDS instance.
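
For reference, the sealed output looks roughly like this; the ciphertext below is a truncated placeholder, not real output:

apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: db-creds
  namespace: default
spec:
  encryptedData:
    # Only the controller's private key can decrypt this blob
    password: AgBy3i4OJSWK8rKV1dG9yZSBpdCBvZmZsaW5l...
  template:
    metadata:
      name: db-creds
      namespace: default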

Pro Tip: Back up the master key of your Sealed Secrets controller! If your cluster dies and you restore the Git repo to a new cluster, you cannot decrypt those secrets without the original private key. Store it in an offline, air-gapped location.
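
A minimal sketch of that backup, assuming the controller runs in kube-system with its default labels (adjust the namespace if you installed it elsewhere):

# Export the sealing keypair; store the file offline, never in Git
kubectl get secret -n kube-system \
  -l sealedsecrets.bitnami.com/sealed-secrets-key \
  -o yaml > sealed-secrets-master-key.yaml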

4. The Infrastructure Factor: Why Latency Matters

GitOps is chatty. ArgoCD constantly polls your Git repositories. If your hosting provider has poor peering or unstable DNS resolvers, you will see "Unknown" states and sync failures. This is where the hardware underlying your virtualization matters.

Many providers oversell their CPU, leading to "steal time." When ArgoCD tries to reconcile a complex dependency tree, CPU steal can cause timeouts. We specifically configured CoolVDS environments with KVM isolation and NVMe storage to handle the high I/O operations of etcd and the constant reconciliation loops of GitOps controllers.
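
If you need to tune how hard ArgoCD hammers your Git host, the polling interval lives in the `argocd-cm` ConfigMap; a sketch, assuming the default `argocd` namespace:

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
  labels:
    app.kubernetes.io/part-of: argocd
data:
  # How often ArgoCD re-polls Git for changes (default: 180s)
  timeout.reconciliation: 300s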

Furthermore, if your development team is in Oslo or Bergen, hosting your GitOps controller in a datacenter in Frankfurt or US-East adds unnecessary latency to every `kubectl` command and dashboard load. Keeping the control plane local (within the Nordics) ensures the interface feels snappy.

Comparison: Push vs. Pull Performance

| Feature | CI Push (Jenkins/GitLab) | GitOps Pull (ArgoCD) |
| --- | --- | --- |
| Cluster access security | High risk (CI has admin access) | Secure (cluster pulls changes) |
| Drift detection | None (manual checks required) | Instant (auto-correction) |
| Disaster recovery | Complex (re-run all jobs) | Fast (`kubectl apply -f bootstrap/`) |

5. Defining the Application

Finally, we tell ArgoCD to manage the app. This manifest lives in the cluster (or a bootstrap repo).


apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook-staging
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/my-org/infra-repo.git
    targetRevision: HEAD
    path: apps/overlays/staging
  destination:
    server: https://kubernetes.default.svc
    namespace: staging
  syncPolicy:
    automated:
      prune: true      # delete resources that were removed from Git
      selfHeal: true   # revert manual changes made directly on the cluster

Note the `selfHeal: true` flag. If someone manually changes the replica count on the server, ArgoCD will immediately revert it back to what is defined in Git. This enforces discipline.
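
Self-healing does collide with controllers that legitimately mutate live state, such as a HorizontalPodAutoscaler adjusting replica counts. ArgoCD's `ignoreDifferences` exempts specific fields from the diff; a sketch, added to the same Application spec as above:

spec:
  ignoreDifferences:
    - group: apps
      kind: Deployment
      # Let the HPA own the replica count without flagging the app OutOfSync
      jsonPointers:
        - /spec/replicas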

Conclusion

Implementing GitOps is not just a technical upgrade; it is a cultural shift. It forces your team to document every infrastructure change in code. It makes audits trivial. And most importantly, it lets you sleep at night knowing your production environment hasn't drifted into an unknown state.

However, a robust GitOps workflow requires a robust foundation. You need high-performance compute to run the controllers and low-latency connectivity to ensure your syncs are instant. Don't build a modern castle on a swamp.

Ready to stabilize your stack? Deploy a KVM-based CoolVDS instance today and spin up your ArgoCD controller in an environment built for performance.