Stop Using kubectl apply: A Battle-Tested GitOps Workflow for High-Stakes Environments

If you are still SSH-ing into your production cluster to hotfix a ConfigMap, you have already failed. I don't care if it's a "quick fix" for a client in Bergen who needs their site up by lunch. Manual intervention is the root cause of configuration drift, and drift is the silent killer of stability. In the unforgiving landscape of systems administration, hope is not a strategy. Git is.

I've spent the last decade cleaning up "quick fixes" that turned into weekend-long outages. The only way to guarantee consistency across environments, whether you're hosting in a basement in Trondheim or on enterprise infrastructure, is GitOps. This isn't about jumping on a trend. It's about sleep preservation. It is about ensuring that the state of your infrastructure in Git matches the state of your infrastructure in reality, byte for byte.

The Architecture of Truth

GitOps flips the traditional CI/CD push model on its head. Instead of your CI server pushing changes to the cluster (which requires giving your CI server god-mode access to production, a terrifying security risk), you have an agent inside the cluster pulling changes. The cluster reconciles itself.
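Concretely, the desired state is declared once as an Application object in Git, and the in-cluster agent converges the cluster toward it. A minimal sketch (the app name, repo URL, and path are illustrative):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app                      # illustrative name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/devops/infra-manifests.git  # your manifest repo
    targetRevision: HEAD
    path: apps/my-app/overlays/prod
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      prune: true     # remove resources that were deleted from Git
      selfHeal: true  # revert out-of-band kubectl edits (this is what kills drift)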

Pro Tip: The Norwegian Data Protection Authority (Datatilsynet) looks very kindly on architectures where deployment access is audit-logged via Git commits. It makes proving who changed what, and when, trivial during a GDPR audit.

The Tooling Stack (Late 2023 Edition)

For this architecture, we are looking at the industry standards as of December 2023:

  • Orchestrator: Kubernetes 1.28+
  • GitOps Controller: ArgoCD v2.9 (the UI is superior to Flux's for team visibility)
  • Config Management: Kustomize (native, none of Helm's templating headaches)
  • Infrastructure: CoolVDS NVMe KVM Instances (High IOPS are critical for etcd performance)

Repository Structure: The Monorepo vs. Polyrepo Debate

Don't overcomplicate this. For most teams operating in the Nordic mid-market, a split approach works best. Keep application source code in separate repos, but keep your manifests in a dedicated infrastructure monorepo.

Here is the directory structure I enforce on every project:

.
├── apps
│   ├── payment-service
│   │   ├── base
│   │   └── overlays
│   │       ├── dev
│   │       └── prod
│   └── frontend
├── cluster-config
│   ├── rbac
│   └── policies
└── infrastructure
    ├── ingress-nginx
    └── cert-manager
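The base directory holds the environment-agnostic manifests; each overlay carries only the delta. As a sketch, assuming the container is registered in the base under the image name app (the same name the CI workflow further down rewrites), apps/payment-service/overlays/prod/kustomization.yaml might look like this:

# apps/payment-service/overlays/prod/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

# Inherit everything from the shared base
resources:
  - ../../base

# Production-only adjustments (values are illustrative)
namespace: payment-service
replicas:
  - name: payment-service
    count: 3

# The CI pipeline rewrites newTag on every merge to main
images:
  - name: app
    newName: registry.coolvds.com/app
    newTag: latest   # placeholder; CI sets the real commit SHA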

Bootstrapping the Cluster on CoolVDS

Before we layer on the GitOps magic, the metal matters. GitOps controllers like ArgoCD are chatty. They constantly poll Git and query the Kubernetes API server. If your underlying storage is running on spinning rust or over-provisioned shared storage, you will see reconciliation lag. I've seen `etcd` latency spike to 40ms on budget providers, causing the cluster leader to flap.

We use CoolVDS because KVM guarantees resource isolation. When I say I want 4 vCPUs, I get them. I don't share them with a crypto-miner next door. Here is how we provision the base layer using Terraform, ensuring we hit the Oslo-adjacent zones for minimal latency:

resource "coolvds_instance" "k8s_control_plane" {
  name              = "k8s-cp-01"
  region            = "no-oslo-1" # Low latency for Nordic user base
  image             = "debian-12"
  plan              = "cpu-optimized-4c-8g"
  
  # NVMe is mandatory for etcd stability
  storage_type      = "nvme"
  disk_size         = 80

  ssh_keys = [
    var.admin_ssh_key
  ]

  network_security_group_ids = [coolvds_security_group.k8s_api.id]

  user_data = <<-EOF
    #!/bin/bash
    # Disable swap for K8s compliance
    swapoff -a
    sed -i '/ swap / s/^/#/' /etc/fstab
    
    # Tuning network for high throughput
    cat <<SYSCTL > /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-iptables  = 1
    net.ipv4.ip_forward                 = 1
    net.ipv6.conf.all.forwarding        = 1
    SYSCTL
    sysctl --system
  EOF
}

Implementing ArgoCD

Once the cluster is humming, we install ArgoCD. But we don't just kubectl apply the manifest manually and walk away. We bootstrap it, and then let ArgoCD manage itself.
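One way to close that loop is to vendor the ArgoCD install manifests into the monorepo and point an Application at them, so every future ArgoCD upgrade is just a Git commit. A sketch, assuming the manifests live under infrastructure/argocd (an illustrative path following the tree above):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: argocd
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.coolvds-internal.com/devops/infra-manifests.git
    targetRevision: HEAD
    path: infrastructure/argocd   # vendored install manifests plus our config patches
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

After the initial one-off apply to bootstrap the controller, ArgoCD tracks its own manifests like any other application.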

The ApplicationSet Pattern

Managing individual Application manifests is tedious. Use an ApplicationSet to automatically discover and deploy applications defined in your git monorepo. This is how you scale from 5 microservices to 50 without hiring more staff.

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: production-apps
  namespace: argocd
spec:
  generators:
  - git:
      repoURL: https://git.coolvds-internal.com/devops/infra-manifests.git
      revision: HEAD
      directories:
      - path: apps/*
  template:
    metadata:
      name: '{{path.basename}}'
    spec:
      project: default
      source:
        repoURL: https://git.coolvds-internal.com/devops/infra-manifests.git
        targetRevision: HEAD
        path: '{{path}}'
      destination:
        server: https://kubernetes.default.svc
        namespace: '{{path.basename}}'
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
        syncOptions:
          - CreateNamespace=true

The "Secrets" Problem (Schrems II & GDPR)

You cannot check secrets into Git. This is rule number one. In a post-Schrems II world, where data sovereignty is paramount, accidentally pushing a database credential to a public (or even shared private) repo is a reportable incident.

My preferred approach in 2023 is Sealed Secrets by Bitnami. It uses asymmetric cryptography. You can commit the encrypted secret safely. The private key lives only inside the controller on your cluster.

Step 1: Install the client

brew install kubeseal

Step 2: Seal a secret

kubectl create secret generic db-creds \
  --from-literal=password=SuperSecure123 \
  --dry-run=client -o yaml \
  | kubeseal \
      --controller-name=sealed-secrets-controller \
      --controller-namespace=kube-system \
      --format=yaml > db-creds-sealed.yaml

The resulting db-creds-sealed.yaml is safe to commit. It is useless without the private key residing on your CoolVDS instance.
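The sealed output is just another custom resource. Roughly (ciphertext truncated; the namespace defaults to whatever your current kubectl context points at):

apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: db-creds
  namespace: default
spec:
  encryptedData:
    # Opaque ciphertext; only the controller's private key can decrypt it
    password: AgBy8hT...   # truncated for readability
  template:
    metadata:
      name: db-creds
      namespace: default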

CI/CD: The Glue

Your CI pipeline (GitHub Actions or GitLab CI) should not touch the cluster. Its job is to build the Docker image, run tests, and if successful, update the image tag in the Git repository. ArgoCD detects the new tag and syncs.

Here is a GitHub Actions workflow that builds the image and pushes the manifest update back to Git:

name: Build and Update Manifest

on:
  push:
    branches:
      - main

jobs:
  build-push:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Code
        uses: actions/checkout@v4

      - name: Docker Build & Push
        # Assumes the runner is already authenticated to registry.coolvds.com
        # (e.g. via a docker/login-action step or pre-configured credentials)
        run: |
          docker build -t registry.coolvds.com/app:${{ github.sha }} .
          docker push registry.coolvds.com/app:${{ github.sha }}

      - name: Checkout Manifest Repo
        uses: actions/checkout@v4
        with:
          repository: my-org/infra-manifests
          token: ${{ secrets.GIT_PAT }}
          path: infra

      - name: Patch Kustomization
        working-directory: infra/apps/payment-service/overlays/prod
        run: |
          kustomize edit set image app=registry.coolvds.com/app:${{ github.sha }}
          git config user.name "CI Bot"
          git config user.email "ci@coolvds.com"
          git add kustomization.yaml
          git commit -m "chore(deploy): update image to ${{ github.sha }}"
          git push

Performance Tuning for the Nordics

Latency kills user experience. If your users are in Oslo, your servers should not be in Frankfurt if you can avoid it. But beyond geography, configuration matters.

When using the Nginx Ingress Controller, standard defaults are too conservative for modern hardware. Since CoolVDS provides high-bandwidth interfaces, we need to tune the kernel and Nginx to make use of them.

In your configmap for Nginx:

data:
  worker-processes: "4"   # Match your CoolVDS vCPU count
  keep-alive: "65"
  upstream-keepalive-connections: "100"
  use-forwarded-headers: "true"

Monitoring Reconciliation Latency

You need to know if ArgoCD is falling behind. I alert on the reconciliation duration histogram exposed by the application controller; slow syncs are usually a sign of I/O choking on the etcd volume. The p95 over the last five minutes is the number to watch (check which metric name your ArgoCD release exposes):

histogram_quantile(0.95, sum(rate(argocd_app_reconcile_bucket[5m])) by (le))
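Wrapped into an alerting rule, here is a sketch assuming you run the Prometheus Operator and scrape the argocd-application-controller metrics endpoint:

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: argocd-reconcile-latency
  namespace: argocd
spec:
  groups:
    - name: argocd
      rules:
        - alert: ArgoCDSlowReconciliation
          # Fire when p95 reconciliation time stays above 10s for 15 minutes
          expr: |
            histogram_quantile(0.95,
              sum(rate(argocd_app_reconcile_bucket[5m])) by (le)
            ) > 10
          for: 15m
          labels:
            severity: warning
          annotations:
            summary: "ArgoCD reconciliation is slow; check etcd/NVMe I/O latency"

If that alert fires regularly, look at disk latency before blaming ArgoCD.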

Conclusion

GitOps is not just a workflow; it's a contract between you and your infrastructure. It enforces discipline. By combining the declarative power of ArgoCD with the raw, consistent performance of CoolVDS NVMe instances, you build a platform that withstands the chaos of production.

Stop fixing things manually. Commit the change, merge the PR, and let the automation do the heavy lifting. If you are ready to migrate your Kubernetes workloads to a platform that respects the physics of I/O and the laws of data sovereignty, it is time to look at your hosting provider.

Don't let slow storage bottleneck your reconciliation loops. Deploy a high-performance KVM instance on CoolVDS today and feel the difference.