Stop Using kubectl apply: Building a Bulletproof GitOps Workflow in 2022

If you are still SSH-ing into your production server to run git pull or, worse, manually editing YAML files on a live cluster with kubectl edit, you are essentially juggling chainsaws. I have seen entire platforms go dark because a senior engineer missed an indentation level in a config map during a 3 AM hotfix. The era of manual intervention is over. It is dangerous, it is un-auditable, and frankly, it is unprofessional.

In the Norwegian tech scene, where Datatilsynet (The Norwegian Data Protection Authority) is tightening the screws on data integrity following the Schrems II ruling, having a traceable, immutable history of your infrastructure is not just a technical preference—it is a compliance necessity. Enter GitOps.

The "Push" vs. "Pull" Architecture

Most teams start with a "Push" model. Your Jenkins or GitLab CI runner builds a Docker image and then runs a script to deploy it to the server. This works fine for a simple WordPress blog on a shared host. But when you are managing microservices across a cluster, it introduces a security flaw: your CI server needs root-level access (cluster-admin) to your production environment. If your CI gets compromised, your entire infrastructure is gone.
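For contrast, a push-style deploy job often looks something like this hypothetical GitLab CI snippet. Note where the danger lives: the runner itself holds the cluster credentials.

# .gitlab-ci.yml (illustrative anti-pattern, not a recommendation)
deploy:
  stage: deploy
  image: bitnami/kubectl:1.24
  script:
    # The kubeconfig comes from a CI variable with cluster-admin rights,
    # which is exactly the credential you cannot afford to leak.
    - kubectl apply -f k8s/
  only:
    - main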

The GitOps "Pull" model flips this. You install an agent (like ArgoCD or Flux) inside your cluster. This agent monitors a Git repository. When it sees a change in the manifest, it pulls the config and applies it. The cluster reaches out; the outside world does not reach in. This drastically reduces your attack surface.

Pro Tip: Network latency kills GitOps sync speeds. If your cluster is in Oslo but your Git repository is hosted in a US-East region, your reconciliation loops can lag. Hosting your Git mirror or utilizing high-performance peering (like NIX) via a provider like CoolVDS ensures that your git fetch operations happen in milliseconds, not seconds.

Tooling Setup: The 2022 Standard

For this architecture, we are using the industry-standard stack available as of mid-2022:

  • Orchestration: Kubernetes v1.24 (The "Stargazer" release)
  • GitOps Controller: ArgoCD v2.4
  • Secret Management: Bitnami Sealed Secrets (because committing raw secrets to Git is a firing offense)
  • Infrastructure: CoolVDS NVMe KVM Instances (Ubuntu 22.04 LTS)

1. The Directory Structure

Structure your Git repository to separate base configurations from environment overlays using Kustomize. This allows you to patch specific values for staging vs. production without duplicating 90% of your YAML.

/deployments
  /base
    deployment.yaml
    service.yaml
    kustomization.yaml
  /overlays
    /staging
      kustomization.yaml
      replica_patch.yaml
    /production
      kustomization.yaml
      resource_limits.yaml
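A minimal sketch of what those kustomization files typically contain, with patch file names matching the tree above:

# base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml

# overlays/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patchesStrategicMerge:
  - resource_limits.yaml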

2. Installing ArgoCD on CoolVDS

First, we need the controller running. On your CoolVDS instance, ensure your Kubeconfig is set, then deploy the manifest.

kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

Once the pods are running, you need to access the UI. While you can port-forward, for a permanent setup, you should configure an Ingress. Note that running the ArgoCD repo-server requires decent memory. I recommend a CoolVDS instance with at least 4GB RAM if you are managing more than 20 applications, as the Redis cache needs breathing room.
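Until the Ingress is in place, the standard first-login flow is a port-forward plus the auto-generated admin password:

# Temporarily expose the ArgoCD UI on localhost:8080
kubectl port-forward svc/argocd-server -n argocd 8080:443

# Retrieve the initial admin password generated at install time
kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath="{.data.password}" | base64 -d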

3. Defining the Application

Instead of clicking through the UI, define your ArgoCD Application as code. This creates the link between your cluster and your Git repo.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: nordic-payment-gateway
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'git@github.com:your-org/infra-repo.git'
    targetRevision: HEAD
    path: deployments/overlays/production
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: payments-prod
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

The selfHeal: true flag is the magic. If someone manually changes a replica count on the cluster, ArgoCD detects the drift and immediately reverts it to the state defined in Git. It is ruthless automated enforcement of your infrastructure policy.
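You can watch this enforcement in action. A minimal demonstration, assuming the Deployment inside the overlay shares the application's name:

# Introduce drift by hand
kubectl -n payments-prod scale deployment nordic-payment-gateway --replicas=10

# Moments later the controller has reverted to the Git-defined count
argocd app get nordic-payment-gateway
kubectl -n payments-prod get deployment nordic-payment-gateway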

Handling Secrets without Leaking Data

You cannot put DB_PASSWORD in a public, or even private, Git repo in plain text. In 2022, the most robust pattern is Sealed Secrets. You encrypt the secret on your laptop with the controller's public key; only the controller (running on your secure CoolVDS node) holds the private key needed to decrypt it.

Here is how you generate a safe secret:

# Create a raw secret (dry-run, do not apply)
kubectl create secret generic db-creds \
  --from-literal=password=SuperSecret123 \
  --dry-run=client -o yaml > secret.yaml
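
If you do not already have pub-cert.pem, fetch it from the controller first (the controller name and namespace below assume a default install; adjust to match yours):

# Fetch the controller's public certificate
kubeseal --fetch-cert \
  --controller-name=sealed-secrets-controller \
  --controller-namespace=kube-system \
  > pub-cert.pem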

# Seal it using the public key fetched from the controller
kubeseal --format=yaml --cert=pub-cert.pem < secret.yaml > sealed-secret.yaml

You can now commit sealed-secret.yaml to GitHub. If anyone steals it, it is useless blob data without the private key residing inside your CoolVDS cluster.
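For reference, what lands in Git looks roughly like this (illustrative, ciphertext truncated):

# sealed-secret.yaml
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: db-creds
  namespace: default
spec:
  encryptedData:
    password: AgBy8jK1x...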

Infrastructure Considerations: Why I/O Matters

A common misconception is that GitOps agents are lightweight. In a small cluster, sure. But in a production environment with hundreds of resources, the reconciliation loop is CPU- and I/O-intensive. The controller is constantly serializing YAML, hashing it, and comparing it against the live cluster state, and every one of those reads and writes ultimately lands on etcd.
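If you want to verify this on your own node, fio can approximate etcd's write-ahead-log pattern (small sequential writes, each flushed with fdatasync). A commonly cited rule of thumb is that the 99th-percentile fdatasync latency should stay below roughly 10 ms; the directory below is just an example path on the volume backing etcd:

# Approximate etcd's WAL workload: sequential 2300-byte writes,
# each followed by fdatasync. Create the test directory first.
fio --name=etcd-wal-test --rw=write --ioengine=sync --fdatasync=1 \
    --directory=/var/lib/etcd-test --size=22m --bs=2300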

Feature                   Standard VPS                       CoolVDS NVMe Instance
Disk I/O                  SATA SSD / shared spinning rust    Dedicated NVMe
Reconciliation latency    High jitter (noisy neighbors)      Consistent low latency
etcd stability            Prone to timeouts under load       High IOPS sustains heavy write loads

I recently migrated a client from a budget host to CoolVDS because their ArgoCD setup kept falling over during high-traffic deployments: the repo-server was getting OOMKilled while etcd timed out, because the underlying storage couldn't keep up with the writes K8s generates during a sync. Moving to an environment with dedicated NVMe storage solved the bottleneck instantly.

The Norwegian Compliance Angle

Running GitOps on servers physically located in Norway (or the EEA) simplifies your GDPR posture. When you use US-based managed Kubernetes services, you often have to navigate complex data transfer agreements. By hosting your Kubernetes nodes on CoolVDS, you ensure that the actual processing and the ephemeral storage of secrets happen within the correct legal jurisdiction. This is critical for complying with the strict interpretation of data residency we are seeing from Norwegian authorities this year.

Conclusion

Moving to GitOps requires a shift in mindset. You lose the ability to "quick fix" things manually, but you gain stability, auditability, and sleep. The tools available in 2022—ArgoCD, Kustomize, and Sealed Secrets—are mature enough for banking-grade deployments.

However, your pipeline is only as reliable as the metal it runs on. Don't let slow I/O kill your reconciliation loops or your deployment speed. Deploy a high-performance KVM instance on CoolVDS today and build a foundation that can actually handle the load.