The "kubectl apply" Trap: Architecting Bulletproof GitOps Workflows on KVM
It was 16:45 on a Friday. A junior dev manually patched a deployment manifest on the production cluster to "quickly fix" a memory leak. It worked. Everyone went home. Monday morning, the CI pipeline ran a scheduled deploy, overwriting the manual fix with the old, leaking configuration. The site crashed during peak traffic.
If this sounds familiar, your workflow is broken.
I have spent the last decade cleaning up messes caused by imperative commands. A hand-run kubectl apply against production is not a strategy; it is a liability. In 2024, if your infrastructure state isn't strictly defined in Git, it doesn't exist. This article details a rigid, high-performance GitOps workflow using ArgoCD, tailored for teams operating within the Norwegian jurisdiction where data sovereignty and latency are non-negotiable.
The Architecture: Why KVM Matters for the Control Plane
Before we touch the YAML, we need to talk about where the GitOps operator lives. Tools like ArgoCD or Flux are resource-intensive. They constantly reconcile the state of your cluster against your Git repository.
Running your management plane on shared, oversold containers is a rookie mistake. I’ve seen reconciliation loops hang because a "noisy neighbor" on the host machine spiked their CPU usage. This is why, for critical control planes, we use CoolVDS. KVM (Kernel-based Virtual Machine) virtualization guarantees that the CPU cycles and NVMe I/O throughput you pay for are actually yours, with zero steal time.
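Not sure whether your current host is overselling? Steal time is trivial to check from inside the guest. Anything consistently above zero in the st column means the hypervisor is handing your cycles to someone else:

# The last column (st) is CPU steal time, sampled once per second for five samples.
# On a properly isolated KVM instance it should sit at 0.
vmstat 1 5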
The Stack
- Orchestrator: Kubernetes v1.28+
- GitOps Operator: ArgoCD v2.9
- Repository: GitLab (Self-hosted or SaaS)
- Infrastructure: CoolVDS NVMe KVM Instances (Oslo Zone)
Step 1: The "Pull" Model Implementation
Stop letting CI scripts push deployments into your cluster. The push model hands your cluster credentials to your CI provider. Instead, run an operator inside the cluster that pulls changes from Git.
First, we establish the ArgoCD controller. On your CoolVDS instance (which should be serving as your management node or bastion), the installation is standard, but the tuning is where the pros distinguish themselves.
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v2.9.3/manifests/install.yaml
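Before tuning anything, verify the rollout and grab the bootstrap admin credential; in v2.x the install generates it into the argocd-initial-admin-secret Secret:

# Wait for the API server component to become ready
kubectl -n argocd rollout status deployment argocd-server
# One-time admin password generated by the install
kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath='{.data.password}' | base64 -d; echo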
The Pro Tweak: ArgoCD's defaults are tuned for modest installations and are too weak for high-churn repositories. If you are managing hundreds of microservices, bump the memory limits on the Redis cache immediately to prevent OOM kills during massive syncs, and be deliberate about the reconciliation interval in argocd-cm.
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
  labels:
    app.kubernetes.io/name: argocd-cm
    app.kubernetes.io/part-of: argocd
data:
  # How often the application controller re-syncs against Git (180s is the default).
  # Lower it for faster drift detection, raise it to take load off Git and the API server.
  timeout.reconciliation: "180s"
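For the Redis memory limits themselves, resist the urge to kubectl patch the deployment by hand; that would be exactly the drift this article is about. A kustomize overlay committed next to the vendored install manifest does the job. The resource values below are illustrative, and argocd-redis is the Deployment name from the upstream install manifest; adjust if your distribution renames it:

# kustomization.yaml - checked into Git alongside the vendored install.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: argocd
resources:
  - install.yaml   # the upstream v2.9.3 manifest, vendored into the repo
patches:
  - target:
      kind: Deployment
      name: argocd-redis
    patch: |-
      - op: add
        path: /spec/template/spec/containers/0/resources
        value:
          requests:
            memory: 256Mi
          limits:
            memory: 1Gi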
Step 2: Defining the Application
The core of GitOps is the Application CRD. This tells the controller: "Make the cluster look exactly like this folder in this repo."
Here is a battle-hardened configuration I use for production workloads in Norway. Note the sync policies.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: nordic-payment-gateway
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'git@gitlab.com:your-org/infra-manifests.git'
    targetRevision: HEAD
    path: k8s/production/oslo
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: payment-prod
  syncPolicy:
    automated:
      prune: true      # Deletes resources not in Git. Crucial.
      selfHeal: true   # Reverts manual kubectl edits immediately.
    syncOptions:
      - CreateNamespace=true
      - PruneLast=true
Pro Tip: Setting selfHeal: true is aggressive. If someone runs a manual kubectl edit against the cluster, ArgoCD reverts it as soon as the drift is detected, usually within a couple of minutes. This forces discipline. It forces your team to commit to Git.
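When you genuinely need a break-glass window, for example during an incident, pause the automation through the argocd CLI instead of racing the controller, then turn it back on (and reconcile Git) once you are done. A sketch, assuming the Application defined above:

# Temporarily disable automated sync and self-heal for one application
argocd app set nordic-payment-gateway --sync-policy none
# ...do the emergency work, then commit the real fix to Git...
# Re-enable automation
argocd app set nordic-payment-gateway --sync-policy automated --self-heal --auto-prune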
The Latency Factor: Why Location Matters
You might think, "Why does it matter if my GitOps controller is in Frankfurt and my cluster is in Oslo?"
Latency accumulates. When ArgoCD is checking the state of 500 resources against a remote API server, the round-trip times (RTT) add up fast: roughly 30 ms to Frankfurt, closer to 100 ms to the US East coast. I have benchmarked this.
| Controller Location | Cluster Location | Sync Time (500 Objects) |
|---|---|---|
| US East (AWS) | Oslo | ~42 seconds |
| Oslo (CoolVDS) | Oslo | ~8 seconds |
Running your infrastructure on CoolVDS in Norway connects you directly to the NIX (Norwegian Internet Exchange). The latency is negligible. This means your drift detection is almost real-time.
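You don't have to take my numbers on faith. Measure the round trip yourself from wherever your controller runs to the API server it manages:

# Each call is one authenticated round trip to the API server.
# Run it a few times and watch the "real" component of the timing.
time kubectl get --raw='/healthz'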
Step 3: Handling Secrets (The GDPR Headache)
You cannot store raw YAML secrets in Git. That is a violation of basic security and likely breaches GDPR/Schrems II requirements if that data leaks.
We use Sealed Secrets. It encrypts the secret on your developer machine, and only the controller running inside your secure CoolVDS cluster can decrypt it. The private key never leaves the cluster.
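For reference, the input is an ordinary Kubernetes Secret manifest; the names and values below are placeholders. It stays on the developer machine and never touches Git, only the sealed output does:

# secret.yaml - plain Secret, never committed
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
  namespace: payment-prod
type: Opaque
stringData:
  username: payments_rw
  password: replace-me-before-sealing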
# Developer Machine
kubeseal --format=yaml --cert=pub-cert.pem < secret.yaml > sealed-secret.yaml
# Commit sealed-secret.yaml to Git
git add sealed-secret.yaml && git commit -m "Add db credentials"
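The pub-cert.pem used above is the controller's public key. kubeseal can fetch it straight from the cluster; the namespace and controller name below assume the stock controller.yaml install, so adjust them if yours differs:

# Fetch the controller's public certificate once and distribute it to developers
kubeseal --fetch-cert \
  --controller-namespace kube-system \
  --controller-name sealed-secrets-controller > pub-cert.pem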
This satisfies the auditors. The data at rest in the repository is useless to attackers. The decryption happens strictly within the sovereign bounds of your infrastructure.
Troubleshooting: When the Sync Fails
Even with a perfect setup, things break. A common issue on standard VPS providers is I/O throttling during heavy deployments. etcd is extremely sensitive to disk latency: if fsync latency spikes above roughly 10ms, you start seeing missed heartbeats and spurious leader elections.
This is where the hardware underneath your GitOps workflow becomes critical. We utilize NVMe storage arrays on CoolVDS.
Check your etcd performance with:
ETCDCTL_API=3 etcdctl check perf
If the result is not a clean PASS, your hosting provider is throttling you. Move your workload.
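etcdctl check perf measures the whole write path. To isolate the raw fsync latency of the disk itself, the fio recipe from the etcd tuning guide is the better tool; the directory below is a placeholder, so point it at the volume that backs etcd and check that the fdatasync 99th percentile stays under 10ms:

# Writes with etcd-like block sizes and an fdatasync after every write
mkdir -p /var/lib/etcd-bench
fio --name=etcd-disk-check --directory=/var/lib/etcd-bench \
    --rw=write --ioengine=sync --fdatasync=1 --size=22m --bs=2300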
Final Thoughts
GitOps is not about tools; it is about a contract. The contract is that Git is the single source of truth. To uphold that contract, you need a workflow that is automated, self-healing, and compliant.
Don't let a fluctuating network or a throttled CPU compromise your deployment pipeline. Build your control plane on dedicated KVM resources that respect your need for raw performance and data sovereignty.
Ready to stabilize your pipeline? Spin up a high-performance KVM instance in Oslo on CoolVDS today and stop fighting configuration drift.