GitOps Architecture: From Theory to High-Performance Reality
If you are still SSHing into your production servers to run kubectl apply -f, you aren't managing infrastructure; you are playing Russian Roulette with your uptime. In the last decade of systems administration, I have seen more outages caused by "quick hotfixes" and configuration drift than by hardware failures. In 2025, with the complexity of microservices and the strictness of GDPR enforcement here in Europe, manual operations are a liability you cannot afford.
This guide isn't about the philosophy of GitOps. It is about the battle-tested implementation of a pull-based workflow that keeps your state consistent, your audit trails clean for Datatilsynet, and your weekends free.
The Core Problem: Configuration Drift
The moment a developer tweaks a resource limit manually to "fix a slow endpoint," your cluster's reality diverges from your Git repository. This drift accumulates. The next time the original manifest is re-applied, or the workload is rebuilt from the stale definition in Git, that manual fix silently vanishes and the outage returns. GitOps solves this by making the Git repository the single source of truth.
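To make the drift concrete, here is the kind of "quick fix" that causes it, and how to spot the divergence. This is a minimal sketch assuming the payments namespace and payment-service deployment used later in this guide:

# A well-intentioned hotfix, applied straight to the cluster and recorded nowhere in Git
kubectl -n payments set resources deployment/payment-service \
  --limits=cpu=2,memory=1Gi

# The live state now disagrees with the Kustomize overlay checked into Git
kubectl diff -k overlays/production-oslo/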
Pro Tip: Ensure your hosting environment supports high I/O throughput for the etcd datastore. GitOps operators like ArgoCD constantly query the API server, which in turn hits etcd. On standard SATA SSDs, this can lead to latency spikes. We rely on CoolVDS NVMe instances because the random read/write speeds prevent the control plane from choking during high-frequency syncs.
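If you want to sanity-check a node's disk before trusting it with etcd, the fio fsync test popularised by the etcd community gives a quick read on write latency. A rough sketch; the directory and sizes below are only examples:

# etcd is extremely sensitive to fdatasync latency; upstream guidance is to keep
# the 99th percentile well under ~10ms.
mkdir -p /var/lib/etcd-bench
fio --name=etcd-fsync --directory=/var/lib/etcd-bench \
    --rw=write --ioengine=sync --fdatasync=1 --size=22m --bs=2300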
The Stack: Tooling for 2025
For a robust Norwegian stack, we are looking at specific versions stable as of mid-2025:
- Orchestrator: Kubernetes v1.31 (Stable)
- GitOps Operator: ArgoCD v2.12+
- CI Pipeline: GitLab CI (Self-hosted or SaaS)
- OS: Ubuntu 24.04 LTS (Noble Numbat)
Step 1: The Repository Structure
Do not mix your application source code with your infrastructure manifests. Separate them. If you combine them, a CI run for a frontend CSS change can trigger an unnecessary refresh and sync cycle on your cluster, and your deployment history gets buried in application commits.
Recommended Structure:
- /app-repo: Source code + Dockerfile + Helm chart (generic)
- /infra-repo: Kustomize overlays + ArgoCD Application definitions
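In practice, that separation might look like the layout below. The folder names are illustrative; only the overlays/production-oslo path is referenced later in this guide:

infra-repo/
├── base/
│   ├── deployment.yaml
│   ├── service.yaml
│   └── kustomization.yaml
└── overlays/
    ├── staging-oslo/
    │   └── kustomization.yaml
    └── production-oslo/
        └── kustomization.yaml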
Step 2: The Pull-Based Workflow (ArgoCD)
We prefer the Pull model over the Push model. In a Push model (standard CI), your CI runner needs root access to your cluster (KUBECONFIG credentials). This is a massive security risk. If your CI gets breached, your production environment is compromised.
In the Pull model, the cluster (via ArgoCD) reaches out to the Git repo. The cluster has the keys to read the repo, but the CI system has zero keys to the cluster. This is crucial for compliance with strict European security standards.
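With ArgoCD, that read-only access can be wired up declaratively: a Secret labelled as a repository tells ArgoCD how to reach the infra-repo. A sketch, assuming a read-only SSH deploy key (the Secret name is arbitrary):

apiVersion: v1
kind: Secret
metadata:
  name: infra-repo-creds
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository  # marks this Secret as a repo definition
stringData:
  type: git
  url: git@gitlab.com:your-org/infra-repo.git
  sshPrivateKey: |
    -----BEGIN OPENSSH PRIVATE KEY-----
    ...read-only deploy key material goes here...
    -----END OPENSSH PRIVATE KEY-----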
Deploying ArgoCD on CoolVDS
First, establish a namespace and apply the manifest. Note that we are using the high-availability (HA) installation manifest, which is what you want for any production cluster.
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/ha/install.yaml
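Before defining any Applications, confirm the installation is healthy and fetch the generated admin credentials (the secret name below is the ArgoCD default; rotate the password after first login):

# Wait for the API server component to become available
kubectl -n argocd rollout status deployment/argocd-server

# Read the auto-generated admin password
kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath='{.data.password}' | base64 -d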
Defining the Application
Here is where the magic happens. We define an Application CRD that tells ArgoCD: "Make the cluster look exactly like this folder in this Git repo."
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: payment-service-oslo
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'git@gitlab.com:your-org/infra-repo.git'
    targetRevision: HEAD
    path: overlays/production-oslo
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: payments
  syncPolicy:
    automated:
      prune: true      # Deletes resources not in Git
      selfHeal: true   # Reverts manual changes automatically
    syncOptions:
      - CreateNamespace=true
The selfHeal: true flag is the enforcer. If someone manually changes a service port on the cluster, ArgoCD detects the drift and immediately reverts it to the state defined in Git. This provides the immutable infrastructure required for high-security environments.
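For reference, the overlays/production-oslo path that the Application points at is an ordinary Kustomize overlay. A minimal, illustrative kustomization.yaml, where the base folder, image name, and tag are assumptions matching the examples in this guide:

# overlays/production-oslo/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: payments
resources:
  - ../../base
images:
  - name: your-registry/payment-service
    newTag: "4f2a91c"  # bumped by CI in Step 4 via `kustomize edit set image`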
Step 3: Optimizing for Latency and Performance
Running GitOps workflows generates significant network traffic and disk I/O. The operator constantly compares the live cluster state (backed by etcd) against the desired state in Git. If your Git provider is external (like GitHub) and your servers are in Oslo, latency is usually negligible. However, if you self-host GitLab, you must ensure low latency between your repository server and your Kubernetes cluster.
At CoolVDS, our datacenter is connected directly to the NIX (Norwegian Internet Exchange). This keeps internal traffic within the country, reducing hops and ensuring your sync loops resolve in milliseconds, not seconds.
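By default ArgoCD polls the repository roughly every three minutes. If that feels sluggish (or too chatty for a self-hosted GitLab), the interval can be tuned via the timeout.reconciliation key in the argocd-cm ConfigMap. A hedged sketch; pick an interval that suits your repo server:

# Poll Git every 60 seconds instead of the default 180s
kubectl -n argocd patch configmap argocd-cm \
  --type merge -p '{"data":{"timeout.reconciliation":"60s"}}'

# Restart the components that cache the setting so the new interval is picked up
kubectl -n argocd rollout restart deployment/argocd-repo-server
kubectl -n argocd rollout restart statefulset/argocd-application-controller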
Tuning Kubelet for High Performance
For the nodes running your workloads, default configurations reserve nothing for the OS and the kubelet itself, so system daemons and application pods end up competing for the same CPU and memory. On your CoolVDS node, modify the kubelet config to reserve compute for system components and set explicit eviction thresholds, preventing noisy workloads from starving the kubelet and the OS.
# /var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"
kubeReserved:
  cpu: "500m"
  memory: "500Mi"
systemReserved:
  cpu: "500m"
  memory: "500Mi"
After editing, restart the kubelet:
sudo systemctl daemon-reload && sudo systemctl restart kubelet
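To verify the reservations took effect, check the node's allocatable resources; Allocatable should now be roughly Capacity minus the kubeReserved and systemReserved values (plus the eviction threshold for memory):

# Replace <node-name> with a name from `kubectl get nodes`
kubectl describe node <node-name> | grep -A 7 "Allocatable"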
Step 4: The CI/CD Bridge
Your CI pipeline should only do three things: Test code, build the container, and update the Kubernetes manifest version tag. It should never touch the cluster.
Here is a refined GitLab CI job that updates the version tag in the infrastructure repository using kustomize. This commits the change to Git, which ArgoCD then picks up.
update_manifest:
  stage: deploy
  image: line/kubectl-kustomize:latest
  script:
    # Set the bot identity globally, since we are not inside a repository yet
    - git config --global user.email "ci-bot@coolvds.com"
    - git config --global user.name "CI Bot"
    - git clone https://oauth2:${CI_TOKEN}@gitlab.com/your-org/infra-repo.git
    - cd infra-repo/overlays/production-oslo
    - kustomize edit set image your-registry/payment-service:${CI_COMMIT_SHA}
    - git add kustomization.yaml
    - git commit -m "Bump payment-service to ${CI_COMMIT_SHA}"
    - git push origin main
  only:
    - main
Security & Compliance in Norway
Norwegian businesses face stringent data residency requirements. When you use a GitOps workflow, your Git commit log becomes your audit log. You can prove exactly who changed the infrastructure, when, and why. This is often sufficient to satisfy auditors regarding change management controls.
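Pulling that evidence together for an auditor is a one-liner against the infra-repo, for example:

# Who changed the production overlay, when, and with what justification
git log --pretty=format:'%h %an %ad %s' --date=iso-strict -- overlays/production-oslo/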
However, the underlying data must stay secure. Relying on hyperscalers often involves complex legal frameworks regarding data transfer to the US (Schrems II). Using a local provider like CoolVDS ensures that your persistent volumes (PVs) and database storage remain physically located on Norwegian soil, simplifying GDPR compliance significantly.
Comparison: Push vs. Pull
| Feature | Push (Jenkins/GitLab CI direct) | Pull (ArgoCD/Flux) |
|---|---|---|
| Security | Low (CI needs cluster admin keys) | High (Cluster credentials stay inside cluster) |
| Drift Detection | None (One-time apply) | Continuous (Auto-correction) |
| Disaster Recovery | Manual redeploy required | Automatic (Point to new cluster) |
Conclusion
GitOps is not just a trend; it is the standard for reliable systems engineering in 2025. It enforces discipline, secures your credentials, and creates a transparent audit trail. But software is only as good as the hardware it runs on. A sync loop that hangs due to I/O wait times or network packet loss undermines the entire architecture.
For your next Kubernetes deployment, ensure your foundation is solid. Don't let slow I/O kill your SEO or your sync loops. Deploy a test instance on CoolVDS in 55 seconds and experience the difference of local, high-performance NVMe infrastructure.