GitOps Architectures in 2024: Stop kubectl apply Before You Break Production
If you are still SSH-ing into your jump host to run kubectl apply -f deployment.yaml, you are not managing infrastructure; you are gambling with uptime. I have seen entire clusters in Oslo desynchronize because one senior engineer made a "quick hotfix" at 2 AM and forgot to commit the change to the repo. Two weeks later, the CI pipeline overwrote the fix, the database connection pool drained, and we spent four hours parsing logs to understand why the application had reverted to a version from last month. That is the reality of non-declarative operations. In 2024, with the maturity of tools like ArgoCD and Flux v2, there is zero excuse for manual intervention in your delivery pipeline.
GitOps is not just a buzzword; it is the absolute standard for maintaining sanity in distributed systems, especially when you are dealing with strict data sovereignty requirements here in Norway. When Datatilsynet comes knocking for an audit, showing them a clean Git commit log that correlates 1:1 with your running infrastructure is infinitely better than shrugging and pointing at a terrifying bash history. This guide covers the rigorous workflow patterns we rely on, optimized for high-performance environments like CoolVDS NVMe instances where I/O latency is too low to blame for your slow deployments.
The Core Principle: CI is for Integration, CD is for Reconciliation
A common failure mode I see in junior DevOps teams is overloading the CI pipeline. Jenkins or GitHub Actions should not have direct access to your Kubernetes API server. That is a security nightmare. If your CI runner gets compromised, your entire production cluster is exposed. The "Push" model is dead. We use the "Pull" model.
In a proper GitOps setup, your CI pipeline does exactly two things: it builds the container image, and it updates a manifest in a Git repository. That's it. The cluster controller (ArgoCD) inside your secure environment then pulls that change. This architectural split ensures that your credentials for the cluster never leave the cluster. It also significantly reduces the latency overhead when your infrastructure is hosted locally in Norway, leveraging the NIX (Norwegian Internet Exchange) for rapid image pulls, rather than routing traffic through a US-East load balancer.
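To make the pull model concrete, here is a minimal Application manifest of the kind that lives in the argocd-apps/ folder shown later in this guide. The repository URL, paths, and names are placeholders for illustration; the point is that the controller inside the cluster watches the repo and reconciles, and CI never touches the API server.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app-production
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/your-org/infra-repo.git   # placeholder repo
    targetRevision: HEAD
    path: apps/overlays/production
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated: {}   # the in-cluster controller pulls changes; no external push access required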
The Directory Structure That Scales
Do not dump everything into the root of your repository. I recommend a structure that separates base configuration from environment overlays using Kustomize. This keeps your DRY (Don't Repeat Yourself) principles intact while allowing specific overrides for your staging environment versus your production nodes running on high-performance VPS Norway instances.
Here is the directory structure we deploy for high-availability microservices:
├── apps/
│   ├── base/
│   │   ├── deployment.yaml
│   │   ├── service.yaml
│   │   └── kustomization.yaml
│   └── overlays/
│       ├── staging/
│       │   ├── kustomization.yaml
│       │   └── patch-replicas.yaml
│       └── production/
│           ├── kustomization.yaml
│           └── patch-resources.yaml
├── cluster-config/
│   ├── namespaces.yaml
│   └── quotas.yaml
└── argocd-apps/
    └── production-app.yaml
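The overlay kustomization.yaml files stay thin. A sketch of what apps/overlays/production/kustomization.yaml might contain; the patch file name matches the tree above, everything else is standard Kustomize boilerplate:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base                     # pull in the shared deployment and service
patches:
  - path: patch-resources.yaml     # production-only overrides, e.g. CPU and memory requests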
Implementing the ArgoCD ApplicationSet
Managing individual Application manifests for every microservice is tedious. In 2024, if you aren't using the ApplicationSet controller, you are wasting time. This allows you to automatically generate Argo applications based on the folder structure of your Git repo. It detects new microservices automatically.
Here is a production-ready ApplicationSet configuration. Note the use of the go-template generator which allows for dynamic pathing. This config assumes you are running on a robust control plane. We run this on CoolVDS instances because the etcd I/O requirements for ArgoCD's constant reconciliation loops can crush standard SATA SSDs. With NVMe, the reconciliation is near-instant.
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: microservices-set
  namespace: argocd
spec:
  goTemplate: true
  goTemplateOptions: ["missingkey=error"]
  generators:
    - git:
        repoURL: https://github.com/your-org/infra-repo.git
        revision: HEAD
        directories:
          - path: apps/overlays/*
  template:
    metadata:
      name: '{{.path.basename}}'
    spec:
      project: default
      source:
        repoURL: https://github.com/your-org/infra-repo.git
        targetRevision: HEAD
        path: '{{.path.path}}'
      destination:
        server: https://kubernetes.default.svc
        namespace: '{{.path.basename}}'
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
        syncOptions:
          - CreateNamespace=true
Pro Tip: Always enable `prune: true` and `selfHeal: true`. Without these, manual changes made to the cluster (drift) will persist, defeating the entire purpose of GitOps. If someone changes a Service type from ClusterIP to NodePort manually, Argo should immediately revert it. Ruthless consistency is the goal.
Handling Secrets Without Exposing Them
You cannot commit raw secrets to Git. That is Security 101. In the Norwegian context, leaking customer data or encryption keys is a direct violation of GDPR Article 32. We rely on the External Secrets Operator (ESO). Unlike SealedSecrets, which can be cumbersome to rotate, ESO fetches secrets from a secure vault (like HashiCorp Vault or Cloud Secret Managers) and injects them as Kubernetes Secrets at runtime.
However, running HashiCorp Vault requires significant resources. It is memory-hungry. This is where the underlying hardware matters. On a CoolVDS KVM slice, we allocate dedicated RAM to the Vault container, ensuring no swap usage (memory paged out to swap can leave secret material on disk). Here is how you define a `SecretStore` to fetch keys securely:
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: vault-backend
  namespace: security
spec:
  provider:
    vault:
      server: "http://vault.security.svc.cluster.local:8200"
      path: "secret"
      version: "v2"
      auth:
        kubernetes:
          mountPath: "kubernetes"
          role: "app-role"
          serviceAccountRef:
            name: "external-secrets-sa"
The CI/CD Handoff: Updating the Image Tag
So, how does the new code actually get to the cluster? Your CI pipeline needs to commit back to the repo. We use `kustomize edit set image` within the pipeline. This approach is cleaner than using sed or envsubst, which are prone to syntax errors.
In your GitHub Actions workflow (or GitLab CI), the step looks like this:
cd apps/overlays/production
kustomize edit set image my-app=registry.coolvds.com/my-app:${{ github.sha }}
# A fresh CI checkout has no committer identity; the e-mail below is a placeholder
git config user.name "CI Bot"
git config user.email "ci-bot@example.com"
git commit -am "Update image to ${{ github.sha }}"
git push
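Wrapped into a full job, the same steps might look like the sketch below. The job name, bot identity, and registry are assumptions, and it presumes kustomize is available on the runner; the details that usually trip people up are the checkout step and the contents: write permission, without which the push back to the repo is rejected.

jobs:
  update-manifest:
    runs-on: ubuntu-latest
    permissions:
      contents: write                   # required so the job can push the manifest bump
    steps:
      - uses: actions/checkout@v4
      - name: Bump image tag in the production overlay
        run: |
          cd apps/overlays/production
          kustomize edit set image my-app=registry.coolvds.com/my-app:${{ github.sha }}
      - name: Commit and push the manifest change
        run: |
          git config user.name "CI Bot"
          git config user.email "ci-bot@example.com"   # placeholder identity
          git commit -am "Update image to ${{ github.sha }}"
          git push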
This creates an audit trail. You can see exactly which commit triggered the deployment of which image hash. If you need to roll back, you simply `git revert` that specific commit. ArgoCD detects the change and rolls the cluster back to the previous state. No panic, no manual kubectl wizardry.
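The rollback itself is two commands; the commit hash and branch name here are hypothetical:

git revert 3f2d1ab      # the commit that bumped the image tag
git push origin main    # ArgoCD picks up the revert and syncs the previous state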
Performance Tuning for High-Frequency Syncs
When you have hundreds of applications, the default ArgoCD settings are too conservative. It polls Git every 3 minutes. In a high-velocity DevOps team, waiting 3 minutes is an eternity. You want webhooks to trigger instant refreshes. However, if you rely on polling, you need to tune the `timeout.reconciliation` settings in the `argocd-cm` ConfigMap.
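If you do wire up webhooks, the Git provider POSTs to ArgoCD's /api/webhook endpoint (for example https://argocd.example.com/api/webhook), and the shared secret is read from argocd-secret. A minimal sketch for GitHub; the secret value is a placeholder you generate yourself:

apiVersion: v1
kind: Secret
metadata:
  name: argocd-secret      # merge this key into the existing argocd-secret rather than replacing it
  namespace: argocd
stringData:
  webhook.github.secret: "replace-with-a-long-random-string"   # must match the secret set on the GitHub webhook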
Be warned: aggressive polling increases CPU load and I/O on the controller node. We've seen "cheap" VPS providers throttle CPU (steal time) during these bursts, causing the sync to hang. This is why we insist on CoolVDS for the control plane. The dedicated CPU threads ensure that when 50 microservices need to sync simultaneously, the reconciliation loop finishes in milliseconds, not minutes.
Here is the `argocd-cm` tuning for a tighter polling interval; the processor counts are covered just after it:
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
  labels:
    app.kubernetes.io/part-of: argocd
data:
  timeout.reconciliation: "60s"   # how often the controller re-checks Git when no webhook fires
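The status and operation processor counts are not read from `argocd-cm`; the application controller picks them up from `argocd-cmd-params-cm`. Here is the equivalent tuning with the same values as above; adjust them to your application count, and note that the controller needs a restart to pick up changes to this ConfigMap.

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cmd-params-cm
  namespace: argocd
  labels:
    app.kubernetes.io/part-of: argocd
data:
  controller.status.processors: "20"      # parallel workers computing application status
  controller.operation.processors: "10"   # parallel workers executing sync operations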
Data Sovereignty and The Norwegian Advantage
Why go through all this trouble with self-hosted GitOps instead of using a managed service? Schrems II. If you are handling Norwegian citizen data, relying on a US-controlled control plane adds legal friction. By hosting your GitOps toolchain (GitLab + ArgoCD) on servers physically located in Oslo (like CoolVDS), and ensuring your backups stay within the EEA, you simplify your GDPR compliance posture significantly. You own the pipe, you own the code, you own the data.
GitOps is the only sane way to manage modern infrastructure. But a robust workflow requires robust hardware. You can have the most beautiful YAML in the world, but if your etcd latency spikes, your cluster stability dissolves. Don't let slow I/O kill your developer experience.
Ready to stabilize your production environment? Deploy a high-performance GitOps control plane on a CoolVDS NVMe instance today and stop fearing Friday deployments.